Re: New 7400 just for ESX cluster, CPG ideas
Posted: Mon May 19, 2014 8:45 am
by spencer.ryan
We've got a mix of SSD, FC and NL.
99% of the storage presented to the VMware cluster is in the FC tier, with AO set to Performance.
I only have about 2 TB of storage presented out of NL for really cold archive stuff. Everything else goes in the middle and I let AO figure it out.
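In case a concrete example helps, the rough CLI shape of that layout looks something like this. The CPG and AO config names are placeholders, and the syntax is from memory of the 3.1.2-era CLI, so verify with clihelp createcpg / clihelp createaocfg before running anything:

    createcpg -t r1 -p -devtype SSD CPG_SSD       # mirrored SSD tier 0
    createcpg -t r5 -ssz 4 -p -devtype FC CPG_FC  # RAID 5 (3+1) FC tier 1
    createcpg -t r6 -p -devtype NL CPG_NL         # RAID 6 NL tier 2
    createaocfg -t0cpg CPG_SSD -t1cpg CPG_FC -t2cpg CPG_NL -mode Performance AO_PROD

Then export your VVs out of CPG_FC and let AO promote/demote regions from there.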
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Mon May 19, 2014 10:24 am
by Richard Siemers
Thanks Ryan,
What % of your SSD is utilized? I would assume that with the Performance policy set, AO would use all of what you allow?
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Tue May 20, 2014 8:17 am
by spencer.ryan
We have no warning/allocation limits set and the SSDs seem to hover around 90%.
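If anyone wants to watch that themselves, the chunklet and CPG views are the quickest way to see it. The -p -devtype filter works on showpd as far as I recall, but check clihelp showpd on your OS level:

    showpd -c -p -devtype SSD   # used/spare/free chunklets per SSD
    showcpg                     # user/snap space consumed per CPG
    showspace -cpg CPG_SSD      # estimated free space left in the SSD CPG (placeholder name)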
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Wed May 21, 2014 6:52 am
by Cleanur
The Hitachi SSD drives are not exactly high endurance compared to an SLC, in that they are only rated for 2 drive writes per day over a five year period. Then again, that is roughly 2TB of data written every day on each disk, so whether it becomes an issue depends on your workload.
The 480/920GB MLC drives from HP now come with a 5 year warranty at no additional cost. Dial-home data from the installed base in the field, combined with new SSD-specific optimizations, is what enables that level of confidence. AFAIK none of the other major vendors offer this level of warranty on MLC based drives.
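To put rough numbers on the rating: 2 drive writes per day on a 920GB drive is about 1.8TB/day, and since 3PAR wide-stripes across the whole tier, sixteen of them could in principle absorb on the order of 29TB of host writes per day and stay inside the rating. Back-of-envelope only; write amplification and RAID overhead will eat into that.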
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Wed May 21, 2014 2:17 pm
by Schmoog
FWIW: level of confidence in the drive's duty cycle notwithstanding, isn't the performance of MLC significantly lower than SLC?
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Wed May 21, 2014 7:00 pm
by spencer.ryan
For writes yes; for reads not so much, and the cache on the controllers largely eliminates any difference you'd see between MLC and SLC.
The disks themselves keep track of total bytes written and will stop at a preset limit. HP will replace drives for free if you run them to 0% while the unit is under support.
We've had our 7400 in production for about 3 months and all the SSDs show 100% life remaining.
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Wed May 21, 2014 8:04 pm
by Schmoog
What's the difference, IOPS-wise, between the two?
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Thu May 22, 2014 10:52 am
by corge
With AO, isn't that really the only way to do it, i.e. the way you described in your first post? Without something like a 7450 to handle strictly SSD writes, there isn't a way that I know of in the 3PAR system to have everything land in SSD first from cache and then stay there until it cools off enough to move down to spinning disk.
We ran into this issue and decided to go with another vendor for some VDI workloads. Quite frankly, it was much cheaper and easier to let that box send all incoming reads/writes directly to SSD and then demote to cold storage if ever needed. It's all de-duped and compressed inline, so there isn't much need to touch cold storage at all.
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Thu May 22, 2014 11:09 am
by BryanW
I suppose you could create both tier0 and tier1 as SSD tiers, then tier2 as FC, if you want all writes to go to SSD, but you would be making the regionmover work awfully hard. I think the 3PAR-friendly fix for what you are trying to solve is to run AO more often, especially when you are adding or expanding LUNs.
While I understand the philosophical issue with writing to SAS when you paid crazy margin for SSD, I have found AO works pretty well even in our I/O-intensive MSSQL analytics environment with the initial writes going to FC. The controller cache seems to pick up the slack where needed, then AO copies the hot bits over without much fanfare.
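If you want to run it more often, scheduling it is easy enough. Something like this has the right shape, though the flags and argument order are from memory, so confirm with clihelp createsched and clihelp startao first:

    createsched "startao -btsecs -3600 AO_PROD" "0 * * * *" ao_hourly

If I'm remembering -btsecs right, the -3600 restricts the analysis window to the last hour of region stats, which keeps each run light. AO_PROD and ao_hourly are placeholder names.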
Bryan @ Match.com
Re: New 7400 just for ESX cluster, CPG ideas
Posted: Thu May 22, 2014 4:22 pm
by Cleanur
On paper there's a big difference, but the real-world usable IOPS difference between the two is minimal, especially with the help of write-back cache, coalescing and SSD-optimized writes. A couple of things to keep in mind though: the smaller the usable capacity, the harder it is to drive the max I/O. SLCs have lower capacity, so even though they're quicker it's more difficult to hit those max performance numbers. Also, 3PAR wide-stripes, so it's typically cheaper to put a larger quantity of MLCs in a box than SLCs; that, on top of the higher capacity, means better overall performance and potentially more free blocks to handle wear leveling and increase overall endurance. In addition, 3PAR does some clever stuff around adaptive sparing, which makes more usable capacity available per MLC than other vendors get from identical drives.
Having a very fast top tier for ingest is good as long as that tier is big enough / fast enough not to be overwhelmed by incoming writes (MB/s). If it is overwhelmed then the system will tank, so there are some advantages to landing in the middle tier. From a VDI perspective there are many ways to cut the solution, and the ISVs introduce new offload features that move the goal posts with every release. Some of the niche solutions are focused very specifically on VDI and do handle its rather unique requirements really well. In plenty of recent cases I've seen customers serve the I/O-intensive stuff directly from an internal server accelerator and save big on backend storage, but VDI is a pretty special case and IMHO should really be handled as such if you want things to scale.
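A quick sanity check on "big enough / fast enough": if each MLC sustains somewhere around 200-300 MB/s of writes, eight of them behind RAID 1 give you very roughly 1GB/s of ingest headroom after mirroring. If your sustained bursts exceed that, the top tier backs up and everything behind it suffers. Illustrative numbers only, but it's worth doing that math before committing to an SSD-first landing zone.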