esxi 5.1 or 5.5 queue depth settings
Posted: Mon Jun 23, 2014 12:23 pm
by mujzeptu
I have a 3PAR V400 array running with ESXi 5.1 hosts, soon to be upgraded to 5.5. I am curious what queue depth settings you run and why. I currently have AQLEN for the adapters at 2176, and DQLEN for the devices set at 64.
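If it helps anyone compare, both values can be checked from the ESXi shell on 5.1/5.5 (the exact output fields may differ a bit by build):

esxtop
# press "d" for the adapter view to see AQLEN, or "u" for the device view to see DQLEN

esxcli storage core device list
# look for "Device Max Queue Depth" under each naa.xxx entry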
Re: esxi 5.1 or 5.5 queue depth settings
Posted: Tue Jun 24, 2014 5:28 am
by hdtvguy
We have experimented with a few different settings and basically found not much difference for normal workloads. We recently had SQL performance issues and, to prove the HBA was not the bottleneck, raised DQLEN to 256; performance did not move. Prior to 5.5 the VMware Disk.SchedNumReqOutstanding setting has an impact if more than one VM shares a datastore/volume, but evidently that host-wide setting is going away in 5.5 (it becomes a per-device setting instead). Prior to 5.5 I believe the default value was 32, which means that if more than one VM is on a datastore, each VM is throttled to a queue depth of 32 to prevent one VM from overrunning the HBA. In our environment we have not seen any impact from changing any of these settings, since none of our systems drive enough IO to overwhelm the HBA or the array.
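For anyone wanting to try the same experiments, these are roughly the ESXi shell commands involved (the naa ID and the driver module/parameter names are just examples, QLogic shown here; they vary by HBA driver and version, and module parameter changes need a reboot):

# 5.1: Disk.SchedNumReqOutstanding is still a host-wide advanced option
esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 64

# 5.5: the same throttle is set per device instead
esxcli storage core device set -d naa.60002ac000000000000000000000abcd -O 64

# DQLEN itself comes from the HBA driver's queue depth parameter,
# e.g. for a QLogic HBA
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=64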
There are numerous VMware blogs and articles that talk about tweaking HBAs, but I have not had much time to read the latest info on 5.5 since it changes things yet again.
Some optimizations we do across the board to help all VMs:
- we use the vmxnet3 NIC driver on all guests that support it without issues, usually Linux and Windows 2008 R2+. Just switching to vmxnet3 by itself provides very little improvement if you do not also go into the guest and turn on TCP offload and the like; with that done we have seen huge gains on network-heavy apps.
- we use paravirtual SCSI adapters on ALL guests, Win 2003+ and Linux. For additional optimization on SQL boxes we mount individual vmdks for DATA, LOG and TEMPDB, each on its own logical SCSI adapter. To me this is more important than tweaking HBA settings, as you are letting the guest OS better manage its IO (a quick way to confirm a guest is actually using PVSCSI and vmxnet3 is shown after this list)
- we moved away from RDMs, since with the paravirtual SCSI adapter and the tuning now possible in vSphere the performance gains of RDMs are negligible in most cases today.
- we disable all power-saving settings on the ESXi hosts and also in the Windows guests
- each ESXi host has either local storage or a dedicated LUN to use for guest swap files (we do not store them with the VMs). This does increase vMotion times a bit, but it is well worth it from a management standpoint.
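As a quick sanity check from inside a Linux guest, lspci will show whether the paravirtual devices are actually in use:

lspci | grep -i vmware
# a guest using the paravirtual devices will list entries like:
#   VMware PVSCSI SCSI Controller
#   VMware VMXNET3 Ethernet Controller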