3PAR 7200 poor performance
Re: 3PAR 7200 poor performance
"Best IOMETER results for 12 15k FC disks on my 3Par are 350MBps."
"Best IOMETER results for 24 7k NL disks on my 3Par are 170MBps."
"Even in RAID0."
Your results are meaningless without knowing queue depth, I/O size, read/write ratio, random/sequential mix, alignment etc. The fact that you are seeing similar results on RAID 0 suggests you aren't pushing the array. If you can post those details, it may be possible for someone to repeat the test on a similar setup, maybe with a CPG filter to match your drive layout.
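If you want the array-side view to go with it, the stat commands split reads and writes nicely. Something along these lines while IOMeter runs (flags per the CLI help; 5 second samples assumed):
statvlun -ni -rw -d 5
statpd -rw -d 5
That will show whether the front end or the spindles are the bottleneck.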
Check the Operating System Implementation Guides here: http://goo.gl/mzM4Y7. They list all of the steps and settings required to connect each O/S type and to ensure that Host Persona, MPIO, load balancing etc. are correct.
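For ESXi, for example, the usual Round Robin claim rule looks something like the below (I recall iops=1 being the suggested tuning rather than mandatory, so double-check against the guide):
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP 3PAR Custom Rule"
New LUNs then claim VMW_PSP_RR automatically.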
If you are using 8Gb switches, ensure the fill word is set correctly to 3 on all ports connecting to the 3PAR. This is covered in the SAN Design & Reference Guide http://goo.gl/Zpkkxg, page 103, rule 3.
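On Brocade FOS that's along the lines of (port number illustrative; syntax can vary by FOS release):
portcfgfillword 15 3
portcfgshow 15
Mode 3 attempts ARB/ARB and falls back to IDLE/ARB, which is what the 8Gb 3PAR ports want.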
The array is designed to handle multiple concurrent workloads (multi-tenancy) and to prevent any one workload from swamping the others. So if you have a single-stream sequential workload and are expecting stellar performance, that's just not what it's optimized for. Ramp up the queue depth or get more workers running in IOMeter and you'll see the performance increase. This is how arrays work in the real world, as opposed to in an artificial benchmark.
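As a rough sanity check: sustained IOPS can't exceed outstanding I/Os divided by response time. A single worker at queue depth 1 against ~6ms disks tops out around 170 IOPS, i.e. roughly 40MBps at 256kB, no matter how many spindles sit behind it. At queue depth 64 the same sum allows over 10,000 IOPS.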
Finally, if you are still having problems, you could try the below to eliminate comms issues, but you'll need a repeatable test you can use to validate the results.
If you think you are having retry issues, try switching all of the 3PAR front-end ports to 4Gb using "controlport", and also lock the switch ports to 4Gb. It sounds counterintuitive, but if you are having undetected comms retries due to cable or transceiver issues, they should reduce and performance will improve. If that's the case, log a call with support to investigate/replace the failing component.
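From memory the speed change is something like the below, but verify the exact syntax against the CLI help for your InForm OS and FOS releases (port IDs illustrative):
controlport rate 4Gbps 0:1:1
portcfgspeed 15 4
Do it on every 3PAR-facing port, on both ends of each link.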
"Best IOMETER results for 24 7k NL disks on my 3Par are 170MBps."
"Even in RAID0."
Your results are meaningless without knowing queue depth, I/O size, read / write ratio, random / sequential mix, alignment etc. The fact you are seeing similar results on Raid 0 suggests you aren't pushing the array. If you can post those details then it may be possible for someone to repeat the test on a similar setup maybe with a CPG filter to match your drive layout.
Check the Operating System Implementation Guides here http://goo.gl/mzM4Y7 they list all of the required steps and settings required to connect each O/S type to ensure Host Persona, MPIO, load balancing etc are correct.
If you are using 8Gb switches ensure the fill word is set correctly to 3 on all ports connecting to the 3PAR, this info can be found in the SAN Design & Reference Guide http://goo.gl/Zpkkxg page 103 rule 3.
The array is designed to handle multiple concurrent workloads (multi-tenancy) and to avoid any one workload swamping the others. So if you have a single stream sequential workload and are expecting stellar performance, that's just not what it's optimized for. Ramp up the queue depth or get more workers running in IO Meter and you'll see the performance increase. This is how arrays work in the real world rather than on a artificial benchmark.
Finally if you are still having problems you could try the below to eliminate comms issues, but you'll need to have a repeatable test you can use to validate the results.
If you think you are having retry issues you should try switching all of the 3PAR front end ports to 4Gb using "controlport" also lock the switch ports to 4Gb. It sounds counter intuitive but if you are having undetected comms retries due to cable or transceiver issues, they should reduce and performance will improve. If that's the case log a call with support to investigate / replace the failing component.
Re: 3PAR 7200 poor performance
Also, how are your datastores configured? What are the host personas set to?
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: 3PAR 7200 poor performance
Hi,
I followed 3Par and VMware best practices and triple checked.
Host persona is VMware. Datastores are VMFS 5.6 with round robin multipathing.
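For anyone checking the same thing: esxcli storage nmp device list on each host shows the Path Selection Policy per device, and the 3PARdata VV devices here all show VMW_PSP_RR.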
I'm testing only with VMs because we haven't had a physical server for years. E.g. we are currently running over 100 VMs (RAD) on an old MSA2000 G1 with fewer disks, and getting better results than when hosting these VMs on the 3Par.
Also tried setting the 3Par host ports to 4Gbps. No errors on either side at either 4Gbps or 8Gbps.
An ATTO test showed throughput (from array cache) of 1680MBps, which is over 13Gbps.
So the problem is not in the hosts or the FC path.
The same hosts perform twice as fast on the same FC infrastructure with a different storage array (older, cheaper, with fewer disks).
That's why I'm ready to shoot at the 3Par.
Also, when I added 12 new NL disks to the 12 existing NL disks in the 3Par, performance really did double (from extremely poor to poor).
I could accept that the array needs at least 48 FC disks to perform somewhat satisfactorily.
But please help me solve the mystery of 3Par read IO. I observed in a Windows VM guest that writes are 2-3 times faster than reads.
During writes, statpd shows maximum IO size and moderate IOPS.
During reads, statpd shows small IO size and maximum IOPS (almost 1000 for a single 15k FC disk).
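(These come from watching statpd while the test runs, along the lines of statpd -rw -d 5 -iter 12, so reads and writes are counted separately.)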
Why are reads so much slower, and why are they crippling the whole array's performance?
Thank you
Richard
Re: 3PAR 7200 poor performance
risko wrote: With the statpd command on the 3PAR we noticed that reads are a total disaster. IOPS go to the maximum (600 for an FC disk) but the IO size is only about 40kB, giving a resulting throughput per disk of 25MB/s. If the IO size were something like 128kB, the throughput would be OK.
During writes, IOPS are moderate (150 for a 15k FC disk) and the IO size sometimes nears 128kB, but the throughput is similar.
Any ideas how to get acceptable performance from the HP 3PAR 7200?
Any idea how to force the maximum IO size per disk during reads/writes?
Have you tinkered with enabling/disabling VAAI and comparing?
-- to show all your storage and which devices currently have VAAI enabled or disabled:
esxcfg-scsidevs -l | egrep "Display Name|VAAI Status:"
Based on what you posted, VAAI offloading of the VMDK copies should be active and enabled. You should see low, near-zero bandwidth during the copy when monitored on the SAN port; the copy should be internal to the array.
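You can also ask the host what the array advertises per device (output fields from memory):
esxcli storage core device vaai status get
Look for ATS, Clone, Zero and Delete showing as supported; Clone is the one that matters for the XCOPY offload.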
VMFS DataMover does not leverage hardware offloads, and instead uses software data movement, in these cases:
• The source and destination VMFS volumes have different block sizes.
• The source file type is raw device mapping (RDM) and the destination file type is non-RDM (regular file).
• The source VMDK type is eagerzeroedthick and the destination VMDK type is thin.
• The source or destination VMDK is in any sort of sparse or hosted format.
• The source VM has a snapshot.
• The logical address and transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically).
• The VMFS has multiple LUNs/extents and they are all on different arrays.
• Hardware cloning between different arrays (even if within the same VMFS volume) does not work.
Reference:
http://h20195.www2.hp.com/V2/GetDocumen ... A4-2864ENW
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
Re: 3PAR 7200 poor performance
Hi risko,
I've seen a similar situation, and it turned out to be faulty SFPs generating a massive number of CRC errors. If this happens to be the issue, doing as Cleanur suggested and dropping your port speed (this can be done in the MC) is a workaround.
Re: 3PAR 7200 poor performance
Hi Cleanur
I had the same situation on two 3Par 7400 systems, both with 144 900GB 10k disks.
Both were installed one year ago, both with the same problem.
The first CPGs and LUNs were created with 3Par OS release 3.1.2 in 2013.
Those LUNs had, and still have, very poor performance.
Currently the InServs are on OS release 3.1.3.
LUNs which are now created in a new CPG under OS release 3.1.3 run excellently.
The CPGs and LUNs, however, were all created identically:
createcpg -t r6 -ha cage -ssz 12 -ss 128 -sdgs 120g -ch first -p -devtype FC -rpm 10 FC10k_Raid6_10D2P_VMFS
createvv -cnt 1 -spt 512 -hpc 8 -i 11 -tpvv FC10k_Raid6_10D2P_VMFS VMFS01 16T <- created in 2013 with 3.1.2 -> poor
createvv -cnt 1 -spt 512 -hpc 8 -i 12 -tpvv FC10k_Raid6_10D2P_VMFS VMFS02 16T <- created in 2013 with 3.1.2MU2 -> good
The solution was:
update to InForm 3.1.2MU2 (it is now 3.1.3)
remove all LUNs and CPGs
create everything new
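In CLI terms the rebuild was essentially the below (names as in the examples above; removecpg only works once the CPG is empty):
removevlun VMFS01 <lun> <host>
removevv VMFS01
removecpg FC10k_Raid6_10D2P_VMFS
and then the createcpg/createvv commands from above again.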
To see what I mean:
============================ Poor Performance LUN ====================================
--------------------- Random read, 8k Blocksize, Queue Size 64 -------------------
09:24:24: ios = 6707.6 mbytes = 54.9 respTime = 9.519
09:24:29: ios = 5505.0 mbytes = 45.1 respTime = 11.641
09:24:34: ios = 5299.8 mbytes = 43.4 respTime = 12.082
09:24:39: ios = 5415.4 mbytes = 44.4 respTime = 11.808
09:24:44: ios = 5462.2 mbytes = 44.7 respTime = 11.730
09:24:49: ios = 5491.4 mbytes = 45.0 respTime = 11.631
09:24:54: ios = 5586.6 mbytes = 45.8 respTime = 11.482
09:24:59: ios = 5423.6 mbytes = 44.4 respTime = 11.774
09:25:04: ios = 5336.6 mbytes = 43.7 respTime = 12.012
09:25:09: ios = 4392.8 mbytes = 36.0 respTime = 14.535
09:25:14: ios = 5389.0 mbytes = 44.1 respTime = 11.900
09:25:19: ios = 5452.2 mbytes = 44.7 respTime = 11.711
09:25:24: ios = 5047.0 mbytes = 41.3 respTime = 12.701
09:25:29: ios = 5252.2 mbytes = 43.0 respTime = 12.171
09:25:34: ios = 5285.8 mbytes = 43.3 respTime = 12.122
09:25:39: ios = 5377.8 mbytes = 44.1 respTime = 11.875
09:25:44: ios = 5055.6 mbytes = 41.4 respTime = 12.680
09:25:49: ios = 5342.6 mbytes = 43.8 respTime = 11.958
09:25:54: ios = 5336.4 mbytes = 43.7 respTime = 12.001
09:25:59: ios = 5296.6 mbytes = 43.4 respTime = 12.076
09:26:04: ios = 4596.4 mbytes = 37.7 respTime = 13.935
09:26:09: ios = 5266.2 mbytes = 43.1 respTime = 12.133
09:26:14: ios = 5360.6 mbytes = 43.9 respTime = 11.954
09:26:19: ios = 5018.6 mbytes = 41.1 respTime = 12.738
--------------------- Sequential read, 256k Blocksize, Queue Size 64 -------------
09:26:56: ios = 2032.0 mbytes = 532.7 respTime = 31.539
09:27:01: ios = 1801.8 mbytes = 472.3 respTime = 35.492
09:27:06: ios = 1974.0 mbytes = 517.5 respTime = 32.521
09:27:11: ios = 2165.8 mbytes = 567.8 respTime = 29.580
09:27:16: ios = 1867.6 mbytes = 489.6 respTime = 34.215
09:27:21: ios = 1807.0 mbytes = 473.7 respTime = 35.431
09:27:26: ios = 1703.6 mbytes = 446.6 respTime = 37.499
09:27:31: ios = 1632.4 mbytes = 427.9 respTime = 39.250
09:27:36: ios = 1743.0 mbytes = 456.9 respTime = 36.636
09:27:41: ios = 1931.8 mbytes = 506.4 respTime = 33.226
09:27:46: ios = 1755.4 mbytes = 460.2 respTime = 36.283
09:27:51: ios = 1835.8 mbytes = 481.2 respTime = 34.939
09:27:56: ios = 1782.6 mbytes = 467.3 respTime = 35.940
09:28:01: ios = 1816.2 mbytes = 476.1 respTime = 35.233
09:28:06: ios = 1773.0 mbytes = 464.8 respTime = 36.219
09:28:11: ios = 1921.0 mbytes = 503.6 respTime = 33.187
09:28:16: ios = 1816.2 mbytes = 476.1 respTime = 35.312
09:28:21: ios = 1888.6 mbytes = 495.1 respTime = 33.877
09:28:26: ios = 1920.2 mbytes = 503.4 respTime = 33.292
09:28:31: ios = 1918.0 mbytes = 502.8 respTime = 33.293
09:28:36: ios = 1849.4 mbytes = 484.8 respTime = 34.679
09:28:41: ios = 1910.8 mbytes = 500.9 respTime = 33.454
09:28:46: ios = 1623.6 mbytes = 425.6 respTime = 39.420
============================ Good Performance LUN ====================================
--------------------- Random read, 8k Blocksize, Queue Size 64 -------------------
16:06:25: ios = 9665.8 mbytes = 79.2 respTime = 6.631
16:06:30: ios = 9563.2 mbytes = 78.3 respTime = 6.702
16:06:35: ios = 9667.2 mbytes = 79.2 respTime = 6.610
16:06:40: ios = 9616.0 mbytes = 78.8 respTime = 6.667
16:06:45: ios = 9744.2 mbytes = 79.8 respTime = 6.554
16:06:50: ios = 9534.6 mbytes = 78.1 respTime = 6.722
16:06:55: ios = 9457.2 mbytes = 77.5 respTime = 6.749
16:07:00: ios = 9479.8 mbytes = 77.7 respTime = 6.760
16:07:05: ios = 9359.8 mbytes = 76.7 respTime = 6.829
16:07:10: ios = 9805.4 mbytes = 80.3 respTime = 6.537
16:07:15: ios = 9734.0 mbytes = 79.7 respTime = 6.560
16:07:20: ios = 9724.8 mbytes = 79.7 respTime = 6.588
16:07:25: ios = 9722.2 mbytes = 79.6 respTime = 6.565
16:07:30: ios = 9762.6 mbytes = 80.0 respTime = 6.567
16:07:35: ios = 9499.8 mbytes = 77.8 respTime = 6.732
16:07:40: ios = 9652.4 mbytes = 79.1 respTime = 6.637
16:07:45: ios = 9149.0 mbytes = 74.9 respTime = 6.986
16:07:50: ios = 9525.6 mbytes = 78.0 respTime = 6.724
16:07:55: ios = 9790.2 mbytes = 80.2 respTime = 6.522
16:08:00: ios = 9863.0 mbytes = 80.8 respTime = 6.501
16:08:05: ios = 9797.4 mbytes = 80.3 respTime = 6.523
16:08:10: ios = 9849.4 mbytes = 80.7 respTime = 6.506
16:08:15: ios = 9791.6 mbytes = 80.2 respTime = 6.523
--------------------- Sequential read, 256k Blocksize, Queue Size 64 -------------
16:23:56: ios = 6205.0 mbytes = 1626.6 respTime = 10.293
16:24:01: ios = 5763.8 mbytes = 1510.9 respTime = 11.115
16:24:06: ios = 6206.2 mbytes = 1626.9 respTime = 10.295
16:24:11: ios = 6224.8 mbytes = 1631.8 respTime = 10.294
16:24:16: ios = 6238.4 mbytes = 1635.4 respTime = 10.239
16:24:21: ios = 6235.4 mbytes = 1634.6 respTime = 10.271
16:24:26: ios = 5966.4 mbytes = 1564.1 respTime = 10.705
16:24:31: ios = 6022.8 mbytes = 1578.8 respTime = 10.651
16:24:36: ios = 6227.0 mbytes = 1632.4 respTime = 10.260
16:24:41: ios = 6243.0 mbytes = 1636.6 respTime = 10.263
16:24:46: ios = 6199.6 mbytes = 1625.2 respTime = 10.305
16:24:51: ios = 6218.4 mbytes = 1630.1 respTime = 10.306
16:24:56: ios = 6198.6 mbytes = 1624.9 respTime = 10.306
16:25:01: ios = 6248.0 mbytes = 1637.9 respTime = 10.250
16:25:06: ios = 6197.2 mbytes = 1624.6 respTime = 10.311
16:25:11: ios = 6216.4 mbytes = 1629.6 respTime = 10.308
16:25:16: ios = 6211.4 mbytes = 1628.3 respTime = 10.290
16:25:21: ios = 6198.2 mbytes = 1624.8 respTime = 10.338
16:25:26: ios = 5674.2 mbytes = 1487.5 respTime = 11.254
16:25:31: ios = 6251.2 mbytes = 1638.7 respTime = 10.251
16:25:36: ios = 6204.8 mbytes = 1626.6 respTime = 10.301
16:25:41: ios = 6253.6 mbytes = 1639.3 respTime = 10.248
16:25:46: ios = 6180.8 mbytes = 1620.3 respTime = 10.335
Re: 3PAR 7200 poor performance
What I forgot:
The hosts are:
4x BL660c G8 (VMware)
2x DL360 G7 (VMware)
4x BL870c i4 (HP-UX)
The problem showed on all hosts, VMware with Windows guests as well as HP-UX.
I think that under 3.1.2 (without MU2) the assignment of chunklets was buggy.
After the update to 3.1.2MU2, the already-assigned chunklets stayed buggy...
...only the new ones are good.
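If you want to inspect how a suspect VV's chunklets are laid out, the mapping can be walked with showvvmap <vvname> and then showldch on each LD it lists; an even spread across the PDs is what you'd hope to see.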
Re: 3PAR 7200 poor performance
I've read that 3.1.3 gives substantial performance increases due to a changed algorithm, I think it was. I can't find the info now, but it was something like a 20-30% increase for certain workloads.
Re: 3PAR 7200 poor performance
Found it: apparently the table of contents has been moved from being on disk to being in RAM.
Re: 3PAR 7200 poor performance
Hi all
It was not only 3.1.3 that fixed our performance problem.
"Old" LUNs (created under 3.1.2 and now running under 3.1.3) still perform poorly.
The key is to move all the data somewhere else, drop the whole virtual volume, recreate it, and move the data back.
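(If you have a Dynamic Optimization licence, tunevv usr_cpg <new_cpg> <vv_name> might do the rewrite in place and save the copy out/copy back, although I haven't verified that it fixes this particular problem.)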
Urs