HPE Storage Users Group
https://3parug.org/

How to interpret PD performance data
https://3parug.org/viewtopic.php?f=18&t=903
Page 1 of 2

Author:  skumflum [ Mon Sep 08, 2014 10:34 am ]
Post subject:  How to interpret PD performance data

I have a feeling that Adaptive Optimization on our 7400 is not working properly but I’m unsure how to interpret PD performance.

I would expect that SSDs should be “assigned” more IO than FC disks, and FC disks more than NL disks.

NL is averaging around 180 IOPS and sometimes goes up to 300 IOPS
FC is averaging around 100 IOPS and maximum is about 200 IOPS
SSD is averaging around 200 IOPS and maximum is about 300 IOPS

The NL performance looks completely wrong IMO. Am I right? What should I look for?
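For reference, per-tier PD numbers like the averages above can be gathered from the CLI along these lines; the flags are from the 3PAR CLI, but check "help statpd" on your OS version, as options vary:

    statpd -p -devtype NL -iter 5 -d 10
    statpd -p -devtype FC -iter 5 -d 10
    statpd -p -devtype SSD -iter 5 -d 10

"-iter 5 -d 10" averages over five 10-second samples, which smooths out bursts when comparing tiers.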

Author:  skumflum [ Mon Sep 08, 2014 12:40 pm ]
Post subject:  Re: How to interpret PD performance data

Just to clarify. We do the AO measurements between 08.00-16.00 (office hours) and the policy is set to balanced.
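As a sketch, the AO configuration (mode, tier CPGs) and the measurement schedule can be double-checked from the CLI; exact output columns vary by 3PAR OS version:

    showaocfg
    showsched

"showaocfg" should confirm the Balanced mode and which CPG is bound to each tier, and "showsched" lists the scheduled tasks, including the AO runs covering the 08:00-16:00 window.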

The total capacity of the different Tiers:

Image

IO of NL PD:

Image

IO of FC PD (64 disks):

Image

IO of SSD PD:

Image

I know we ought to have more SSD capacity, but why would AO “allow” the NL disks to be under pressure like that when FC is not, and the FC tier is only allocated 27%?

I have a feeling that I’m missing some conceptual knowledge. :oops:

Author:  RitonLaBevue [ Mon Sep 08, 2014 1:10 pm ]
Post subject:  Re: How to interpret PD performance data

Could you post the output of "showcpg"?

Author:  skumflum [ Mon Sep 08, 2014 1:17 pm ]
Post subject:  Re: How to interpret PD performance data

RitonLaBevue wrote:
Could you post the output of "showcpg"?


Image

Author:  Darking [ Mon Sep 08, 2014 4:00 pm ]
Post subject:  Re: How to interpret PD performance data

I'm sure more SSD space would help with the issue, since it seems you have an undetermined amount of data in a slower tier that could benefit from the SSDs, but not enough capacity in that tier to hold it.

To identify the volumes that might need the most help you should be able to run:

statvlun -vvsum -ni -rw

Note that each volume may show up several times if you have more than one path.
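A sketch of the same command with sampling options added; verify the flags with "help statvlun" on your CLI version:

    statvlun -vvsum -ni -rw -iter 5 -d 10

Here "-ni" hides idle VLUNs, "-rw" splits reads and writes, and "-vvsum" reports totals per virtual volume; "-iter 5 -d 10" averages over five 10-second samples so a single busy interval doesn't skew the picture.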

Author:  skumflum [ Tue Sep 09, 2014 12:49 am ]
Post subject:  Re: How to interpret PD performance data

Image

But according to my logic it should move more data to the FC disks, as they are not under pressure. Am I right?

Author:  Darking [ Tue Sep 09, 2014 2:36 am ]
Post subject:  Re: How to interpret PD performance data

It's quite a bit of IO, especially on the _8 volume. I would maybe consider not having a three-tier strategy for that volume if it's a permanent thing.

It might just be a question of moving the volume to a more aggressive AO policy, so more of that specific data is moved up. Or perhaps even just create a two-tier AO policy for that volume.

Note: I'm not a super experienced 3PAR admin yet, but I do want to help :)

Author:  Darking [ Tue Sep 09, 2014 7:52 am ]
Post subject:  Re: How to interpret PD performance data

Ehm, I did notice one thing: it appears that your nearline statistics actually show the SSDs, or vice versa?

The disk IDs are the same, at least in the screenshots.

Author:  skumflum [ Tue Sep 09, 2014 2:18 pm ]
Post subject:  Re: How to interpret PD performance data

Darking wrote:
Ehm, I did notice one thing: it appears that your nearline statistics actually show the SSDs, or vice versa?

The disk IDs are the same, at least in the screenshots.


Hmm... wrong pic. Sorry for the confusion :oops:

Image

Author:  Darking [ Wed Sep 10, 2014 7:13 am ]
Post subject:  Re: How to interpret PD performance data

I feel sorry for you!

Those are some absolutely horrible IOPS on those poor NL disks.

My immediate thought is the following:

I would try setting a Performance AO policy instead of a Balanced policy, to see if that forces more of the active blocks up to the middle and high tiers.

If that does not work, I would purchase some extra SSDs; you're a bit on the low end with regard to recommended practice for AO.

You only have around 1.28% of your total capacity in your SSD tier; I believe the recommendation is 5% for tier 0, 55% for tier 1, and the rest for tier 2.

I had an issue like yours where a lot of my IO was placed on the slowest tier, and I ended up putting a limit on the amount of data located on my NL tier. That forces the array to move data to the middle tier, where you have the most spindles anyway.
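As a sketch of the two ideas above (switching the AO config to Performance mode, and capping how much the NL tier's CPG can grow) in CLI form; "AO_prod" and "CPG_NL" are placeholder names, "2t" is an example limit, and the exact flags should be checked with "help setaocfg" and "help setcpg" on your OS version:

    setaocfg -mode Performance AO_prod
    setcpg -sdgl 2t CPG_NL

"-sdgl" sets a growth limit on the CPG; once the NL CPG hits it, AO can no longer place additional data there, which in practice pushes region moves toward the FC tier.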
