HPE Storage Users Group

A Storage Administrator Community




 Post subject: 3par write cache performance.
PostPosted: Mon Jun 21, 2021 8:55 am 

Joined: Thu Jun 10, 2021 5:52 am
Posts: 5
Hi.
I'm not really a 3PAR administrator, but I'm trying to investigate the performance of our 3PAR 8200.
From a VM that has its files on the 3PAR, I get 200k IOPS with diskspd at default settings and 1k IOPS with -h (the parameter that disables caching).
The 3PAR administrator says this is normal, but from what I've understood from the HPE site, it looks as if write-through mode is enabled on our 3PAR.
I have some noobish questions:
1. Is it normal that shownode shows both power supplies connected to node 0?
2. Is it normal that the pdsld volumes have WThru=Y?
3. Is there a way to check that the cache is actually in write-back mode, not write-through?

The 3PAR UI shows everything green.

Code:
cli% shownode -verbose
********Node 0********

----------------------General----------------------

Node ID             :   0
Name                :   CZ3708FKFA-0
Manufacturer        :   FXN
Assembly  Part      :   K2Q35-63001
Assembly Serial     :   PFLKQA8LM8****
Master              :   Yes
Online              :   Yes
In Cluster          :   Yes
HPE 3PAR OS Version :   3.3.1
BIOS Info           :   5.5.7
Control Memory(MB)  :   16384
Data Memory(MB)     :   16384
LED                 :   GreenBlnk
ServiceLED          :   Off
Cache Available(%)  :   100
State               :   OK
State Description   :   OK
Node Up Since       :   2020-10-28 17:56:11 AST

----------------------PCI Cards----------------------

---------PCI Slot 0---------
Node ID       :   0
Slot          :   0
Type          :   FC
Manufacturer  :   EMULEX
Model         :   LPE16002
Serial Number :   Onboard
Revision      :   30
Firmware      :   11.4.415.0

---------PCI Slot 1----------
Node ID       :   0
Slot          :   1
Type          :   SAS
Manufacturer  :   LSI
Model         :   9300-2P
Serial Number :   Onboard
Revision      :   02
Firmware      :   10.10.03.00

-------PCI Slot 3--------
Node ID       :   0
Slot          :   3
Type          :   Eth
Manufacturer  :   Intel
Model         :   e1000e
Serial Number :   Onboard
Revision      :   n/a
Firmware      :   3.2.6-k

-------------------------CPUs------------------------

......
-------------------Physical Memory-------------------

......

--------------------Power Supplies-------------------

--------------Power Supply 0--------------
Node ID                 :   0
Power Supply ID         :   0
Manufacturer            :   XYRATEX
Assembly Part           :   726237-001
Assembly Serial Number  :   5DNSFA3438****
State                   :   OK
Fan State               :   OK
Fan Speed               :   Low
AC State                :   OK
DC State                :   OK
Battery State           :   OK
Battery Detail State    :   normal
Battery Charge State    :   FullyCharged
Battery Charge Level(%) :   100
Max Battery Life(mins)  :   16

--------------Power Supply 1--------------
Node ID                 :   0
Power Supply ID         :   1
Manufacturer            :   XYRATEX
Assembly Part           :   726237-001
Assembly Serial Number  :   5DNSFA3438****
State                   :   OK
Fan State               :   OK
Fan Speed               :   Low
AC State                :   OK
DC State                :   OK
Battery State           :   OK
Battery Detail State    :   normal
Battery Charge State    :   FullyCharged
Battery Charge Level(%) :   100
Max Battery Life(mins)  :   16

********Node 1********

----------------------General----------------------

Node ID             :   1
Name                :   CZ3708FKFA-1
Manufacturer        :   FXN
Assembly  Part      :   K2Q35-63001
Assembly Serial     :   PFLKQA8LM8***
Master              :   No
Online              :   Yes
In Cluster          :   Yes
HPE 3PAR OS Version :   3.3.1
BIOS Info           :   5.5.7
Control Memory(MB)  :   16384
Data Memory(MB)     :   16384
LED                 :   GreenBlnk
ServiceLED          :   Off
Cache Available(%)  :   100
State               :   OK
State Description   :   OK
Node Up Since       :   2020-10-28 17:58:23 AST

----------------------PCI Cards----------------------

---------PCI Slot 0---------
Node ID       :   1
Slot          :   0
Type          :   FC
Manufacturer  :   EMULEX
Model         :   LPE16002
Serial Number :   Onboard
Revision      :   30
Firmware      :   11.4.415.0

---------PCI Slot 1----------
Node ID       :   1
Slot          :   1
Type          :   SAS
Manufacturer  :   LSI
Model         :   9300-2P
Serial Number :   Onboard
Revision      :   02
Firmware      :   10.10.03.00

-------PCI Slot 3--------
Node ID       :   1
Slot          :   3
Type          :   Eth
Manufacturer  :   Intel
Model         :   e1000e
Serial Number :   Onboard
Revision      :   n/a
Firmware      :   3.2.6-k

-------------------------CPUs------------------------

....

-------------------Physical Memory-------------------

....

----------------------Internal Drive----------------------

-------------Drive 0--------------
Node ID       :   1
Drive ID      :   0
WWN           :   5001B444A94887AF
Manufacturer  :   SanDisk
Model         :   DX300128A5xnEMLC
Serial Number :   172011400436
Firmware      :   X2200400
Size(MB)      :   122104
Type          :   SATA
SedState      :   capable

--------------------Power Supplies-------------------

---------------------Uptime----------------------
---------------------Node 0----------------------
Node up since         :   2020-10-28 17:56:11 AST

---------------------Node 1----------------------
Node up since         :   2020-10-28 17:58:23 AST

Code:
 Id Name          RAID -Detailed_State- Own   SizeMB   UsedMB Use  Lgct LgId WThru MapV
  9 .srdata.usr.0    5 normal           0/1    49152    46080 V       0  ---     N    Y
 10 .srdata.usr.1    5 normal           1/0    49152    46080 V       0  ---     N    Y
  0 admin.usr.0      1 normal           0/1     4096     4096 V       0  ---     N    Y
  1 admin.usr.1      1 normal           0/1     1024     1024 V       0  ---     N    Y
  2 admin.usr.2      1 normal           1/0     4096     4096 V       0  ---     N    Y
  3 admin.usr.3      1 normal           1/0     1024     1024 V       0  ---     N    Y
  4 log0.0           1 normal           0/-    20480        0 log     0  ---     Y    N
  5 log1.0           1 normal           1/-    20480        0 log     0  ---     Y    N
  6 pdsld0.0         1 normal           1/0     1024        0 P,F     0  ---     Y    N
  7 pdsld0.1         6 normal           1/0    16256        0 P       0  ---     Y    N
  8 pdsld0.2         6 normal           1/0    20352        0 P       0  ---     Y    N
......


 Post subject: Re: 3par write cache performance.
PostPosted: Mon Jun 21, 2021 3:04 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
1. Yes. On the 8200 you have a shared power infrastructure in the node cage, so both PS0 and PS1 are connected to both node0 and node1. "shownode -i" provides a better view, showing that they are connected to both nodes.
2. Nope (except for a few internal volumes, which you can see at the bottom of your list).
3. checkhealth should throw an error if the entire system is in write-through mode. Another "dirty" way is to check "statcmp" in the CLI, as it shows read and write cache hits.
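
For reference, a minimal set of CLI checks along those lines (just a sketch; exact options and output can vary slightly with the HPE 3PAR OS version):

Code:
# Inventory views that show each power supply mapped to both nodes
cli% shownode -i
cli% shownode -ps

# Overall health check; a system-wide write-through condition should be flagged here
cli% checkhealth -detail

# Cache Memory Page statistics, one 10-second sample; non-zero write Hit% means
# writes are being absorbed by cache (write-back) rather than going straight to disk
cli% statcmp -iter 1 -d 10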



 Post subject: Re: 3par write cache performance.
PostPosted: Mon Jun 21, 2021 3:38 pm 

Joined: Thu Jun 10, 2021 5:52 am
Posts: 5
MammaGutt wrote:
1. Yes. On the 8200 you have a shared power infrastructure in the node cage, so both PS0 and PS1 are connected to both node0 and node1. "shownode -i" provides a better view, showing that they are connected to both nodes.
2. Nope (except for a few internal volumes, which you can see at the bottom of your list).
3. checkhealth should throw an error if the entire system is in write-through mode. Another "dirty" way is to check "statcmp" in the CLI, as it shows read and write cache hits.

Hi, and thanks.

It seems we do have write cache enabled:
Code:
                                      CMP  FMP Total              CMP  FMP Total
VVid VVname           Type  Accesses Hit% Hit%  Hit%    Accesses Hit% Hit%  Hit%
   0 admin            Read         0    0    0     0           0    0    0     0
   0 admin            Write        0    0    0     0          54   61    0    61
   1 .srdata          Read         0    0    0     0           0    0    0     0
   1 .srdata          Write        0    0    0     0           8   12    0    12
   2 .shared.SSD_r6_0 Read      9305    1    0     1       49893    1    0     1
   2 .shared.SSD_r6_0 Write       84  100    0   100        2218  100    0   100
   3 DS3Par03-01-C1   Read     12166   82    0    82       27053   89    0    89
   3 DS3Par03-01-C1   Write     1119   70    0    70       16047   38    0    38
   4 DS3Par03-11-C1   Read        22    0    0     0         362   11    0    11
   4 DS3Par03-11-C1   Write     1859   23    0    23       13446   22    0    22
   5 DS3Par03-12-C1   Read      8615    1    0     1       44767    1    0     1
   5 DS3Par03-12-C1   Write     1817   16    0    16       17818   23    0    23
   6 DS3Par03-31-C1   Read         3    0    0     0          59   10    0    10
   6 DS3Par03-31-C1   Write     1889   21    0    21       25462   11    0    11
   7 DS3Par03-02-C1   Read       782    0    0     0       13055   14    0    14
   7 DS3Par03-02-C1   Write      951   77    0    77       15882   35    0    35
  14 DS3Par03-13-C1   Read        13    0    0     0         386    7    0     7
  14 DS3Par03-13-C1   Write     1056   24    0    24        6761   16    0    16
  15 DS3Par03-14-C1   Read      1436    8    0     8        9948   14    0    14
  15 DS3Par03-14-C1   Write     1564   15    0    15        9913   26    0    26


Is it even a problem that I get 1k IOPS when writing with the "write directly to disk" directive instead of the usual cached 220k? I may just be overestimating what a write cache should deliver, but that's how SQL databases write to disk, and I'd expect the cache on the 3PAR to give me more than 1k.

It seems we are using only CMP, and maybe we just don't have enough memory for that. The 3PAR UI shows 94% cache utilization.
Is it even possible to increase the DRAM cache on the 8200? If so, is there any documentation on how to calculate the proper size?


 Post subject: Re: 3par write cache performance.
PostPosted: Mon Jun 21, 2021 5:06 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
Well, cache is fast, disk is not. Writing to cache involves no RAID overhead (only a mirror to the partner node), no data reduction overhead (if dedupe and/or compression is enabled) and no disk overhead (latency or throughput). This is why some performance/IOPS numbers for enterprise storage arrays are quoted with an asterisk saying "numbers based on 4k blocks, 100% cache hits", giving pretty insane numbers for weak systems.

You can't increase the cache on a 3PAR, BUT the number of write cache pages is directly linked to the number and type (SSD/FC/NL) of drives installed in the array. So a system with 50 SSDs will potentially have 5x the write cache of a system with only 10 SSDs. The flip side is that the total amount of data cache is fixed, so more pages used for write cache = less for read cache... but again, the 3PAR is smart, so it will use it wisely.

You're not saying a lot about the system, the load and "the issue". Are you doing any data reduction on the volume in question? That could impact performance. Also, what is the IO size? 1k IOPS with 1MB IOs and 100% write would be very good on a deco (dedup + compression) volume. 1k IOPS with 4kB IOs and 100% read is shit. SQL (databases in general) doesn't play well with dedupe, and the 8200 isn't all that powerful when doing data reduction. There is a reason the 8440/8450 exists; it's not there just for show.
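
To follow up on the data-reduction and IO-size questions, a few CLI commands that can show this (a sketch, reusing one of the volume names from the statcmp output above; options may differ slightly by OS version):

Code:
# Detailed volume view; the provisioning/compression fields show whether data reduction is in play
cli% showvv -d DS3Par03-01-C1

# Space and compaction/dedupe ratios for the same volume
cli% showvv -space DS3Par03-01-C1

# Live IO size and read/write split while the benchmark is running
cli% statvv -rw -iter 5 -d 10 DS3Par03-01-C1
cli% statvlun -rw -ni -iter 5 -d 10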



 Post subject: Re: 3par write cache performance.
PostPosted: Mon Jun 21, 2021 5:16 pm 

Joined: Thu Jun 10, 2021 5:52 am
Posts: 5
Code:
DiskSpd.exe -c1G -d60 -r -w100 -t1 -o1 -b4K -L -h .\testfile.dat

The 1k IOPS is 100% write with a 4k block size.
It's ~500 IOPS with 100% write and a 64k block size.

I'm not sure about data reduction; I'll try to ask our 3PAR admin tomorrow.
We have 2x 8200 connected to VMware vCenter via VPLEX. Maybe the VPLEX slows it down too.

Is it possible to increase the total cache by enabling Adaptive Flash Cache? As far as I've read in the manuals, it uses flash storage to cache reads, which should leave more CMP cache for writes. The stats show 0% usage, so we probably don't have it enabled.

I can't quite understand why the 3PAR can't just report that writes are done as soon as the data hits the cache (I thought that's how a write-back cache should work).

Code:
DiskSpd.exe -c1G -d60 -r -w100 -t1 -o1 -b4K -L -h .\testfile.dat = 1k IOPS
DiskSpd.exe -c1G -d60 -r -w100 -t1 -o1 -b4K -L .\testfile.dat    = 200k IOPS

Both are writes, and the second line shows that the write cache is pretty good.


 Post subject: Re: 3par write cache performance.
PostPosted: Mon Jun 21, 2021 11:45 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
With the VPLEX in the path, your host will not get the ack before both 3PARs have acked. The 3PAR will ack as soon as the write hits cache and is safely stored in cache on two nodes (assuming there are available cache pages). Check for DelAck in statcmp; it increases every time the system doesn't have available cache pages.

Flash cache only works for FC/NL... what would the benefit be of using SSD to cache SSD? :) So whether it could help depends on your configuration.
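
As a rough sanity check on the two diskspd numbers (simple queue-depth arithmetic, not measured on this system): with -o1 only one IO is in flight, so IOPS is just the inverse of the per-IO latency.

Code:
IOPS at QD=1  ~  1 / latency
  1,000 IOPS  ->  ~1 ms per write  (full round trip host -> VPLEX -> both 3PARs -> ack)
200,000 IOPS  ->  ~5 us per write  (only plausible when the write is acknowledged from a local OS/file cache)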



 Post subject: Re: 3par write cache performance.
PostPosted: Tue Jun 22, 2021 1:29 am 

Joined: Thu Jun 10, 2021 5:52 am
Posts: 5
OK. Thanks, I now understand much better how this all works.
Is it possible to configure the caching policy? For example, to disable the read cache entirely. I was asking about SSD cache in case it is possible to cache reads on SSD and give all of the CMP to write caching. Again, I'm not familiar with the 3PAR at all, so sorry for such weird questions and for my English, which I'm still trying to improve. :)

Here are our DelAck stats. I suppose that's my main issue:
Code:
        Page Statistics
     -CfcDirty- --CfcMax--- ---DelAck---
Node FC NL  SSD FC NL   SSD FC NL    SSD
   0  0  0 6669  0  0 96000  0  0 150663
   1  0  0 6470  0  0 96000  0  0 174684
Press the enter key to stop...


 Post subject: Re: 3par write cache performance.
PostPosted: Tue Jun 22, 2021 2:47 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
As mentioned before, the write cache size is set based on the number and type of drives. Based on your statcmp output you have only SSDs in the system and 96000 x 16kB of available write cache pages per node.
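
In other words (quick arithmetic based on the statcmp sample above):

Code:
96,000 CfcMax pages/node x 16 KiB/page  ~ 1.5 GiB of write cache pages per node
 ~6,500 CfcDirty pages   x 16 KiB/page  ~ 100 MiB of dirty (not yet destaged) data at sample time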

Using SSD as read cache for SSD is pointless. If the cache media is no faster than the permanent media, you'll never get any performance improvement; you'll only waste space.

The caching policy is fixed and cannot be adjusted.

As for DelAck, that counter only resets on a node reboot. 150k over 9 months isn't necessarily bad; the question is _when_ it increases. Backups are often the biggest load on storage systems... if the DelAcks happen during backup windows in the middle of the night and users only access the system during the daytime, it doesn't matter. If the DelAcks occur during peak business hours, it does matter, and you need to understand what is causing it and how to prevent it.
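
To see when the counter grows rather than just its running total, the System Reporter CLI can help (a sketch; the exact columns and time-range syntax may differ by OS version):

Code:
# Hourly cache-memory statistics for roughly the last 24 hours (-86400 seconds relative to now)
cli% srstatcmp -hourly -btsecs -86400

# Live view while reproducing the workload; watch whether DelAck keeps climbing
cli% statcmp -iter 30 -d 10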



 Post subject: Re: 3par write cache performance.
PostPosted: Tue Jun 22, 2021 4:28 am 

Joined: Thu Jun 10, 2021 5:52 am
Posts: 5
Thank you.
I will ask our 3PAR engineer to create a new datastore without deduplication and compression to check whether there is an increase in IOPS. For some reason we have dedupe enabled on all volumes, and from what I've understood, I suppose this could be our main issue, since our 3PAR is trying to deduplicate everything.

