MammaGutt wrote:
You are also not 100% correct
If you have multiple node pairs, each node pair is a failure domain. If you have HA cage, all drives in a cage could be considered "one drive".
But RAID5 has never been more secure than withstanding one failure. It can survive more under different scenarios and configurations, but the worst case is always data loss on failure number 2.
The 3PAR RAID parity is per RAID set, but each RAID set consists of chunklets, not disks. So given enough data, all drives in the same failure domain will share some RAID set with every other drive in that failure domain.
So are you saying that with more node pairs the chunklet layout is different (I only have dual-node arrays)? If not, then with any reasonable amount of data you will have overlapping chunklets within RAID sets on any 2 drives, unless you have a significant number of very small drives, or you specify availability at the JBOD/shelf level, in which case it would be any 2 disks in any 2 JBODs/shelves.
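The "any 2 drives will overlap" claim can be sanity-checked with a quick Monte Carlo sketch. This is not 3PAR's actual layout algorithm; it just assumes each RAID set draws its chunklets from a handful of randomly chosen distinct disks, with made-up numbers (40 disks, 7+1 sets, 500 sets' worth of data), and estimates how often two specific disks end up sharing at least one set:

```python
import random

def share_a_set(total_disks=40, set_size=8, num_sets=500,
                trials=500, seed=1):
    """Estimate the probability that two specific disks (0 and 1)
    share at least one RAID set, when every RAID set is built from
    set_size chunklets on distinct randomly chosen disks.
    All parameters are illustrative assumptions, not 3PAR internals."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        layouts = [rng.sample(range(total_disks), set_size)
                   for _ in range(num_sets)]
        # Data loss pair: some set contains both disk 0 and disk 1.
        if any(0 in s and 1 in s for s in layouts):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    print(f"P(two given disks share a RAID set): {share_a_set():.2f}")
```

With these numbers the chance that a given set contains both disks is only about 3.6%, but across 500 sets the probability that *no* set does is vanishingly small, so the estimate comes out at essentially 1.0, which matches the "given enough data" argument above.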
Back when this happened, HPE support also stated that in certain circumstances RAID6 on the 3PAR can see data loss with a 2-disk failure.
This means the availability of data is reduced, because the chance of data loss when 2 disks fail is significantly higher than with standard RAID5, where a large number of drives divided into small RAID sets gives a lower probability that both failures land in the same set. So the performance increase the chunklet method provides comes at the cost of an increased chance of data loss. Presumably the hope is that the faster rebuild and reduced load on each disk will prevent a second drive from failing before the rebuild completes.
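The probability comparison above can be put into rough numbers. A back-of-envelope sketch, with assumed figures (40 disks, 7+1 RAID sets) rather than anything array-specific: in classic RAID5 a second failure only loses data if it hits the same fixed set as the first, while in a chunklet layout with enough data every pair of disks in the failure domain shares some set, so any second failure loses data.

```python
def classic_raid5_loss_prob(total_disks: int, set_size: int) -> float:
    """Classic RAID5: the second failed disk causes loss only if it
    belongs to the same fixed RAID set as the first, i.e. it is one
    of the (set_size - 1) remaining members out of (total_disks - 1)
    surviving disks."""
    return (set_size - 1) / (total_disks - 1)

def chunklet_loss_prob(total_disks: int) -> float:
    """Chunklet layout, assuming enough data that every disk pair in
    the failure domain shares at least one RAID set: any second
    failure loses data."""
    return 1.0

if __name__ == "__main__":
    n, s = 40, 8  # assumed: 40 disks, 7+1 RAID sets
    print(f"classic RAID5 : {classic_raid5_loss_prob(n, s):.1%}")  # ~17.9%
    print(f"chunklet RAID5: {chunklet_loss_prob(n):.0%}")          # 100%
```

The gap (roughly 18% vs 100% here) is what the faster chunklet rebuild has to compensate for by shrinking the window in which a second failure can occur.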
The same applies to mirroring: any 2 failed disks will likely cause data loss. Again, unlike standard RAID 1, the chunklets are spread over all the drives, trading availability for performance.