Hiya,
We have some shiny new Primera 650s, which HPE have assured us can actually do de-dupe and compression without crashing.
How practical is it to just make all volumes de-duplicated + compressed and let the array handle it? Will I see CPU issues?
- Volumes with data sets known to be unsuitable for compression would just be plain thin.
Data is mostly VMware, plus some older, large Oracle on HP-UX (VxFS). The array being migrated from has about 1 PiB of data on it, all thin provisioned (because of crashes etc.), on about 204 x 3.8 TiB SSDs.
The new Primera is all 7 TiB SSD and NVMe disks (96 of them); sales say the compaction on the new array will make up the space.
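For what it's worth, this is the back-of-envelope check I have been doing (Python, plain arithmetic; the 70% usable-after-RAID/sparing factor is just my own guess, not a figure from HPE):

# Back-of-envelope: what compaction ratio would the new array need?
# Assumptions (mine, not HPE's): 96 x 7 TiB drives, and roughly 70% of raw
# left as usable capacity after RAID and sparing -- adjust to your config.
TIB = 2**40

raw_new = 96 * 7 * TIB            # raw capacity of the new Primera
usable_new = raw_new * 0.70       # assumed usable after RAID/sparing overhead
data_footprint = 1024 * TIB       # ~1 PiB currently written on the old array

required_ratio = data_footprint / usable_new
print(f"Usable (assumed): {usable_new / TIB:.0f} TiB")
print(f"Required compaction ratio: {required_ratio:.2f}:1")

By that maths they are betting on a bit over 2:1 compaction across the board, which is partly why I'm asking.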
Thanks!
De-dupe + compress all volumes .. practical ?
Re: De-dupe + compress all volumes .. practical ?
markinnz wrote:Hiya,
We have some shiny new Primera 650s, which HPE have assured us can actually do de-dupe and compression without crashing.
How practical is it to just make all volumes de-duplicated + compressed and let the array handle it? Will I see CPU issues?
- Volumes with data sets known to be unsuitable for compression would just be plain thin.
Data is mostly VMware, plus some older, large Oracle on HP-UX (VxFS). The array being migrated from has about 1 PiB of data on it, all thin provisioned (because of crashes etc.), on about 204 x 3.8 TiB SSDs.
The new Primera is all 7 TiB SSD and NVMe disks (96 of them); sales say the compaction on the new array will make up the space.
Thanks!
I would say: try it.
If a volume's compaction is less than 1.2:1, I probably wouldn't enable data reduction on it. Whether you see CPU issues depends on the load: anything compressed has to be decompressed on read and compressed on write. But the 650 is a pretty powerful system.
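As a rough illustration of that 1.2:1 cut-off (the volume names and space numbers below are made up; the "written" and "stored" figures would come from whatever space reporting you use):

# Decide per volume whether data reduction is worth keeping, using the
# ~1.2:1 compaction rule of thumb. All figures here are hypothetical.
volumes = {
    "vmware_ds01": {"written_gib": 8192, "stored_gib": 3400},
    "oracle_redo": {"written_gib": 2048, "stored_gib": 1900},
}

for name, v in volumes.items():
    ratio = v["written_gib"] / v["stored_gib"]
    action = "keep dedupe+compression" if ratio >= 1.2 else "convert to plain thin"
    print(f"{name}: {ratio:.2f}:1 -> {action}")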
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: De-dupe + compress all volumes .. practical ?
Primera does it in a different way than 3PAR.
First it tries dedupe, because that is fast and done by the ASIC.
If the result is good, the processed data goes to disk.
If it is not good, the data goes to the compression engine, which runs on the CPU (software).
In almost all cases you will get either good dedupe or good compression.
Compression is good for databases; most other data is (mostly) better served by dedupe.
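As a toy sketch of that flow (only the decision logic, not HPE's implementation; hashlib stands in for the ASIC fingerprinting and zlib for the software compression engine):

import hashlib
import zlib

seen_fingerprints = set()  # stand-in for the array's dedupe fingerprint store

def reduce_block(block: bytes) -> tuple[str, bytes]:
    fp = hashlib.sha256(block).digest()
    if fp in seen_fingerprints:
        return "dedupe", b""             # duplicate: keep only a reference
    seen_fingerprints.add(fp)
    compressed = zlib.compress(block)    # software/CPU compression path
    if len(compressed) < len(block):
        return "compress", compressed
    return "store_raw", block            # incompressible: written as-is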
Re: De-dupe + compress all volumes .. practical ?
cali wrote:Primera does it in a different way than 3PAR.
First it tries dedupe, because that is fast and done by the ASIC.
If the result is good, the processed data goes to disk.
If it is not good, the data goes to the compression engine, which runs on the CPU (software).
In almost all cases you will get either good dedupe or good compression.
Compression is good for databases; most other data is (mostly) better served by dedupe.
Pretty sure that works in the exact same way as 3PAR .... except Primera has better control over background processes, more of the processes are multi-threaded, and it has a lot more horsepower.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
Re: De-dupe + compress all volumes .. practical ?
I think you are right, MammaGutt.
On 3PAR, data is deduped, then compressed, then added to other compressed data until it fills a 16k page, and then written to disk.
But that only works if the VV has both dedupe and compression enabled.
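A toy sketch of that packing step (purely illustrative, not the real 3PAR/Primera on-disk format; it just shows compressed chunks being collected until a 16 KiB page is full):

# Collect compressed chunks into 16 KiB pages and flush a page once the
# next chunk would overflow it. Illustrative only.
PAGE_SIZE = 16 * 1024

def pack_into_pages(compressed_chunks):
    pages, current, used = [], [], 0
    for chunk in compressed_chunks:
        if used + len(chunk) > PAGE_SIZE and current:
            pages.append(current)        # page is full: write it out
            current, used = [], 0
        current.append(chunk)
        used += len(chunk)
    if current:
        pages.append(current)            # flush the last partial page
    return pages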