De-dupe + compress all volumes .. practical ?

markinnz
Posts: 136
Joined: Tue Mar 29, 2016 9:32 pm

De-dupe + compress all volumes .. practical ?

Post by markinnz »

Hiya,

We have some shiny new Primera 650s, which HPE have assured us can actually do de-dupe and compression without crashing :-)

How practical is it to just make all volumes de-duplicated + compressed and let the array handle it? Will I see CPU issues?
- volumes with data sets known to be unsuitable for compression would just be thin provisioned, though.

Data is mostly VMware with some older, large Oracle databases on HP-UX (VxFS). The array being migrated from has about 1 PiB of data on it, all thin provisioned (because of crashes etc.), spread over about 204 x 3.8 TiB SSDs.
The new Primera is all 7 TiB SSD and NVMe drives (96 of them); sales say the compaction of the new array will make up the space.
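As a rough sanity check on that claim (back-of-the-envelope only, using the drive counts and sizes above and ignoring RAID/sparing overhead on both sides):

```python
# Rough raw-capacity comparison, old array vs. new Primera 650.
# Drive counts/sizes from the description above; RAID and sparing overhead ignored.
old_raw_tib = 204 * 3.8   # ~775 TiB raw on the existing array
new_raw_tib = 96 * 7.0    # ~672 TiB raw on the new Primera

# Extra compaction the new array must deliver, on top of whatever the old
# array already achieves, just to hold the same physical footprint.
extra_compaction_needed = old_raw_tib / new_raw_tib
print(f"old raw: {old_raw_tib:.0f} TiB, new raw: {new_raw_tib:.0f} TiB")
print(f"extra compaction needed: {extra_compaction_needed:.2f}:1")
# -> roughly 1.15:1, which dedupe + compression on mostly-VMware data should
#    clear, but it seems worth verifying per volume rather than trusting sales.
```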

Thanks!
MammaGutt
Posts: 1577
Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe

Re: De-dupe + compress all volumes .. practical ?

Post by MammaGutt »

markinnz wrote:Hiya,

We have some shiny new Primera 650s, which HPE have assured us can actually do de-dupe and compression without crashing :-)

How practical is it to just make all volumes de-duplicated + compressed and let the array handle it? Will I see CPU issues?
- volumes with data sets known to be unsuitable for compression would just be thin provisioned, though.

Data is mostly VMware with some older, large Oracle databases on HP-UX (VxFS). The array being migrated from has about 1 PiB of data on it, all thin provisioned (because of crashes etc.), spread over about 204 x 3.8 TiB SSDs.
The new Primera is all 7 TiB SSD and NVMe drives (96 of them); sales say the compaction of the new array will make up the space.

Thanks!


I would say: try it.

If the volume compaction is less than 1.2:1, I probably wouldn't do data reduction. Whether you see CPU issues depends on the load: for anything compressed, the CPU needs to decompress on read and compress on write. But the 650 is a pretty powerful system.
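A minimal sketch of applying that 1.2:1 rule of thumb per volume; the volume names and sizes below are made up, and on a real array you would take the exported and used figures from showvv -space instead of hard-coding them:

```python
# Hypothetical per-volume figures: exported (virtual) size vs. space actually
# consumed. On a real 3PAR/Primera you would pull these from "showvv -space".
volumes = {
    "vmware_ds01": {"virtual_gib": 16384, "used_gib": 9800},
    "oracle_redo": {"virtual_gib": 4096,  "used_gib": 3900},
    "fileshare01": {"virtual_gib": 8192,  "used_gib": 5100},
}

THRESHOLD = 1.2  # rule of thumb above: below ~1.2:1 compaction, skip data reduction

for name, v in volumes.items():
    compaction = v["virtual_gib"] / v["used_gib"]
    verdict = "dedupe+compress" if compaction >= THRESHOLD else "thin only"
    print(f"{name:12s} compaction {compaction:4.2f}:1 -> {verdict}")
```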
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
cali
Posts: 214
Joined: Tue Oct 07, 2014 8:34 am
Location: Germany

Re: De-dupe + compress all volumes .. practical ?

Post by cali »

Primera does it in a different way than 3PAR.
First it tries to dedupe, because that is fast and done by the ASIC.
If the result is good, the processed data goes to disk.
If it is not good, it goes to the compression engine, which is done in software by the CPU.
In almost all cases you will either have good dedupe or good compression.
Compression is good for databases; all other data is (mostly) better suited to dedupe.
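A simplified sketch of that decision flow as I read it (the threshold, chunk size and function names are purely illustrative, not how Primera implements it internally):

```python
import hashlib
import zlib

CHUNK_SIZE = 16 * 1024            # 16 KiB pages, as mentioned later in the thread
dedupe_store: set[bytes] = set()  # fingerprints of chunks already written

def write_chunk(chunk: bytes) -> str:
    """Mimic the 'dedupe first, compress only the unique chunks' flow."""
    fingerprint = hashlib.sha256(chunk).digest()  # hashing is what the ASIC offloads
    if fingerprint in dedupe_store:
        return "deduped (nothing new written)"
    dedupe_store.add(fingerprint)
    compressed = zlib.compress(chunk)             # compression falls to the CPU
    if len(compressed) < 0.8 * len(chunk):        # illustrative "worth it" cut-off
        return f"compressed to {len(compressed)} bytes"
    return "stored as-is"

print(write_chunk(b"A" * CHUNK_SIZE))   # unique but highly compressible
print(write_chunk(b"A" * CHUNK_SIZE))   # identical chunk -> deduped
```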
MammaGutt
Posts: 1577
Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe

Re: De-dupe + compress all volumes .. practical ?

Post by MammaGutt »

cali wrote:Primera does it in a different way than 3PAR.
First it tries to dedupe, because that is fast and done by the ASIC.
If the result is good, the processed data goes to disk.
If it is not good, it goes to the compression engine, which is done in software by the CPU.
In almost all cases you will either have good dedupe or good compression.
Compression is good for databases; all other data is (mostly) better suited to dedupe.


Pretty sure that works in the exact same way as on 3PAR :) ... except Primera has better control over background processes, more of the processes are multi-threaded, and it has a lot more horsepower :D
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
RitonLaBevue
Posts: 390
Joined: Fri Jun 27, 2014 2:01 am

Re: De-dupe + compress all volumes .. practical ?

Post by RitonLaBevue »

I think you are right, MammaGutt.
On 3PAR, data is deduped, then compressed and packed together with other compressed data until it fills a 16k page, and then written to disk.
But that only works if the VV has both dedup and compression enabled.
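A toy illustration of that packing step (the chunk sizes and greedy packing are invented for the example; the real on-disk layout is HPE's and not public):

```python
import zlib

PAGE_SIZE = 16 * 1024  # 16 KiB backing page, per the description above

def pack_into_pages(chunks: list[bytes]) -> list[list[bytes]]:
    """Compress each (already-deduped) chunk and greedily pack the results
    into 16 KiB pages before they would be written to disk."""
    pages: list[list[bytes]] = [[]]
    free = PAGE_SIZE
    for chunk in chunks:
        blob = zlib.compress(chunk)
        if len(blob) > free:       # current page is full -> start a new one
            pages.append([])
            free = PAGE_SIZE
        pages[-1].append(blob)
        free -= len(blob)
    return pages

# Three 16 KiB chunks of compressible data end up sharing a single page.
chunks = [bytes([i]) * (16 * 1024) for i in range(3)]
print(len(pack_into_pages(chunks)), "page(s) used for", len(chunks), "chunks")
```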