[gpfsug-discuss] GPFS de duplication

Stephen Ulmer ulmer at ulmer.org
Fri May 21 00:42:30 BST 2021


Do file clones meet the workflow requirement? That is, can you control from whence the second (and further) copies are made?
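
For reference, a minimal sketch of the file clone workflow using the mmclone command (the paths below are placeholders, and whether clones may span independent filesets within the same file system is worth verifying for your setup):

    # Create a read-only clone parent (snapshot) of user A's file
    mmclone snap /gpfs/fs1/userA/data.bin /gpfs/fs1/userA/data.bin.snap

    # Create a writable clone for user B that shares blocks with the parent
    mmclone copy /gpfs/fs1/userA/data.bin.snap /gpfs/fs1/userB/data.bin

    # Display the clone parent/child relationship
    mmclone show /gpfs/fs1/userB/data.bin

Only one physical copy of the shared blocks is stored; new blocks are allocated only when the clone is modified.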

 -- 
Stephen


> On May 20, 2021, at 9:01 AM, Andrew Beattie <abeattie at au1.ibm.com> wrote:
> 
> Dave,
> 
> Spectrum Scale does not support deduplication, but it does support compression. 
> You can, however, use block storage that supports oversubscription / thin provisioning / deduplication for data-only NSDs; we do not recommend it for metadata.
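> 
> As a rough illustration (the device, server, and pool names here are made up), the NSD stanzas for mmcrnsd would keep the metadata NSDs on conventional storage and place only the data NSDs on the thin-provisioned / deduplicating array:
> 
>     %nsd: device=/dev/mapper/meta01  nsd=meta01  servers=nsd1,nsd2  usage=metadataOnly  failureGroup=1  pool=system
>     %nsd: device=/dev/mapper/data01  nsd=data01  servers=nsd1,nsd2  usage=dataOnly      failureGroup=2  pool=data1
> 
> Compression can then be enabled per file (e.g. mmchattr --compression yes <file>) or through a migration policy.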
> 
> In your scenario, is user B planning on making changes to the data, which is why you need a copy? 
> 
> I know of customers that do this regularly with block storage such as the IBM FlashSystem product family in conjunction with IBM Spectrum Copy Data Management, but I don’t believe CDM supports file-based storage.
> 
> Regards, 
> 
> Andrew Beattie
> Technical Sales Specialist
> Storage for Data and AI
> IBM Australia and New Zealand
> P. +61 421 337 927
> E. Abeattie at au1.ibm.com
> 
>>> On 20 May 2021, at 22:58, Dave Bond <davebond7787 at gmail.com> wrote:
>>> 
>> 
>> 
>> 
>> Hello
>> 
>> As part of a project I am doing, I am looking at whether there are any deduplication options for GPFS. I see there is no native dedupe for the filesystem. The scenario would be that user A creates a file or folder and user B takes a copy within the same filesystem, though in separate independent filesets. The intention would be to store one copy. So I was wondering ...
>> 
>> 1) Is this planned to be implemented in GPFS in the future?
>> 2) Is anyone using any other solutions that have good GPFS integration?
>> 
>> Dave
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss