[gpfsug-discuss] IBM Spectrum Scale transparent cloud tiering

Oesterlin, Robert Robert.Oesterlin at nuance.com
Fri Jan 29 15:00:12 GMT 2016


Without getting into a whole lot of detail: the service is not based on the existing DMAPI interface. It uses the Cluster Export Services (CES) nodes in GPFS 4.2 to perform the work. A process running on these nodes is configured to use a cloud provider, and it performs the data migration between the cloud and GPFS. Migration can be triggered by an automated process or by manually applying a policy.
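For anyone unfamiliar with what "manually applying a policy" looks like, here is a minimal sketch of a GPFS ILM policy that migrates cold files to an external pool. The pool name, rule names, and the EXEC script path are placeholders for illustration, not the actual interface shipped with the cloud tiering service:

    /* Define an external pool backed by the cloud tier. The EXEC path
       below is a hypothetical placeholder for the interface script. */
    RULE EXTERNAL POOL 'cloudpool' EXEC '/path/to/cloud/interface/script'

    /* Migrate files not accessed for 30 days out to the cloud tier. */
    RULE 'age-out' MIGRATE FROM POOL 'system'
        TO POOL 'cloudpool'
        WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

A policy file like this would be applied on demand with mmapplypolicy <fsname> -P <policyfile>, which is what distinguishes manual policy migration from the automated path.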

I’m not sure what level of technical detail I can share on the mailing list – if you reply to Rob Basham’s post on developerWorks, I’m sure he can fill in the level of detail you need.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid

From: "service at metamodul.com<mailto:service at metamodul.com>" <service at metamodul.com<mailto:service at metamodul.com>>
Date: Friday, January 29, 2016 at 8:46 AM
To: Robert Oesterlin <Robert.Oesterlin at nuance.com<mailto:Robert.Oesterlin at nuance.com>>, gpfsug main discussion list <gpfsug-discuss at spectrumscale.org<mailto:gpfsug-discuss at spectrumscale.org>>
Subject: Re: [gpfsug-discuss] IBM Spectrum Scale transparent cloud tiering

Hi Robert,
I was referring to your posting, I assume ^_^
Note that the following is from what I know; since I have not had any chance to work
with GPFS in the last two years, my knowledge may be outdated.
The current GPFS tiering options depend on DMAPI, of which I am not a big fan,
since it has had some limitations in the past.
In the past I read some talk about "lightweight callbacks" - that is the name as I
remember it - which could hook into the open or write stream
(to/from a GPFS file system).
Thus, if the new solution is still based on DMAPI, no further info is
required. If not, I would like to know a little bit more ... if possible.
Cheers
Hajo
