[gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated
orlando.richards at ed.ac.uk
Tue Nov 6 09:21:09 GMT 2012
Excellent stuff - thanks Dean!
On 02/11/12 19:57, Dean Hildebrand wrote:
> Hi Orlando,
> Thanks for all of your feedback, many great suggestions. Sorry for the
> late response, I've been trying to go through and digest all the
> comments from the user group meeting. I'll do my best to forward your
> suggestions internally.
> The one thing I wanted to comment on was that "hot file" identification
> shipped in GPFS 3.5.
> Here is a link to the docs discussing it:
> Dean Hildebrand
> Research Staff Member - Storage Systems
> IBM Almaden Research Center
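The file-heat data Dean refers to is consumed from the GPFS policy language, where each file exposes a FILE_HEAT attribute once tracking is enabled via mmchconfig. A rough sketch of a heat-weighted migration rule follows; the pool names and thresholds here are hypothetical, and the exact syntax should be checked against the product documentation:

```
/* Once file heat tracking is enabled, e.g.:         */
/*   mmchconfig fileHeatPeriodMinutes=1440           */
/* the coldest files can be drained from a fast pool */
/* when it passes an occupancy threshold.            */
RULE 'repack' MIGRATE FROM POOL 'fast'
     THRESHOLD(85,70)
     WEIGHT(-FILE_HEAT)      /* coldest files first */
     TO POOL 'nearline'
```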
> On 25/09/12 14:05, Jez Tucker wrote:
> > Hello all,
> > Firstly, can I thank all who attended UG #6. We had a great turn
> > out, and the opportunity to network with more people from IBM was most
> > welcome.
> > I have uploaded the presentations from UG to this small, catchy URL:
> > [Bar the SCCS presentation, awaiting clearance.]
> > Please have a read of the presentations.
> > IBM Almaden Labs welcome your feedback regarding pNFS and Panache, as
> > well as FRQs etc.
> > For instance, one FRQ idea bandied around was a GRIO/QoS
> > implementation for GPFS, e.g.: http://goo.gl/zjkN8
> > It would be most helpful if a couple of lines of use-case were
> > alongside each of these.
> > If these are messaged back to the list for healthy debate, or sent to
> > me directly, I'll put them on the UG website for Almaden Labs to
> > peruse/discuss with us.
> > Regards,
> > Jez
> > p.s. I'll also start to solicit previous presentations for UG < #5,
> > so if you were a speaker, please get in touch.
> > ---
> Thanks for the great meeting Jez, and Claire et al at OCF.
> On feature requests, I think one desirable feature request discussed at
> the meeting was for "better" performance monitoring tools.
> A quick think through the things on my plate which would be eased with
> new/changed features in GPFS led me to this wishlist:
> - ability to change the designated NSD servers for an NSD without
>   unmounting the filesystem everywhere
> - expansion of the AFM toolchain, including the following to assist
>   with migration of data between filesystems:
>     - ability to set a pre-existing fileset as a "cache" of an empty
>       'home' fileset with AFM, allowing for a push of the data from the
>       "cache" fileset/filesystem to the "home" target fileset/filesystem
>       as a data migration strategy
>     - ability to remove an AFM relationship between filesets, preserving
>       data in the 'cache' fileset (and making it, independently, a
>       'live' fileset)
>     - ability to "flip" the 'home' <-> 'cache' relationship, resulting
>       in a flush from the new 'cache' fileset to the new 'home' fileset
> - better documentation (and, indeed, automation/automagic) on making
>   best use of available memory within NSD servers
> - read caching of data blocks within an NSD server's memory (when
>   acting in "server" mode in a multi-cluster environment where the
>   client nodes do not have direct block access to the disks)
> - "hot file" identification tools/data for policy-based HSM migration
> - some easy and non-invasive method for logging file and folder
>   deletions (for the purposes of expiring backup data without using a
>   separate database of files, in my case)
> - better licensing model (dare I say it - capacity-based?)
> I'd love to be able to change the blocksize on an existing filesystem
> too, but I imagine that's not possible.
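On the deletion-logging wish: absent native support, one blunt workaround is diffing successive path listings taken by scheduled scans. A minimal Python sketch of that idea (nothing GPFS-specific; a real deployment would want the filesystem's fast inode scan rather than os.walk):

```python
import os

def snapshot(root):
    """Return the set of every file and directory path under root."""
    paths = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        paths.add(dirpath)
        for name in filenames:
            paths.add(os.path.join(dirpath, name))
    return paths

def deleted_since(old, new):
    """Paths present in the old snapshot but gone from the new one."""
    return sorted(old - new)
```

Run snapshot() before each backup cycle, persist the set, and feed deleted_since() to the backup-expiry step.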
> Dr Orlando Richards
> Information Services
> IT Infrastructure Division
> Unix Section
> Tel: 0131 650 4994
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org