[gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated

Dean Hildebrand dhildeb at us.ibm.com
Fri Nov 2 19:57:52 GMT 2012



Hi Orlando,

Thanks for all of your feedback; many great suggestions. Sorry for the late
response; I've been trying to go through and digest all the comments from
the user group meeting.  I'll do my best to forward your suggestions
internally.

The one thing I wanted to comment on was that "hot file" identification was
shipped in GPFS 3.5.0.3.

Here is a link to the docs discussing it:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20
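
For anyone on the list who hasn't tried it yet: file heat is enabled
cluster-wide and then exposed as a FILE_HEAT attribute in the policy
language. A minimal sketch of what a heat-driven migration rule might look
like (the pool names 'system' and 'fast', and the option values, are
illustrative placeholders; check the docs above for the exact syntax in
your release):

```
/* Prerequisite (run once; values here are illustrative):
   mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10 */

/* Migrate files into a faster pool, hottest first.
   Pool names are placeholders for this sketch. */
RULE 'hot2fast'
  MIGRATE FROM POOL 'system'
  WEIGHT(FILE_HEAT)
  TO POOL 'fast'
```

A rule like this would then be driven in the usual way via mmapplypolicy.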


Dean Hildebrand
Research Staff Member - Storage Systems
IBM Almaden Research Center

On 25/09/12 14:05, Jez Tucker wrote:
> Hello all
>
>    Firstly can I thank all who attended UG #6.  We had a great turnout,
and the opportunity to network with more people from IBM was most welcome.
>
> I have uploaded the presentations from UG to this small, catchy URL:
http://goo.gl/n1in1
> [Bar the SCCS presentation, awaiting clearance].
>
> Please have a read of the presentations.
>
> IBM Almaden Labs welcome your feedback regarding pNFS and Panache as well
as FRQs etc.
>
> For instance, one FRQ idea bandied around was a GRIO/QoS implementation
for GPFS : E.G.: http://goo.gl/zjkN8
> It would be most helpful if a couple of lines of use-case were alongside
each of these.
>
> If these are messaged back to the list for healthy debate or sent to me
directly I'll put them on the UG website for Almaden Labs to peruse/discuss
with us.
>
> Regards,
>
>      Jez
>
> p.s. I'll also start to solicit previous presentations for UG < #5, so if
you were a speaker, please get in touch.
> ---

Thanks for the great meeting, Jez, and to Claire et al. at OCF.

On feature requests, I think one desirable feature request discussed at
the meeting was for "better" performance monitoring tools.

A quick think through the things on my plate which would be eased with
new/changed features in GPFS led me to this wishlist:

  - ability to change the designated NSD servers for an NSD without
unmounting the filesystem everywhere

  - expansion of the AFM toolchain, including the following to assist
with migration of data between filesystems:
    - ability to set a pre-existing fileset as a "cache" of an empty
'home' fileset with AFM, allowing for a push of the data from the
"cache" fileset/filesystem to the "home" target fileset/filesystem as a
data migration strategy
    - ability to remove an AFM relationship between filesets, preserving
data in the 'cache' fileset (and making it, independently, a 'live'
fileset)
    - ability to "flip" the 'home'<->'cache' relationship, resulting in
a flush from the new 'cache' fileset to the new 'home' fileset

  - better documentation (and, indeed, automation/automagic) on making
best use of available memory within NSD servers

  - read caching of data blocks within an NSD server's memory (when
acting in "server" mode in a multi-cluster environment where the client
nodes do not have direct block access to the disks)

  - "hot file" identification tools/data for policy based HSM migration

  - some easy and non-invasive method for logging file and folder
deletions (for the purposes of expiring backup data without using a
separate database of files, in my case)

  - better licensing model (dare I say it: capacity-based?)


I'd love to be able to change the blocksize on an existing filesystem
too, but I imagine that's not possible.


--
Orlando



--
    Dr Orlando Richards
   Information Services
IT Infrastructure Division
        Unix Section
     Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.

