From stuartb at 4gh.net Sat Aug 1 22:45:40 2015 From: stuartb at 4gh.net (Stuart Barkley) Date: Sat, 1 Aug 2015 17:45:40 -0400 (EDT) Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: On Tue, 28 Jul 2015 at 12:28 -0000, Martin Gasthuber wrote: > In our setup, the files gets copied to a (user accessible) GPFS > instance which controls the access by NFSv4 ACLs (only !) and from > time to time, we had to modify these ACLs (add/remove user/group > etc.). Doing a (non policy-run based) simple approach, changing 9 > million files requires ~200 hours to run - which we consider not > really a good option. Just a thought, but instead of applying the ACLs to the files individually, could you apply the ACLs on a few parent directories instead? There are certainly issues to consider (current directory structure, actual security model, any write permissions, etc), but this might simplify things considerably. Stuart -- I've never been lost; I was once bewildered for three days, but never lost! -- Daniel Boone From makaplan at us.ibm.com Mon Aug 3 18:05:51 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 3 Aug 2015 13:05:51 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Reality check on GPFS ACLs. I think it would be helpful to understand how ACLs are implemented in GPFS - - All ACLs for a file sytem are stored as records in a special file. - Each inode that has an ACL (more than just the classic Posix mode bits) has a non-NULL offset to the governing ACL in the special acl file. - Yes, inodes with identical ACLs will have the same ACL offset value. Hence in many (most?) use cases, the ACL file can be relatively small - it's size is proportional to the number of unique ACLs, not the number of files. And how and what mmapplypolicy can do for you - mmapplypolicy can rapidly scan the directories and inodes of a file system. This scanning bypasses most locking regimes and takes advantage of both parallel processing and streaming full tracks of inodes. So it is good at finding files (inodes) that satifsy criteria that can be described by an SQL expression over the attributes stored in the inode. BUT to change the attributes of any particular file we must use APIs and code that respect all required locks, log changes, etc, etc. Those changes can be "driven" by the execution phase of mmapplypolicy, in parallel - but overheads are significantly higher per file, than during the scanning phases of operation. NOW to the problem at hand. It might be possible to improve ACL updates somewhat by writing a command that processes multiple files at once, still using the same APIs used by the mmputacl command. Hmmm.... it wouldn't be very hard for GPFS development team to modify the mmputacl command to accept a list of files... I see that the Linux command setfacl does accept multiple files in its argument list. Finally and not officially supported nor promised nor especially efficient .... try getAcl() as a GPFS SQL policy function. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at gpfsug.org Tue Aug 4 08:32:31 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Tue, 04 Aug 2015 08:32:31 +0100 Subject: [gpfsug-discuss] GPFS UG User Group@USA Message-ID: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> As many of you know, there has been some interest in creating a USA based section of the group. 
Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. Short Bio from Kristy: "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. LinkedIn Profile: www.linkedin.com/in/kristykallbackrose " We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. Kristy will be following up later with some announcements about the USA group activities. Simon GPFS UG Chair From kraemerf at de.ibm.com Tue Aug 4 12:28:24 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Tue, 4 Aug 2015 13:28:24 +0200 Subject: [gpfsug-discuss] Whitepaper Spectrum Scale and ownCloud + plus Webinar on large scale ownCloud project In-Reply-To: References: Message-ID: 1) Here is the link for latest ISV solution with IBM Spectrum Scale and ownCloud... https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_on-premise-file-syn-share-owncloud 2) Webinar on large scale ownCloud+GPFS project running in Germany Sciebo Scales Enterprise File Sync and Share for 500K Users: A Proven Solution from ownCloud and IBM Spectrum Storage. 
https://cc.readytalk.com/cc/s/registrations/new?cid=y5gn9c445u2k -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany From kallbac at iu.edu Wed Aug 5 03:56:32 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Tue, 4 Aug 2015 22:56:32 -0400 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> Message-ID: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Hello, Thanks Simon and all for moving the USA-based group forward. You?ve got a great user group in the UK and am grateful it?s being extended. I?m looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user lead session --sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) wrote: > > As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. > > We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. > > Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. > > Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). > > I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. 
Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. > > Short Bio from Kristy: > > "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. > > LinkedIn Profile: www.linkedin.com/in/kristykallbackrose > " > > We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: > > A paragraph covering their credentials; > A paragraph covering what they would bring to the group; > A paragraph setting out their vision for the group for the next two years. > > Note that this should be a GPFS customer based in the USA. > > If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. > > Kristy will be following up later with some announcements about the USA group activities. > > Simon > GPFS UG Chair > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Robert.Oesterlin at nuance.com Wed Aug 5 12:12:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 5 Aug 2015 11:12:17 +0000 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Message-ID: <315FAEF7-DEC0-4252-BA3B-D318DE05933C@nuance.com> Hi Kristy Thanks for stepping up to the duties for the USA based user group! Getting the group organized is going to be a challenge and I?m happy to help out where I can. Regarding some of the planning for SC15, I wonder if you could drop me a note off the mailing list to discuss this, since I have been working with some others at IBM on a BOF proposal for SC15 and these two items definitely overlap. My email is robert.oesterlin at nuance.com (probably end up regretting putting that out on the mailing list at some point ? sigh) Bob Oesterlin Sr Storage Engineer, Nuance Communications From: > on behalf of Kristy Kallback-Rose Reply-To: gpfsug main discussion list Date: Tuesday, August 4, 2015 at 9:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS UG User Group at USA Hello, Thanks Simon and all for moving the USA-based group forward. You?ve got a great user group in the UK and am grateful it?s being extended. I?m looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. 
Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user lead session --sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) > wrote: As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. Short Bio from Kristy: "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. 
LinkedIn Profile: www.linkedin.com/in/kristykallbackrose " We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. Kristy will be following up later with some announcements about the USA group activities. Simon GPFS UG Chair _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 5 20:23:45 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 5 Aug 2015 19:23:45 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: Just picking this topic back up. Does anyone have any comments/thoughts on these questions? Thanks Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Luke Raimbach [Luke.Raimbach at crick.ac.uk] Sent: 20 July 2015 08:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets Can I add to this list of questions? Apparently, one cannot set immutable, or append-only attributes on files / directories within an AFM cache. However, if I have an independent writer and set immutability at home, what does the AFM IW cache do about this? Or does this restriction just apply to entire filesets (which would make more sense)? Cheers, Luke. -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: 19 July 2015 11:45 To: gpfsug main discussion list Subject: [gpfsug-discuss] 4.1.1 immutable filesets I was wondering if anyone had looked at the immutable fileset features in 4.1.1? In particular I was looking at the iam compliant mode, but I've a couple of questions. * if I have an iam compliant fileset, and it contains immutable files or directories, can I still unlink and delete the filset? * will HSM work with immutable files? I.e. Can I migrate files to tape and restore them? The docs mention that extended attributes can be updated internally by dmapi, so I guess HSM might work? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 14:46:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 13:46:04 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets Message-ID: On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" wrote: >* if I have an iam compliant fileset, and it contains immutable files or >directories, can I still unlink and delete the filset? So just to answer my own questions here. (Actually I tried in non-compliant mode, rather than full compliance, but I figured this was the mode I actually need as I might need to reset the immutable time back earlier to allow me to delete something that shouldn't have gone in). Yes, I can both unlink and delete an immutable fileset which has immutable files which are non expired in it. >* will HSM work with immutable files? I.e. Can I migrate files to tape >and restore them? The docs mention that extended attributes can be >updated internally by dmapi, so I guess HSM might work? And yes, HSM files work. I created a file, made it immutable, backed up, migrated it: $ mmlsattr -L BHAM_DATASHARE_10.zip file name: BHAM_DATASHARE_10.zip metadata replication: 2 max 2 data replication: 2 max 2 immutable: yes appendOnly: no indefiniteRetention: no expiration Time: Fri Aug 7 14:45:00 2015 flags: storage pool name: tier2 fileset name: rds-projects-2015-thompssj-01 snapshot name: creation time: Fri Aug 7 14:38:30 2015 Windows attributes: ARCHIVE OFFLINE READONLY Encrypted: no I was then able to recall the file. Simon From wsawdon at us.ibm.com Fri Aug 7 16:13:31 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:13:31 -0700 Subject: [gpfsug-discuss] Hello Message-ID: Hello, Although I am new to this user group, I've worked on GPFS at IBM since before it was a product.! I am interested in hearing from the group about the features you like or don't like and of course, what features you would like to see. Wayne Sawdon STSM; IBM Research Manager | Cloud Data Management Phone: 1-408-927-1848 E-mail: wsawdon at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Aug 7 16:27:33 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:27:33 -0700 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: Message-ID: > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. 
-Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Aug 7 16:36:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:36:03 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: I did only try in nc mode, so possibly if its fully compliant it wouldn't have let me delete the fileset. One other observation. As a user Id set the atime and chmod -w the file. Once it had expired, I was then unable to reset the atime into the future. (I could as root). I'm not sure what the expected behaviour should be, but I was sorta surprised that I could initially set the time as the user, but then not be able to extend even once it had expired. Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Wayne Sawdon [wsawdon at us.ibm.com] Sent: 07 August 2015 16:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne From S.J.Thompson at bham.ac.uk Fri Aug 7 16:56:17 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:56:17 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. Does anyone have a script to do this already? Surely there is a better way? Thanks Simon From rclee at lbl.gov Fri Aug 7 17:30:21 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 09:30:21 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: Message-ID: <55C4DD1D.7000402@lbl.gov> We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. 
Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 17:49:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 16:49:03 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From ckerner at ncsa.uiuc.edu Fri Aug 7 17:41:14 2015 From: ckerner at ncsa.uiuc.edu (Chad Kerner) Date: Fri, 7 Aug 2015 11:41:14 -0500 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: <55C4DD1D.7000402@lbl.gov> Message-ID: <20150807164114.GA29652@logos.ncsa.illinois.edu> You can use the mmlsfileset DEVICE -L option to see the maxinodes and allocated inodes. I have a perl script that loops through all of our file systems every hour and scans for it. 
If one is nearing capacity(tunable threshold in the code), it automatically expands it by a set amount(also tunable). We add 10% currently. This also works on file systems that have no filesets as it appears as the root fileset. I can check with my boss to see if its ok to post it if you want it. Its about 40 lines of perl. Chad -- Chad Kerner, Systems Engineer Storage Enabling Technologies National Center for Supercomputing Applications On Fri, Aug 07, 2015 at 09:30:21AM -0700, Rei Lee wrote: > We have the same problem when we started using independent fileset. I think > this should be a RFE item that IBM should provide a tool similar to 'mmdf > -F' to show the number of free/used inodes for an independent fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > >I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > > >We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > > >mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > > >The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > > >Does anyone have a script to do this already? > > > >Surely there is a better way? > > > >Thanks > > > >Simon > >_______________________________________________ > >gpfsug-discuss mailing list > >gpfsug-discuss at gpfsug.org > >http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From makaplan at us.ibm.com Fri Aug 7 21:12:05 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 16:12:05 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. 
> > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From martin.gasthuber at desy.de Fri Aug 7 21:41:08 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Fri, 7 Aug 2015 22:41:08 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) best regards, Martin > On 3 Aug, 2015, at 19:05, Marc A Kaplan wrote: > > Reality check on GPFS ACLs. > > I think it would be helpful to understand how ACLs are implemented in GPFS - > > - All ACLs for a file sytem are stored as records in a special file. > - Each inode that has an ACL (more than just the classic Posix mode bits) has a non-NULL offset to the governing ACL in the special acl file. > - Yes, inodes with identical ACLs will have the same ACL offset value. Hence in many (most?) use cases, the ACL file can be relatively small - > it's size is proportional to the number of unique ACLs, not the number of files. > > And how and what mmapplypolicy can do for you - > > mmapplypolicy can rapidly scan the directories and inodes of a file system. > This scanning bypasses most locking regimes and takes advantage of both parallel processing > and streaming full tracks of inodes. So it is good at finding files (inodes) that satifsy criteria that can > be described by an SQL expression over the attributes stored in the inode. > > BUT to change the attributes of any particular file we must use APIs and code that respect all required locks, > log changes, etc, etc. > > Those changes can be "driven" by the execution phase of mmapplypolicy, in parallel - but overheads are significantly higher per file, > than during the scanning phases of operation. > > NOW to the problem at hand. It might be possible to improve ACL updates somewhat by writing a command that processes > multiple files at once, still using the same APIs used by the mmputacl command. > > Hmmm.... it wouldn't be very hard for GPFS development team to modify the mmputacl command to accept a list of files... > I see that the Linux command setfacl does accept multiple files in its argument list. 
> > Finally and not officially supported nor promised nor especially efficient .... try getAcl() as a GPFS SQL policy function._______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From rclee at lbl.gov Fri Aug 7 21:44:23 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 13:44:23 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <55C518A7.6020605@lbl.gov> We have tried that command but it took a very long time like it was hanging so I killed the command before it finished. I was not sure if it was a bug in early 4.1.0 software but I did not open a PMR. I just ran the command again on a quiet file system and it has been 5 minutes and the command is still not showing any output. 'mmdf -F' returns very fast. 'mmlsfileset -l' does not report the number of free inodes. Rei On 8/7/15 1:12 PM, Marc A Kaplan wrote: > Try > > mmlsfileset filesystem_name -i > > > Marc A Kaplan > > > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > Sent by: gpfsug-discuss-bounces at gpfsug.org > ------------------------------------------------------------------------ > > > > > Hmm. I'll create an RFE next week then. (just in case someone comes > back with a magic flag we don't know about!). > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org > [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] > Sent: 07 August 2015 17:30 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > > We have the same problem when we started using independent fileset. I > think this should be a RFE item that IBM should provide a tool similar > to 'mmdf -F' to show the number of free/used inodes for an independent > fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) > wrote: > > I was just wondering if anyone had a way to return the number of > free/used inodes for an independent fileset and all its children. > > > > We recently had a case where we were unable to create new files in a > child file-set, and it turns out the independent parent had run out of > inodes. > > > > mmsf however only lists the inodes used directly in the parent > fileset, I.e. About 8 as that was the number of child filesets. > > > > The suggestion from IBM support is that we use mmdf and then add up > the numbers from all the child filesets to workout how many are > free/used in the independent fileset. > > > > Does anyone have a script to do this already? > > > > Surely there is a better way? 
> > > > Thanks > > > > Simon > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From bevans at pixitmedia.com Fri Aug 7 21:44:44 2015 From: bevans at pixitmedia.com (Barry Evans) Date: Fri, 7 Aug 2015 21:44:44 +0100 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <-2676389644758800053@unknownmsgid> -i will give you the exact used number but... Avoid running it during peak usage on most setups. It's pretty heavy, like running a -d on lssnapshot. Your best bet is from earlier posts: '-L' gives you the max and alloc. If they match, you know you're in bother soon. It's not accurate, of course, but prevention is typically the best medicine in this case. Cheers, Barry ArcaStream/Pixit On 7 Aug 2015, at 21:12, Marc A Kaplan wrote: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ------------------------------ Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? 
> > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Fri Aug 7 22:21:28 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 17:21:28 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: You asked: "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) " Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Aug 10 08:13:43 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 10 Aug 2015 07:13:43 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: Hi Marc, Thanks for this. Just to clarify the output when it mentions allocated inodes, does that mean the number used or the number allocated? I.e. If I pre-create a bunch of inodes will they appear as allocated? Or is that only when they are used by a file etc? Thanks Simon From: Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Friday, 7 August 2015 21:12 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Independent fileset free inodes Try mmlsfileset filesystem_name -i [Marc A Kaplan] From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hmm. I'll create an RFE next week then. 
(just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00002.gif Type: image/gif Size: 21994 bytes Desc: ATT00002.gif URL: From makaplan at us.ibm.com Mon Aug 10 19:14:58 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 10 Aug 2015 14:14:58 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: mmlsfileset xxx -i 1. Yes it is slow. I don't know the reasons. Perhaps someone more familiar with the implementation can comment. It's surprising to me that it is sooo much slower than mmdf EVEN ON a filesystem that only has the root fileset! 2. used: how many inodes (files) currently exist in the given fileset or fileset allocated: number of inodes "pre"allocated in the (special) file of all inodes. maximum: number of inodes that GPFS might allocate on demand, with current --inode-limit settings from mmchfileset and mmchfs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From taylorm at us.ibm.com Mon Aug 10 22:23:02 2015 From: taylorm at us.ibm.com (Michael L Taylor) Date: Mon, 10 Aug 2015 14:23:02 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: <201508102123.t7ALNZDV012260@d01av01.pok.ibm.com> This capability is available in Storage Insights, which is a Software as a Service (SaaS) storage management solution. 
You can play with a live demo and try a free 30 day trial here: https://www.ibmserviceengage.com/storage-insights/learn I could also provide a screen shot of what IBM Spectrum Control looks like when managing Spectrum Scale and how you can easily see fileset relationships and used space and inodes per fileset if interested. -------------- next part -------------- An HTML attachment was scrubbed... URL: From GARWOODM at uk.ibm.com Tue Aug 11 17:05:52 2015 From: GARWOODM at uk.ibm.com (Michael Garwood7) Date: Tue, 11 Aug 2015 16:05:52 +0000 Subject: [gpfsug-discuss] Developer Works forum post on Spectrum Scale and Spark work Message-ID: <201508111606.t7BG6Vt6005368@d06av01.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Tue Aug 11 17:53:32 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Tue, 11 Aug 2015 18:53:32 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, this was meant to be more a joke than a 'wish' - but it would be interesting for us (with the case of several millions of files having the same ACL) if there are ways/plans to treat ACLs more referenced from each of these files and having a mechanism to treat all of them in a single operation. -- Martin > On 7 Aug, 2015, at 23:21, Marc A Kaplan wrote: > > You asked: > > "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-)" > > > Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? > Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Tue Aug 11 18:59:08 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 13:59:08 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: We (myself and a few other GPFS people) are reading this and considering... Of course we can't promise anything here. I can see some ways to improve and make easier the job of finding and changing the ACLs of many files. But I think whatever we end up doing will still be, at best, a matter of changing every inode, rather than changing on ACL that all those inodes happen to point to. IOW, as a lower bound, we're talking at least as much overhead as doing chmod on the chosen files. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamiedavis at us.ibm.com Tue Aug 11 19:11:26 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Tue, 11 Aug 2015 18:11:26 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , Message-ID: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue Aug 11 20:45:56 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 15:45:56 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 21:56:34 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 21:56:34 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <55CA6182.9010507@buzzard.me.uk> On 11/08/15 19:11, James Davis wrote: > If trying the naive approach, a la > find /fs ... -exec changeMyACL {} \; > or > /usr/lpp/mmfs/samples/ilm/mmfind /fs ... -exec changeMyACL {} \; > #shameless plug for my mmfind tool, available in the latest release of > GPFS. See the associated README. > I think the cost will be prohibitive. I believe a relatively strong > internal lock is required to do ACL changes, and consequently I think > the performance of modifying the ACL on a bunch of files will be painful > at best. I am not sure what it is like in 4.x but up to 3.5 the mmputacl was some sort of abomination of a command. It could only set the ACL for a single file and if you wanted to edit rather than set you had to call mmgetacl first, manipulate the text file output and then feed that into mmputacl. So if you need to set the ACL's on a directory hierarchy over loads of files then mmputacl is going to be exec'd potentially millions of times, which is a massive overhead just there. If only because mmputacl is a ksh wrapper around tsputacl. Execution time doing this was god dam awful. So I instead wrote a simple C program that used the ntfw library call and the gpfs API to set the ACL's it was way way faster. Of course I was setting a very limited number of different ACL's that where required to support a handful of Samba share types after the data had been copied onto a GPFS file system. As I said previously what is needed is an "mm" version of the FreeBSD setfacl command http://www.freebsd.org/cgi/man.cgi?format=html&query=setfacl(1) That has the -R/--recursive option of the Linux setfacl command which uses the fast inode scanning GPFS API. You want to be able to type something like mmsetfacl -mR g:www:rpaRc::allow foo What you don't want to be doing is calling the abomination of a command that is mmputacl. Frankly whoever is responsible for that command needs taking out the back and given a good kicking. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
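For anyone who wants to experiment with the approach described above (walking the tree with nftw() and writing ACLs through the GPFS API instead of exec'ing mmputacl once per file), a rough, unsupported sketch follows. It is not the actual program mentioned in the thread: the tool name, buffer size and error handling are illustrative assumptions, and the exact gpfs_getacl()/gpfs_putacl() buffer contract should be checked against gpfs.h on your release. The idea is to fetch the ACL of a template file once, in opaque format, and then stamp that same ACL onto every file and directory under a target root. Build against libgpfs, e.g. cc -o recursive-acl-copy recursive-acl-copy.c -lgpfs (adding -I/-L paths for /usr/lpp/mmfs if your install does not link gpfs.h and libgpfs into the system locations).

/* recursive-acl-copy.c -- sketch only, not a supported tool.
 * Copies the ACL of a template file (in GPFS opaque format) onto every
 * file and directory under a given root, using nftw() for the walk and
 * gpfs_putacl() for the update. */
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <string.h>
#include <ftw.h>
#include <gpfs.h>

static char aclbuf[65536];                   /* holds one opaque ACL blob */
static gpfs_opaque_acl_t *tmpl = (gpfs_opaque_acl_t *)aclbuf;

/* nftw() callback: apply the template ACL to every object visited */
static int set_acl(const char *path, const struct stat *sb,
                   int typeflag, struct FTW *ftwbuf)
{
    (void)sb; (void)typeflag; (void)ftwbuf;
    if (gpfs_putacl((char *)path, 0, tmpl) != 0)
        fprintf(stderr, "gpfs_putacl failed on %s: %m\n", path);
    return 0;                                /* keep walking on errors */
}

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s template-file target-directory\n", argv[0]);
        return 1;
    }

    /* Read the template ACL once, in opaque format (flags = 0), so the
     * ACL contents never have to be interpreted -- the same buffer is
     * simply replayed by gpfs_putacl() on each target. */
    memset(aclbuf, 0, sizeof(aclbuf));
    tmpl->acl_buffer_len = sizeof(aclbuf);
    tmpl->acl_type = GPFS_ACL_TYPE_ACCESS;   /* check gpfs.h for the NFSv4 case */
    if (gpfs_getacl((char *)argv[1], 0, tmpl) != 0) {
        fprintf(stderr, "gpfs_getacl failed on %s: %m\n", argv[1]);
        return 1;
    }

    /* FTW_PHYS: do not follow symbolic links while walking the tree */
    return nftw(argv[2], set_acl, 64, FTW_PHYS) ? 1 : 0;
}

Invoked as, say, ./recursive-acl-copy /gpfs/fs0/template.dat /gpfs/fs0/projects/foo it replays the template's ACL across the whole subtree, avoiding the per-file fork/exec and ksh-wrapper overhead of mmputacl. The usual caveat applies: try it on a scratch fileset before pointing it at 9 million files.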
From makaplan at us.ibm.com Tue Aug 11 23:11:24 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 18:11:24 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA6182.9010507@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: On Linux you are free to use setfacl and getfacl commands on GPFS files. Works for me. As you say, at least you can avoid the overhead of shell interpretation and forking and whatnot for each file. Or use the APIs, see /usr/include/sys/acl.h. May need to install libacl-devel package and co. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 23:27:13 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 23:27:13 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: <55CA76C1.4050109@buzzard.me.uk> On 11/08/15 23:11, Marc A Kaplan wrote: > On Linux you are free to use setfacl and getfacl commands on GPFS files. > Works for me. Really, for NFSv4 ACL's? Given the RichACL kernel patches are only carried by SuSE I somewhat doubt that you can. http://www.bestbits.at/richacl/ People what to set NFSv4 ACL's on GPFS because when used with vfs_gpfs you can get Windows server/NTFS like rich permissions on your Windows SMB clients. You don't get that with Posix ACL's. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From usa-principal at gpfsug.org Tue Aug 11 23:36:11 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Tue, 11 Aug 2015 18:36:11 -0400 Subject: [gpfsug-discuss] Additional Details for Fall 2015 GPFS UG Meet Up in NYC Message-ID: <7d3395cb2575576c30ba55919124e44d@webmail.gpfsug.org> Hello, We are working on some additional information regarding the proposed NYC meet up. Below is the draft agenda for the "Meet the Developers" session. We are still working on closing on an exact date, and will communicate that soon --targeting September or October. Please e-mail Janet Ellsworth (janetell at us.ibm.com) if you are interested in attending. Janet is coordinating the logistics of the event. ? IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. ? IBM developer to demo future Graphical User Interface ? Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this !) ? Open Q&A with the development team Thoughts? Ideas? Best, Kristy GPFS UG - USA Principal PS - I believe we're still looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. 
From chair at gpfsug.org Wed Aug 12 10:20:40 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Wed, 12 Aug 2015 10:20:40 +0100 Subject: [gpfsug-discuss] USA Co-Principal Message-ID: Hi All, We only had 1 self nomination for the co-principal of the USA side of the group. I've very much like to thank Bob Oesterlin for nominating himself to help Kristy with the USA side of things. I've spoken a few times with Bob "off-list" and he's helped me out with a few bits and pieces. As you may have seen, Kristy has been posting from usa-principal at gpfsug.org, I'll sort another address out for the co-principal role today. Both Kristy and Bob seem determined to get the USA group off the ground and I wish them every success with this. Simon Bob's profile follows: LinkedIn Profile: https://www.linkedin.com/in/boboesterlin Short Profile: I have over 15 years experience with GPFS. Prior to 2013 I was with IBM and wa actively involved with developing solutions for customers using GPFS both inside and outside IBM. Prior to my work with GPFS, I was active in the AFS and OpenAFS community where I served as one of founding Elder members of that group. I am well know inside IBM and have worked to maintain my contacts with development. After 2013, I joined Nuance Communications where I am the Sr Storage Engineer for the HPC grid. I have been active in the GPFS DeveloperWorks Forum and the mailing list, presented multiple times at IBM Edge and IBM Interconnect. I'm active in multiple IBM Beta programs, providing active feedback on new products and future directions. For the user group, my vision is to build an active user community where we can share expertise and skills to help each other. I'd also like to see this group be more active in shaping the future direction of GPFS. I would also like to foster broader co-operation and discussion with users and administrators of other clustered file systems. (Lustre and OpenAFS) From makaplan at us.ibm.com Wed Aug 12 15:43:03 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 12 Aug 2015 10:43:03 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA76C1.4050109@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ross.keeping at uk.ibm.com Wed Aug 12 15:44:38 2015 From: ross.keeping at uk.ibm.com (Ross Keeping3) Date: Wed, 12 Aug 2015 15:44:38 +0100 Subject: [gpfsug-discuss] Q4 Meet the devs location? Message-ID: Hey I was discussing with Simon and Claire where and when to run our Q4 meet the dev session. We'd like to take the next sessions up towards Scotland to give our Edinburgh/Dundee users a chance to participate sometime in November (around the 4.2 release date). 
I'm keen to hear from people who would be interested in attending an event in or near Scotland and is there anyone who can offer up a small meeting space for the day? Best regards, Ross Keeping IBM Spectrum Scale - Development Manager, People Manager IBM Systems UK - Manchester Development Lab Phone: (+44 161) 8362381-Line: 37642381 E-mail: ross.keeping at uk.ibm.com 3rd Floor, Maybrook House Manchester, M3 2EG United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Wed Aug 12 15:49:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 12 Aug 2015 14:49:27 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk>, Message-ID: I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 12 August 2015 15:43 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] fast ACL alter solution On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 From jonathan at buzzard.me.uk Wed Aug 12 17:29:00 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:29:00 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: <1439396940.3856.4.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 10:43 -0400, Marc A Kaplan wrote: > On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work > fine for me. > Yes they do, but they only set POSIX ACL's, and well most people are wanting to set NFSv4 ACL's so the getfacl and setfacl commands are of no use. > nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today > not at all, at least not for me ;-( Yep they only work against an NFSv4 mounted file system with NFSv4 ACL's. So if you NFSv4 exported a GPFS file system from an AIX node and mounted it on a Linux node that would work for you. It might also work if you NFSv4 exported a GPFS file system using the userspace ganesha NFS server with an appropriate VFS backend for GPFS and mounted on Linux https://github.com/nfs-ganesha/nfs-ganesha However last time I checked such a GPFS VFS backend for ganesha was still under development. 
The RichACL stuff might also in theory work except it is not in mainline kernels and there is certainly no advertised support by IBM for GPFS using it. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jonathan at buzzard.me.uk Wed Aug 12 17:35:55 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:35:55 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> , Message-ID: <1439397355.3856.11.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 14:49 +0000, Simon Thompson (Research Computing - IT Services) wrote: > I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? > The options for ACL's on GPFS are POSIX, NFSv4, all which is mixed NFSv4/POSIX and finally Samba. The first two are self explanatory. The mixed mode is best given a wide berth in my opinion. The fourth is well lets say "undocumented" last time I checked. You can set it, and it shows up when you query the file system but what it does I can only speculate. Take a look at the Korn shell of mmchfs if you doubt it exists. Try it out on a test file system with mmchfs -k samba My guess though I have never verified it, is that it changes the schematics of the NFSv4 ACL's to more closely match those of NTFS ACL's. A bit like some of the other GPFS settings you can fiddle with to make GPFS behave more like an NTFS file system. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From C.J.Walker at qmul.ac.uk Thu Aug 13 15:23:07 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Thu, 13 Aug 2015 16:23:07 +0200 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Message-ID: <55CCA84B.1080600@qmul.ac.uk> I've set up a couple of VM hosts to export some of its GPFS filesystem via NFS to machines on that VM host[1,2]. Is live migration of VMs likely to work? Live migration isn't a hard requirement, but if it will work, it could make our life easier. Chris [1] AIUI, this is explicitly permitted by the licencing FAQ. [2] For those wondering why we are doing this, it's that some users want docker - and they can probably easily escape to become root on the VM. Doing it this way permits us (we hope) to only export certain bits of the GPFS filesystem. From S.J.Thompson at bham.ac.uk Thu Aug 13 15:32:18 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 13 Aug 2015 14:32:18 +0000 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55CCA84B.1080600@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. 
Simon From dhildeb at us.ibm.com Fri Aug 14 18:54:59 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 14 Aug 2015 10:54:59 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: Thanks for the replies Simon... Chris, are you using -v to give the container access to the nfs subdir (and hence to a gpfs subdir) (and hence achieve a level of multi-tenancy)? Even without containers, I wonder if this could allow users to run their own VMs as root as well...and preventing them from becoming root on gpfs... I'd love for you to share your experience (mgmt and perf) with this architecture once you get it up and running. Some side benefits of this architecture that we have been thinking about as well is that it allows both the containers and VMs to be somewhat ephemeral, while the gpfs continues to run in the hypervisor... To ensure VMotion works relatively smoothly, just ensure each VM is given a hostname to mount that always routes back to the localhost nfs server on each machine...and I think things should work relatively smoothly. Note you'll need to maintain the same set of nfs exports across the entire cluster as well, so that when a VM moves to another machine it immediately has an available export to mount. Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/13/2015 07:33 AM Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Sent by: gpfsug-discuss-bounces at gpfsug.org >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets on the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Mon Aug 17 13:50:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 17 Aug 2015 12:50:17 +0000 Subject: [gpfsug-discuss] Metadata compression Message-ID: <2D1E2C5B-499D-46D3-AC27-765E3B40E340@nuance.com> Anyone have any practical experience here, especially using Flash, compressing GPFS metadata? IBM points out that they specifically DON'T support it on their devices (SVC/V9000/Storwize) Spectrum Scale FAQ: https://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html?lang=en (look for the word compressed) But I could not find any blanket statements that it's not supported outright.
They don't mention anything about data, and since the default for GPFS is mixing data and metadata on the same LUNs you're more than likely compressing the metadata as well. :-) Also, no statements that you must split metadata from data when using compression. Bob Oesterlin Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed...
URL: From usa-principal at gpfsug.org Thu Aug 20 14:23:41 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Thu, 20 Aug 2015 09:23:41 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Message-ID: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal From bbanister at jumptrading.com Thu Aug 20 16:42:09 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 20 Aug 2015 15:42:09 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. From Kevin.Buterbaugh at Vanderbilt.Edu Thu Aug 20 17:37:37 2015 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 20 Aug 2015 16:37:37 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. 
I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Aug 20 19:09:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 20 Aug 2015 18:09:27 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com>, <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Message-ID: With my uk hat on, id suggest its also important to factor in IBM's ability to ship people in as well. I know last year there was an IBM GPFS event on the Monday at SC as I spoke there, I'm assuming the GPFS UG will really be an extended version of that, and there were quite a a lot in the audience for that. I know I made some really good contacts with both users and IBM at the event (and I encourage people to speak as its a great way of meeting people!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 20 August 2015 17:37 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From dhildeb at us.ibm.com Thu Aug 20 17:12:09 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 20 Aug 2015 09:12:09 -0700 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center From: Bryan Banister To: gpfsug main discussion list Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [ mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kallbac at iu.edu Thu Aug 20 20:00:21 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 20 Aug 2015 19:00:21 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 12:26:47 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 11:26:47 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. The docs are a little lacking in detail of how you create NSD disks on clients, I've tried using: %nsd: device=sdb2 nsd=cl0901u17_hawc_sdb2 servers=cl0901u17 pool=system.log failureGroup=90117 (and also with usage=metadataOnly as well), however mmcrsnd -F tells me "mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license designation" Which is correct as its a client system, though HAWC is supposed to be able to run on client systems. I know for LROC you have to set usage=localCache, is there a new value for using HAWC? I'm also a little unclear about failureGroups for this. The docs suggest setting the HAWC to be replicated for client systems, so I guess that means putting each client node into its own failure group? Thanks Simon From Robert.Oesterlin at nuance.com Wed Aug 26 12:46:59 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 11:46:59 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:23:36 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:23:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 12:46 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Aug 26 13:27:36 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 12:27:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Wed Aug 26 13:50:44 2015 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 26 Aug 2015 12:50:44 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: , <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <201D6001C896B846A9CFC2E841986AC1454FFB0B@mailnycmb2a.winmail.deshaw.com> There is a more severe issue with LROC enabled in saveInodePtrs() which results in segfaults and loss of acknowledged writes, which has caused us to roll back all LROC for now. We are testing an efix (ref Defect 970773, IV76155) now which addresses this. But I would advise against running with LROC/HAWC in production without this fix. We experienced this on 4.1.0-6, but had the efix built against 4.1.1-1, so the exposure seems likely to be all 4.1 versions. Thx Paul Sent with Good (www.good.com) ________________________________ From: gpfsug-discuss-bounces at gpfsug.org on behalf of Oesterlin, Robert Sent: Wednesday, August 26, 2015 8:27:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:57:56 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:57:56 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. 
> >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss From C.J.Walker at qmul.ac.uk Wed Aug 26 14:46:56 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Wed, 26 Aug 2015 14:46:56 +0100 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: <55DDC350.8010603@qmul.ac.uk> On 13/08/15 15:32, Simon Thompson (Research Computing - IT Services) wrote: > >> I've set up a couple of VM hosts to export some of its GPFS filesystem >> via NFS to machines on that VM host[1,2]. > > Provided all your sockets no the VM host are licensed. Yes, they are. > >> Is live migration of VMs likely to work? >> >> Live migration isn't a hard requirement, but if it will work, it could >> make our life easier. > > Live migration using a GPFS file-system on the hypervisor node should work > (subject to the usual caveats of live migration). > > Whether live migration and your VM instances would still be able to NFS > mount (assuming loopback address?) if they moved to a different > hypervisor, pass, you might get weird NFS locks. And if they are still > mounting from the original VM host, then you are not doing what the FAQ > says you can do. > Yes, that's the intent - VMs get access to GPFS from the hypervisor - that complies with the licence and, presumably, should get better performance. It sounds like our problem would be the NFS end of this if we try a live migrate. Chris From C.J.Walker at qmul.ac.uk Wed Aug 26 15:15:48 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Wed, 26 Aug 2015 15:15:48 +0100 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: <55DDCA14.8010103@qmul.ac.uk> On 14/08/15 18:54, Dean Hildebrand wrote: > Thanks for the replies Simon... > > Chris, are you using -v to give the container access to the nfs subdir > (and hence to a gpfs subdir) (and hence achieve a level of > multi-tenancy)? -v option to what? > Even without containers, I wonder if this could allow > users to run their own VMs as root as well...and preventing them from > becoming root on gpfs... > > I'd love for you to share your experience (mgmt and perf) with this > architecture once you get it up and running. A quick and dirty test: From a VM: -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240 real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s user 0m0.005s 0m0.007s 0m0.006s 0m0.003s 0m0.002s 0m0.004s sys 0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s From the underlying hypervisor. 
real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s user 0m0.006s 0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s sys 0m5.447s 0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s So there's a factor of just over 2 slowdown. As it's still 500MB/s, I think it's good enough for now. The machine has a 10Gbit/s network connection and both hypervisor and VM are running SL6. > Some side benefits of this > architecture that we have been thinking about as well is that it allows > both the containers and VMs to be somewhat ephemeral, while the gpfs > continues to run in the hypervisor... Indeed. This is another advantage. If we were running Debian, it would be possible to export part of a filesystem to a VM. Which would presumably work. In redhat derived OSs (we are currently using Scientific Linux), I don't believe it is - hence exporting via NFS. > > To ensure VMotion works relatively smoothly, just ensure each VM is > given a hostname to mount that always routes back to the localhost nfs > server on each machine...and I think things should work relatively > smoothly. Note you'll need to maintain the same set of nfs exports > across the entire cluster as well, so taht when a VM moves to another > machine it immediately has an available export to mount. Yes, we are doing this. Simon alludes to potential problems at the NFS layer on live migration. Otherwise, yes indeed the setup should be fine. I'm not familiar enough with the details of NFS - but I have heard NFS described as "a stateless filesystem with state". It's the stateful bits I'm concerned about. Chris > > Dean Hildebrand > IBM Almaden Research Center > > > Inactive hide details for "Simon Thompson (Research Computing - IT > Services)" ---08/13/2015 07:33:16 AM--->I've set up a couple"Simon > Thompson (Research Computing - IT Services)" ---08/13/2015 07:33:16 > AM--->I've set up a couple of VM hosts to export some of its GPFS > filesystem >via NFS to machines on that > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/13/2015 07:33 AM > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host > Sent by: gpfsug-discuss-bounces at gpfsug.org > > ------------------------------------------------------------------------ > > > > > >I've set up a couple of VM hosts to export some of its GPFS filesystem > >via NFS to machines on that VM host[1,2]. > > Provided all your sockets no the VM host are licensed. > > >Is live migration of VMs likely to work? > > > >Live migration isn't a hard requirement, but if it will work, it could > >make our life easier. > > Live migration using a GPFS file-system on the hypervisor node should work > (subject to the usual caveats of live migration). > > Whether live migration and your VM instances would still be able to NFS > mount (assuming loopback address?) if they moved to a different > hypervisor, pass, you might get weird NFS locks. And if they are still > mounting from the original VM host, then you are not doing what the FAQ > says you can do. 
> > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From tpathare at sidra.org Wed Aug 26 16:43:51 2015 From: tpathare at sidra.org (Tushar Pathare) Date: Wed, 26 Aug 2015 15:43:51 +0000 Subject: [gpfsug-discuss] Welcome to the "gpfsug-discuss" mailing list In-Reply-To: References: Message-ID: <06133E83-2DCB-4A1C-868A-CD4FDAC61A27@sidra.org> Hello Folks, This is Tushar Pathare from Sidra Medical & Research Centre.I am a HPC Administrator at Sidra. Before joining Sidra I worked with IBM for about 7 years with GPFS Test Team,Pune,India with partner lab being IBM Poughkeepsie,USA Sidra has total GPFS storage of about 1.5PB and growing.Compute power about 5000 cores acquired and growing. Sidra is into Next Generation Sequencing and medical research related to it. Its a pleasure being part of this group. Thank you. Tushar B Pathare High Performance Computing (HPC) Administrator General Parallel File System Scientific Computing Bioinformatics Division Research Sidra Medical and Research Centre PO Box 26999 | Doha, Qatar Burj Doha Tower,Floor 8 D +974 44042250 | M +974 74793547 tpathare at sidra.org | www.sidra.org On 8/26/15, 5:04 PM, "gpfsug-discuss-bounces at gpfsug.org on behalf of gpfsug-discuss-request at gpfsug.org" wrote: >Welcome to the gpfsug-discuss at gpfsug.org mailing list! Hello and >welcome. > > Please introduce yourself to the members with your first post. > > A quick hello with an overview of how you use GPFS, your company >name, market sector and any other interesting information would be >most welcomed. > >Please let us know which City and Country you live in. > >Many thanks. > >GPFS UG Chair > > >To post to this list, send your email to: > > > >General information about the mailing list is at: > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >If you ever want to unsubscribe or change your options (eg, switch to >or from digest mode, change your password, etc.), visit your >subscription page at: > > http://gpfsug.org/mailman/options/gpfsug-discuss/tpathare%40sidra.org > > >You can also make such adjustments via email by sending a message to: > > gpfsug-discuss-request at gpfsug.org > >with the word `help' in the subject or body (don't include the >quotes), and you will get back a message with instructions. > >You must know your password to change your options (including changing >the password, itself) or to unsubscribe. It is: > > p3nguins > >Normally, Mailman will remind you of your gpfsug.org mailing list >passwords once every month, although you can disable this if you >prefer. This reminder will also include instructions on how to >unsubscribe or change your account options. There is also a button on >your options page that will email your current password to you. Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. 
Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center. From dhildeb at us.ibm.com Thu Aug 27 01:22:52 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 26 Aug 2015 17:22:52 -0700 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Thu Aug 27 08:42:34 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 27 Aug 2015 07:42:34 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Dean, Thanks. 
I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},${backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques]"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" > wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. 
The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From ckrafft at de.ibm.com Thu Aug 27 10:36:27 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Thu, 27 Aug 2015 11:36:27 +0200 Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) Message-ID: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> Dear GPFS folks, I know - it may not be an optimal setup for GPFS ... but is someone willing to share technical best practices when using GPFS with SVC (and XiV). >From the past I remember some recommendations concerning the nr of vDisks in SVC and certainly block size (XiV=1M) could be an issue. Thank you very much for sharing any insights with me. Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06057114.gif Type: image/gif Size: 1851 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Thu Aug 27 12:58:12 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 27 Aug 2015 11:58:12 +0000 Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) In-Reply-To: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> References: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> Message-ID: IBM in general doesn?t have a comprehensive set of best practices around Spectrum Scale (trying to get used to that!) and SVC or storage system like XIV (or HP 3PAR). From my IBM days (a few years back) I used both with GPFS successfully. I do recall some discussion regarding a larger block size, but haven?t seen any recent updates. (Scott Fadden, are you listening?) Larger block sizes are problematic for file systems with lots of small files. (like ours) - Since SVC is striping data across multiple storage LUNs, and GPFS is striping as well, what?s the possible impact? My thought would be to use image mode vdisks, but that sort of defeats the purpose/utility of SVC. - IBM specifically points out not to use compression on the SVC/V9000 with GPFS metadata, so if you use these features be careful. 
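To put rough numbers on that advice, a minimal sketch of "align the block size and keep metadata away from compression" could look like the lines below. Everything here is invented for illustration - the NSD names, the multipath device paths, the servers nsd1/nsd2, the file system name gpfs1 and the stanza file svc_nsds.txt - and the 1M value simply matches the XIV stripe mentioned earlier, so benchmark before treating it as a recommendation:

# svc_nsds.txt - metadata on its own (uncompressed) vdisks, data elsewhere
%nsd: device=/dev/mapper/mpatha
  nsd=svc_meta_01
  servers=nsd1,nsd2
  usage=metadataOnly
  failureGroup=1
  pool=system
%nsd: device=/dev/mapper/mpathb
  nsd=svc_data_01
  servers=nsd1,nsd2
  usage=dataOnly
  failureGroup=1
  pool=data

# create the NSDs, then the file system with a 1M block size to line up
# with the stripe the XIV presents through SVC
mmcrnsd -F svc_nsds.txt
mmcrfs gpfs1 -F svc_nsds.txt -B 1M

Keeping metadata on its own vdisks is what lets you exclude it from SVC compression; note that with a metadataOnly system pool you also need a placement policy (mmchpolicy) so file data actually lands in the 'data' pool. Whether the double striping (SVC extents underneath GPFS blocks) costs anything is the kind of question only a workload-level test answers.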
Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of Christoph Krafft Reply-To: gpfsug main discussion list Date: Thursday, August 27, 2015 at 4:36 AM To: "gpfsug-discuss at gpfsug.org" Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) Dear GPFS folks, I know - it may not be an optimal setup for GPFS ... but is someone willing to share technical best practices when using GPFS with SVC (and XiV). From the past I remember some recommendations concerning the nr of vDisks in SVC and certainly block size (XiV=1M) could be an issue. Thank you very much for sharing any insights with me. Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group ________________________________ Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH [cid:2__=8FBBF43DDFA7F6638f9e8a93df938690918c8FB@] Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany ________________________________ IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: ecblank.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06057114.gif Type: image/gif Size: 1851 bytes Desc: 06057114.gif URL: From S.J.Thompson at bham.ac.uk Thu Aug 27 15:17:19 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 27 Aug 2015 14:17:19 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: Oh yeah, I see what you mean, I've just looking on another cluster with LROC drives and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 13:27 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Aug 27 15:30:14 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 27 Aug 2015 14:30:14 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <3B636593-906F-4AEC-A3DF-1A24376B4841@nuance.com> What do they say on that side of the pond? ?Bob?s your uncle!? :-) Yea, same for me. Pretty big oops if you ask me. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Thursday, August 27, 2015 at 9:17 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Oh yeah, I see what you mean, I've just looking on another cluster with LROC drives and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhildeb at us.ibm.com Thu Aug 27 20:24:50 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 27 Aug 2015 12:24:50 -0700 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Simon, This appears to be a mistake, as using clients for the System.log pool should not require a server license (should be similar to lroc).... thanks for opening the PMR... Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/27/2015 12:42 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},$ {backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand Reply-To: gpfsug main discussion list Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. 
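Pulling the thread together, the end-to-end recipe being discussed (leaving aside the client-license check that the PMR is about) would be roughly: put the stanza from earlier in a file, create and add the NSDs, then turn HAWC on by giving the file system a non-zero write cache threshold. The stanza is Simon's from above; the file system name gpfs1 and stanza file hawc_nsds.txt are invented, and the --write-cache-threshold flag is quoted from memory of the 4.1.1 docs, so please check it against the Advanced Administration Guide before relying on it:

# hawc_nsds.txt - one small SSD partition per hypervisor
# (the thread tried this both with and without usage=metadataOnly)
%nsd: device=sdb2
  nsd=cl0901u17_hawc_sdb2
  servers=cl0901u17
  usage=metadataOnly
  pool=system.log
  failureGroup=90117

mmcrnsd -F hawc_nsds.txt
mmadddisk gpfs1 -F hawc_nsds.txt
# writes at or below the threshold are hardened in the system.log pool first
mmchfs gpfs1 --write-cache-threshold 64K

With one such stanza per hypervisor, each in its own failure group as Simon suggests, the log replicas can land on different client nodes.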
Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [attachment "graycol.gif" deleted by Dean Hildebrand/Almaden/IBM] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From dhildeb at us.ibm.com Thu Aug 27 21:36:26 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 27 Aug 2015 13:36:26 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55DDCA14.8010103@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> <55DDCA14.8010103@qmul.ac.uk> Message-ID: Hi Christopher, > > > > Chris, are you using -v to give the container access to the nfs subdir > > (and hence to a gpfs subdir) (and hence achieve a level of > > multi-tenancy)? 
> > -v option to what? I was referring to how you were using docker/containers to expose the NFS storage to the container...there are several different ways to do it and one way is to simply expose a directory to the container via the -v option https://docs.docker.com/userguide/dockervolumes/ > > > Even without containers, I wonder if this could allow > > users to run their own VMs as root as well...and preventing them from > > becoming root on gpfs... > > > > > > I'd love for you to share your experience (mgmt and perf) with this > > architecture once you get it up and running. > > A quick and dirty test: > > From a VM: > -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240 > real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s > user 0m0.005s 0m0.007s 0m0.006s 0m0.003s 0m0.002s 0m0.004s > sys 0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s > > From the underlying hypervisor. > > real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s > user 0m0.006s 0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s > sys 0m5.447s 0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s > > So there's a factor of just over 2 slowdown. > > As it's still 500MB/s, I think it's good enough for now. Interesting test... I assume you have VLANs setup so that the data doesn't leave the VM, go to the network switch, and then back to the nfs server in the hypervisor again? Also, there might be a few NFS tuning options you could try, like increasing the number of nfsd threads, etc...but there are extra data copies occuring so the perf will never match... > > The machine has a 10Gbit/s network connection and both hypervisor and VM > are running SL6. > > > Some side benefits of this > > architecture that we have been thinking about as well is that it allows > > both the containers and VMs to be somewhat ephemeral, while the gpfs > > continues to run in the hypervisor... > > Indeed. This is another advantage. > > If we were running Debian, it would be possible to export part of a > filesystem to a VM. Which would presumably work. I'm not aware of this...is this through VirtFS or something else? In redhat derived OSs > (we are currently using Scientific Linux), I don't believe it is - hence > exporting via NFS. > > > > > To ensure VMotion works relatively smoothly, just ensure each VM is > > given a hostname to mount that always routes back to the localhost nfs > > server on each machine...and I think things should work relatively > > smoothly. Note you'll need to maintain the same set of nfs exports > > across the entire cluster as well, so taht when a VM moves to another > > machine it immediately has an available export to mount. > > Yes, we are doing this. > > Simon alludes to potential problems at the NFS layer on live migration. > Otherwise, yes indeed the setup should be fine. I'm not familiar enough > with the details of NFS - but I have heard NFS described as "a stateless > filesystem with state". It's the stateful bits I'm concerned about. Are you using v3 or v4? It doesn't really matter though, as in either case, gpfs would handle the state failover parts... Ideally the vM would umount the local nfs server, do VMotion, and then mount the new local nfs server, but given there might be open files...it makes sense that this may not be possible... 
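If it helps to picture the plumbing, a minimal sketch (all hostnames, addresses and paths here are made up) is an identical export on every hypervisor plus a guest mount against a name that always resolves to the local hypervisor:

# /etc/exports on every hypervisor - keep fsid identical everywhere so the
# NFS filehandles stay consistent whichever host a guest ends up talking to
/gpfs/gpfs1/vmdata  192.168.122.0/24(rw,sync,no_root_squash,fsid=101)

# /etc/fstab inside the guest - "nfsgw" resolves (via /etc/hosts or the
# libvirt default network) to the local hypervisor's 192.168.122.1
nfsgw:/gpfs/gpfs1/vmdata  /data  nfs  vers=3,hard  0 0

That only covers the plumbing, of course - the NFS lock-state question raised above is still the open one.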
Dean > > Chris > > > > > Dean Hildebrand > > IBM Almaden Research Center > > > > > > Inactive hide details for "Simon Thompson (Research Computing - IT > > Services)" ---08/13/2015 07:33:16 AM--->I've set up a couple"Simon > > Thompson (Research Computing - IT Services)" ---08/13/2015 07:33:16 > > AM--->I've set up a couple of VM hosts to export some of its GPFS > > filesystem >via NFS to machines on that > > > > From: "Simon Thompson (Research Computing - IT Services)" > > > > To: gpfsug main discussion list > > Date: 08/13/2015 07:33 AM > > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host > > Sent by: gpfsug-discuss-bounces at gpfsug.org > > > > ------------------------------------------------------------------------ > > > > > > > > > > >I've set up a couple of VM hosts to export some of its GPFS filesystem > > >via NFS to machines on that VM host[1,2]. > > > > Provided all your sockets no the VM host are licensed. > > > > >Is live migration of VMs likely to work? > > > > > >Live migration isn't a hard requirement, but if it will work, it could > > >make our life easier. > > > > Live migration using a GPFS file-system on the hypervisor node should work > > (subject to the usual caveats of live migration). > > > > Whether live migration and your VM instances would still be able to NFS > > mount (assuming loopback address?) if they moved to a different > > hypervisor, pass, you might get weird NFS locks. And if they are still > > mounting from the original VM host, then you are not doing what the FAQ > > says you can do. > > > > Simon > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aquan at o2.pl Fri Aug 28 16:12:23 2015 From: aquan at o2.pl (=?UTF-8?Q?aquan?=) Date: Fri, 28 Aug 2015 17:12:23 +0200 Subject: [gpfsug-discuss] =?utf-8?q?Unix_mode_bits_and_mmapplypolicy?= Message-ID: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Hello, This is my first time here. I'm a computer science student from Poland and I use GPFS during my internship at DESY. GPFS is a completely new experience to me, I don't know much about file systems and especially those used on clusters. I would like to ask about the unix mode bits and mmapplypolicy. What I noticed is that when I do the following: 1. Recursively call chmod on some directory (i.e. chmod -R 0777 some_directory) 2. Call mmapplypolicy to list mode (permissions), the listed modes of files don't correspond exactly to the modes that I set with chmod. However, if I wait a bit between step 1 and 2, the listed modes are correct. It seems that the mode bits are updated somewhat asynchronically and if I run mmapplypolicy too soon, they will contain old values. I would like to ask if it is possible to make sure that before calling mmputacl, the mode bits of that directory will be up to date on the list generated by a policy? - Omer Sakarya -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Fri Aug 28 17:55:21 2015 From: makaplan at us.ibm.com (makaplan at us.ibm.com) Date: Fri, 28 Aug 2015 16:55:21 +0000 Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy In-Reply-To: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> References: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: An HTML attachment was scrubbed... URL: From kallbac at iu.edu Sat Aug 29 09:23:45 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Sat, 29 Aug 2015 04:23:45 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> Message-ID: <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A wrote: > It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. > > Best, > Kristy > > On Aug 20, 2015, at 12:12 PM, Dean Hildebrand wrote: > >> Hi Bryan, >> >> Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) >> >> Dean Hildebrand >> IBM Almaden Research Center >> >> >> Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi >> >> From: Bryan Banister >> To: gpfsug main discussion list >> Date: 08/20/2015 08:42 AM >> Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location >> Sent by: gpfsug-discuss-bounces at gpfsug.org >> >> >> >> Hi Kristy, >> >> Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! >> >> I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule >> >> I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. 
>> >> Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: >> 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) >> 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? >> 2) Will IBM presenters be available on the Saturday before or after? >> 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? >> 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? >> 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? >> >> As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. >> >> I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! >> >> Cheers, >> -Bryan >> >> -----Original Message----- >> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org >> Sent: Thursday, August 20, 2015 8:24 AM >> To: gpfsug-discuss at gpfsug.org >> Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location >> >> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. >> >> Many thanks to Janet for her efforts in organizing the venue and speakers. >> >> Date: Wednesday, October 7th >> Place: IBM building at 590 Madison Avenue, New York City >> Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well >> :-) >> >> Agenda >> >> IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. >> IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team >> >> We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. >> >> We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. >> >> As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. 
>> >> Best, >> Kristy >> GPFS UG - USA Principal >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> ________________________________ >> >> Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From bbanister at jumptrading.com Sat Aug 29 22:17:44 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Sat, 29 Aug 2015 21:17:44 +0000 Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy In-Reply-To: References: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> Before I try these mmfsctl commands, what are the implications of suspending writes? I assume the entire file system will be quiesced? What if NSD clients are non responsive to this operation? Does a deadlock occur or is there a risk of a deadlock? Thanks in advance! -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of makaplan at us.ibm.com Sent: Friday, August 28, 2015 11:55 AM To: gpfsug-discuss at gpfsug.org Cc: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Unix mode bits and mmapplypolicy This is due to a design trade-off in mmapplypolicy. Mmapplypolicy bypasses locks and caches - so it doesn't "see" inode&metadata changes until they are flushed to disk. I believe this is hinted at in our publications. You can force a flush with`mmfsctl fsname suspend-write; mmfsctl fsname resume` ----- Original message ----- From: aquan > Sent by: gpfsug-discuss-bounces at gpfsug.org To: gpfsug-discuss at gpfsug.org Cc: Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy Date: Fri, Aug 28, 2015 11:12 AM Hello, This is my first time here. I'm a computer science student from Poland and I use GPFS during my internship at DESY. GPFS is a completely new experience to me, I don't know much about file systems and especially those used on clusters. 
I would like to ask about the unix mode bits and mmapplypolicy. What I noticed is that when I do the following: 1. Recursively call chmod on some directory (i.e. chmod -R 0777 some_directory) 2. Call mmapplypolicy to list mode (permissions), the listed modes of files don't correspond exactly to the modes that I set with chmod. However, if I wait a bit between step 1 and 2, the listed modes are correct. It seems that the mode bits are updated somewhat asynchronically and if I run mmapplypolicy too soon, they will contain old values. I would like to ask if it is possible to make sure that before calling mmputacl, the mode bits of that directory will be up to date on the list generated by a policy? - Omer Sakarya _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Sun Aug 30 01:16:02 2015 From: makaplan at us.ibm.com (makaplan at us.ibm.com) Date: Sun, 30 Aug 2015 00:16:02 +0000 Subject: [gpfsug-discuss] mmfsctl fs suspend-write Unix mode bits and mmapplypolicy In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com>, <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> An HTML attachment was scrubbed... URL: From aquan at o2.pl Mon Aug 31 16:49:06 2015 From: aquan at o2.pl (=?UTF-8?Q?aquan?=) Date: Mon, 31 Aug 2015 17:49:06 +0200 Subject: [gpfsug-discuss] =?utf-8?q?mmfsctl_fs_suspend-write_Unix_mode_bit?= =?utf-8?q?s_andmmapplypolicy?= In-Reply-To: <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> Message-ID: <1834e8cf.3c47fde.55e47772.d9226@o2.pl> Thank you for responding to my post. Is there any other way to make sure, that the mode bits are up-to-date when applying a policy? What would happen if a user changed mode bits when the policy that executes mmputacl is run? Which change will be the result in the end, the mmputacl mode bits or chmod mode bits? Dnia 30 sierpnia 2015 2:16 makaplan at us.ibm.com napisa?(a): I don't know exactly how suspend-write works.? But I am NOT suggesting that is be used lightly.It's there for special situations.? Obviously any process trying to change anything in the filesystemis going to be blocked until mmfsctl fs resume.?? 
That should not cause a GPFS deadlock, but systems that depend on GPFS responding may be unhappy... -------------- next part -------------- An HTML attachment was scrubbed... URL:
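Putting Marc's suggestion together with Omer's test, a rough sketch of the flush-then-scan sequence looks like the following (the device name gpfs01, the policy file path and the output prefix are placeholders rather than details from this thread, and, as Bryan notes, suspend-write blocks every writer until the resume):

# flush buffered inode/metadata updates to disk so the policy scan can see them
mmfsctl gpfs01 suspend-write
mmfsctl gpfs01 resume

# one-rule policy that lists each file together with its mode bits
cat > /tmp/list-modes.pol <<'EOF'
RULE 'listModes' LIST 'modes' SHOW(MODE)
EOF

# -I defer only writes the candidate lists under the -f prefix; nothing is executed
mmapplypolicy gpfs01 -P /tmp/list-modes.pol -f /tmp/modescan -I defer

Run immediately after a chmod -R, the modes recorded in the resulting list file should then match what chmod set, rather than stale cached values.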
From kallbac at iu.edu Wed Aug 5 03:56:32 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Tue, 4 Aug 2015 22:56:32 -0400 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> Message-ID: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Hello, Thanks Simon and all for moving the USA-based group forward. You've got a great user group in the UK and am grateful it's being extended. I'm looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user-led session -- sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) wrote: > > As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. > > We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. > > Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. > > Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time.
We're planning also to create a co-principal (see details below). > > I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. > > Short Bio from Kristy: > > "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. > > LinkedIn Profile: www.linkedin.com/in/kristykallbackrose > " > > We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: > > A paragraph covering their credentials; > A paragraph covering what they would bring to the group; > A paragraph setting out their vision for the group for the next two years. > > Note that this should be a GPFS customer based in the USA. > > If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. > > Kristy will be following up later with some announcements about the USA group activities. > > Simon > GPFS UG Chair > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Robert.Oesterlin at nuance.com Wed Aug 5 12:12:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 5 Aug 2015 11:12:17 +0000 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Message-ID: <315FAEF7-DEC0-4252-BA3B-D318DE05933C@nuance.com> Hi Kristy Thanks for stepping up to the duties for the USA based user group! Getting the group organized is going to be a challenge and I?m happy to help out where I can. Regarding some of the planning for SC15, I wonder if you could drop me a note off the mailing list to discuss this, since I have been working with some others at IBM on a BOF proposal for SC15 and these two items definitely overlap. My email is robert.oesterlin at nuance.com (probably end up regretting putting that out on the mailing list at some point ? 
sigh) Bob Oesterlin Sr Storage Engineer, Nuance Communications From: > on behalf of Kristy Kallback-Rose Reply-To: gpfsug main discussion list Date: Tuesday, August 4, 2015 at 9:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS UG User Group at USA Hello, Thanks Simon and all for moving the USA-based group forward. You?ve got a great user group in the UK and am grateful it?s being extended. I?m looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user lead session --sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) > wrote: As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. Short Bio from Kristy: "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). 
Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. LinkedIn Profile: www.linkedin.com/in/kristykallbackrose " We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. Kristy will be following up later with some announcements about the USA group activities. Simon GPFS UG Chair _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 5 20:23:45 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 5 Aug 2015 19:23:45 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: Just picking this topic back up. Does anyone have any comments/thoughts on these questions? Thanks Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Luke Raimbach [Luke.Raimbach at crick.ac.uk] Sent: 20 July 2015 08:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets Can I add to this list of questions? Apparently, one cannot set immutable, or append-only attributes on files / directories within an AFM cache. However, if I have an independent writer and set immutability at home, what does the AFM IW cache do about this? Or does this restriction just apply to entire filesets (which would make more sense)? Cheers, Luke. -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: 19 July 2015 11:45 To: gpfsug main discussion list Subject: [gpfsug-discuss] 4.1.1 immutable filesets I was wondering if anyone had looked at the immutable fileset features in 4.1.1? In particular I was looking at the iam compliant mode, but I've a couple of questions. * if I have an iam compliant fileset, and it contains immutable files or directories, can I still unlink and delete the filset? * will HSM work with immutable files? I.e. Can I migrate files to tape and restore them? The docs mention that extended attributes can be updated internally by dmapi, so I guess HSM might work? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 
06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 14:46:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 13:46:04 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets Message-ID: On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" wrote: >* if I have an iam compliant fileset, and it contains immutable files or >directories, can I still unlink and delete the filset? So just to answer my own questions here. (Actually I tried in non-compliant mode, rather than full compliance, but I figured this was the mode I actually need as I might need to reset the immutable time back earlier to allow me to delete something that shouldn't have gone in). Yes, I can both unlink and delete an immutable fileset which has immutable files which are non expired in it. >* will HSM work with immutable files? I.e. Can I migrate files to tape >and restore them? The docs mention that extended attributes can be >updated internally by dmapi, so I guess HSM might work? And yes, HSM files work. I created a file, made it immutable, backed up, migrated it: $ mmlsattr -L BHAM_DATASHARE_10.zip file name: BHAM_DATASHARE_10.zip metadata replication: 2 max 2 data replication: 2 max 2 immutable: yes appendOnly: no indefiniteRetention: no expiration Time: Fri Aug 7 14:45:00 2015 flags: storage pool name: tier2 fileset name: rds-projects-2015-thompssj-01 snapshot name: creation time: Fri Aug 7 14:38:30 2015 Windows attributes: ARCHIVE OFFLINE READONLY Encrypted: no I was then able to recall the file. Simon From wsawdon at us.ibm.com Fri Aug 7 16:13:31 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:13:31 -0700 Subject: [gpfsug-discuss] Hello Message-ID: Hello, Although I am new to this user group, I've worked on GPFS at IBM since before it was a product.! I am interested in hearing from the group about the features you like or don't like and of course, what features you would like to see. Wayne Sawdon STSM; IBM Research Manager | Cloud Data Management Phone: 1-408-927-1848 E-mail: wsawdon at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Aug 7 16:27:33 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:27:33 -0700 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: Message-ID: > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. 
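Simon's test above boils down to something like the following sketch (the device name gpfs01 is a placeholder, the fileset and file names are taken from his mmlsattr output, and the --iam-mode option plus the set-atime-then-chmod retention interface are only as described in this thread, so check the 4.1.1 documentation before relying on them):

# put the independent fileset into non-compliant immutability mode
mmchfileset gpfs01 rds-projects-2015-thompssj-01 --iam-mode noncompliant

# as an ordinary user: set the desired expiration time via atime, then drop write permission
touch -a -t 201508071445 BHAM_DATASHARE_10.zip
chmod a-w BHAM_DATASHARE_10.zip

# confirm the immutable flag and the expiration time
mmlsattr -L BHAM_DATASHARE_10.zip

Until the expiration time passes the file cannot be modified or deleted; afterwards the restriction can be lifted, and, as Simon notes later in the thread, only root could push the atime further into the future once it had expired.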
I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Aug 7 16:36:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:36:03 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: I did only try in nc mode, so possibly if its fully compliant it wouldn't have let me delete the fileset. One other observation. As a user Id set the atime and chmod -w the file. Once it had expired, I was then unable to reset the atime into the future. (I could as root). I'm not sure what the expected behaviour should be, but I was sorta surprised that I could initially set the time as the user, but then not be able to extend even once it had expired. Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Wayne Sawdon [wsawdon at us.ibm.com] Sent: 07 August 2015 16:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne From S.J.Thompson at bham.ac.uk Fri Aug 7 16:56:17 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:56:17 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. Does anyone have a script to do this already? Surely there is a better way? Thanks Simon From rclee at lbl.gov Fri Aug 7 17:30:21 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 09:30:21 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: Message-ID: <55C4DD1D.7000402@lbl.gov> We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. 
Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 17:49:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 16:49:03 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From ckerner at ncsa.uiuc.edu Fri Aug 7 17:41:14 2015 From: ckerner at ncsa.uiuc.edu (Chad Kerner) Date: Fri, 7 Aug 2015 11:41:14 -0500 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: <55C4DD1D.7000402@lbl.gov> Message-ID: <20150807164114.GA29652@logos.ncsa.illinois.edu> You can use the mmlsfileset DEVICE -L option to see the maxinodes and allocated inodes. I have a perl script that loops through all of our file systems every hour and scans for it. 
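A bash sketch of the kind of hourly check Chad describes (report-only; the assumption that the last two numeric columns of mmlsfileset -L are MaxInodes and AllocInodes, and the 90% threshold, are illustrative and should be verified against the output of your own release):

#!/bin/bash
fs=gpfs01      # placeholder device name
threshold=90   # act when allocated inodes reach this percentage of the maximum

mmlsfileset "$fs" -L | awk -v th="$threshold" '
  # Assumption: for independent filesets the last two columns are MaxInodes and AllocInodes.
  # Header lines and dependent filesets fail the numeric test and are skipped.
  $(NF-1) ~ /^[0-9]+$/ && $NF ~ /^[0-9]+$/ && $(NF-1) > 0 {
    max = $(NF-1); alloc = $NF
    if (alloc * 100 / max >= th)
      printf "%s: %d of %d inodes allocated (>= %d%%)\n", $1, alloc, max, th
  }'

Growing a fileset that gets flagged would then be a matter of mmchfileset gpfs01 <fileset> --inode-limit <new maximum>, which is roughly what Chad's script automates with its 10% bump.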
If one is nearing capacity(tunable threshold in the code), it automatically expands it by a set amount(also tunable). We add 10% currently. This also works on file systems that have no filesets as it appears as the root fileset. I can check with my boss to see if its ok to post it if you want it. Its about 40 lines of perl. Chad -- Chad Kerner, Systems Engineer Storage Enabling Technologies National Center for Supercomputing Applications On Fri, Aug 07, 2015 at 09:30:21AM -0700, Rei Lee wrote: > We have the same problem when we started using independent fileset. I think > this should be a RFE item that IBM should provide a tool similar to 'mmdf > -F' to show the number of free/used inodes for an independent fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > >I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > > >We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > > >mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > > >The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > > >Does anyone have a script to do this already? > > > >Surely there is a better way? > > > >Thanks > > > >Simon > >_______________________________________________ > >gpfsug-discuss mailing list > >gpfsug-discuss at gpfsug.org > >http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From makaplan at us.ibm.com Fri Aug 7 21:12:05 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 16:12:05 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. 
> > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From martin.gasthuber at desy.de Fri Aug 7 21:41:08 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Fri, 7 Aug 2015 22:41:08 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) best regards, Martin > On 3 Aug, 2015, at 19:05, Marc A Kaplan wrote: > > Reality check on GPFS ACLs. > > I think it would be helpful to understand how ACLs are implemented in GPFS - > > - All ACLs for a file sytem are stored as records in a special file. > - Each inode that has an ACL (more than just the classic Posix mode bits) has a non-NULL offset to the governing ACL in the special acl file. > - Yes, inodes with identical ACLs will have the same ACL offset value. Hence in many (most?) use cases, the ACL file can be relatively small - > it's size is proportional to the number of unique ACLs, not the number of files. > > And how and what mmapplypolicy can do for you - > > mmapplypolicy can rapidly scan the directories and inodes of a file system. > This scanning bypasses most locking regimes and takes advantage of both parallel processing > and streaming full tracks of inodes. So it is good at finding files (inodes) that satifsy criteria that can > be described by an SQL expression over the attributes stored in the inode. > > BUT to change the attributes of any particular file we must use APIs and code that respect all required locks, > log changes, etc, etc. > > Those changes can be "driven" by the execution phase of mmapplypolicy, in parallel - but overheads are significantly higher per file, > than during the scanning phases of operation. > > NOW to the problem at hand. It might be possible to improve ACL updates somewhat by writing a command that processes > multiple files at once, still using the same APIs used by the mmputacl command. > > Hmmm.... it wouldn't be very hard for GPFS development team to modify the mmputacl command to accept a list of files... > I see that the Linux command setfacl does accept multiple files in its argument list. 
> > Finally and not officially supported nor promised nor especially efficient .... try getAcl() as a GPFS SQL policy function._______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From rclee at lbl.gov Fri Aug 7 21:44:23 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 13:44:23 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <55C518A7.6020605@lbl.gov> We have tried that command but it took a very long time like it was hanging so I killed the command before it finished. I was not sure if it was a bug in early 4.1.0 software but I did not open a PMR. I just ran the command again on a quiet file system and it has been 5 minutes and the command is still not showing any output. 'mmdf -F' returns very fast. 'mmlsfileset -l' does not report the number of free inodes. Rei On 8/7/15 1:12 PM, Marc A Kaplan wrote: > Try > > mmlsfileset filesystem_name -i > > > Marc A Kaplan > > > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > Sent by: gpfsug-discuss-bounces at gpfsug.org > ------------------------------------------------------------------------ > > > > > Hmm. I'll create an RFE next week then. (just in case someone comes > back with a magic flag we don't know about!). > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org > [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] > Sent: 07 August 2015 17:30 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > > We have the same problem when we started using independent fileset. I > think this should be a RFE item that IBM should provide a tool similar > to 'mmdf -F' to show the number of free/used inodes for an independent > fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) > wrote: > > I was just wondering if anyone had a way to return the number of > free/used inodes for an independent fileset and all its children. > > > > We recently had a case where we were unable to create new files in a > child file-set, and it turns out the independent parent had run out of > inodes. > > > > mmsf however only lists the inodes used directly in the parent > fileset, I.e. About 8 as that was the number of child filesets. > > > > The suggestion from IBM support is that we use mmdf and then add up > the numbers from all the child filesets to workout how many are > free/used in the independent fileset. > > > > Does anyone have a script to do this already? > > > > Surely there is a better way? 
> > > > Thanks > > > > Simon > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From bevans at pixitmedia.com Fri Aug 7 21:44:44 2015 From: bevans at pixitmedia.com (Barry Evans) Date: Fri, 7 Aug 2015 21:44:44 +0100 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <-2676389644758800053@unknownmsgid> -i will give you the exact used number but... Avoid running it during peak usage on most setups. It's pretty heavy, like running a -d on lssnapshot. Your best bet is from earlier posts: '-L' gives you the max and alloc. If they match, you know you're in bother soon. It's not accurate, of course, but prevention is typically the best medicine in this case. Cheers, Barry ArcaStream/Pixit On 7 Aug 2015, at 21:12, Marc A Kaplan wrote: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ------------------------------ Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? 
> > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Fri Aug 7 22:21:28 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 17:21:28 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: You asked: "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) " Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Aug 10 08:13:43 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 10 Aug 2015 07:13:43 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: Hi Marc, Thanks for this. Just to clarify the output when it mentions allocated inodes, does that mean the number used or the number allocated? I.e. If I pre-create a bunch of inodes will they appear as allocated? Or is that only when they are used by a file etc? Thanks Simon From: Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Friday, 7 August 2015 21:12 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Independent fileset free inodes Try mmlsfileset filesystem_name -i [Marc A Kaplan] From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hmm. I'll create an RFE next week then. 
(just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00002.gif Type: image/gif Size: 21994 bytes Desc: ATT00002.gif URL: From makaplan at us.ibm.com Mon Aug 10 19:14:58 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 10 Aug 2015 14:14:58 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: mmlsfileset xxx -i 1. Yes it is slow. I don't know the reasons. Perhaps someone more familiar with the implementation can comment. It's surprising to me that it is sooo much slower than mmdf EVEN ON a filesystem that only has the root fileset! 2. used: how many inodes (files) currently exist in the given fileset or fileset allocated: number of inodes "pre"allocated in the (special) file of all inodes. maximum: number of inodes that GPFS might allocate on demand, with current --inode-limit settings from mmchfileset and mmchfs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From taylorm at us.ibm.com Mon Aug 10 22:23:02 2015 From: taylorm at us.ibm.com (Michael L Taylor) Date: Mon, 10 Aug 2015 14:23:02 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: <201508102123.t7ALNZDV012260@d01av01.pok.ibm.com> This capability is available in Storage Insights, which is a Software as a Service (SaaS) storage management solution. 
You can play with a live demo and try a free 30 day trial here: https://www.ibmserviceengage.com/storage-insights/learn I could also provide a screen shot of what IBM Spectrum Control looks like when managing Spectrum Scale and how you can easily see fileset relationships and used space and inodes per fileset if interested. -------------- next part -------------- An HTML attachment was scrubbed... URL: From GARWOODM at uk.ibm.com Tue Aug 11 17:05:52 2015 From: GARWOODM at uk.ibm.com (Michael Garwood7) Date: Tue, 11 Aug 2015 16:05:52 +0000 Subject: [gpfsug-discuss] Developer Works forum post on Spectrum Scale and Spark work Message-ID: <201508111606.t7BG6Vt6005368@d06av01.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Tue Aug 11 17:53:32 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Tue, 11 Aug 2015 18:53:32 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, this was meant to be more a joke than a 'wish' - but it would be interesting for us (with the case of several millions of files having the same ACL) if there are ways/plans to treat ACLs more referenced from each of these files and having a mechanism to treat all of them in a single operation. -- Martin > On 7 Aug, 2015, at 23:21, Marc A Kaplan wrote: > > You asked: > > "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-)" > > > Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? > Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Tue Aug 11 18:59:08 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 13:59:08 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: We (myself and a few other GPFS people) are reading this and considering... Of course we can't promise anything here. I can see some ways to improve and make easier the job of finding and changing the ACLs of many files. But I think whatever we end up doing will still be, at best, a matter of changing every inode, rather than changing on ACL that all those inodes happen to point to. IOW, as a lower bound, we're talking at least as much overhead as doing chmod on the chosen files. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamiedavis at us.ibm.com Tue Aug 11 19:11:26 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Tue, 11 Aug 2015 18:11:26 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , Message-ID: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue Aug 11 20:45:56 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 15:45:56 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 21:56:34 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 21:56:34 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <55CA6182.9010507@buzzard.me.uk> On 11/08/15 19:11, James Davis wrote: > If trying the naive approach, a la > find /fs ... -exec changeMyACL {} \; > or > /usr/lpp/mmfs/samples/ilm/mmfind /fs ... -exec changeMyACL {} \; > #shameless plug for my mmfind tool, available in the latest release of > GPFS. See the associated README. > I think the cost will be prohibitive. I believe a relatively strong > internal lock is required to do ACL changes, and consequently I think > the performance of modifying the ACL on a bunch of files will be painful > at best. I am not sure what it is like in 4.x but up to 3.5 the mmputacl was some sort of abomination of a command. It could only set the ACL for a single file and if you wanted to edit rather than set you had to call mmgetacl first, manipulate the text file output and then feed that into mmputacl. So if you need to set the ACL's on a directory hierarchy over loads of files then mmputacl is going to be exec'd potentially millions of times, which is a massive overhead just there. If only because mmputacl is a ksh wrapper around tsputacl. Execution time doing this was god dam awful. So I instead wrote a simple C program that used the ntfw library call and the gpfs API to set the ACL's it was way way faster. Of course I was setting a very limited number of different ACL's that where required to support a handful of Samba share types after the data had been copied onto a GPFS file system. As I said previously what is needed is an "mm" version of the FreeBSD setfacl command http://www.freebsd.org/cgi/man.cgi?format=html&query=setfacl(1) That has the -R/--recursive option of the Linux setfacl command which uses the fast inode scanning GPFS API. You want to be able to type something like mmsetfacl -mR g:www:rpaRc::allow foo What you don't want to be doing is calling the abomination of a command that is mmputacl. Frankly whoever is responsible for that command needs taking out the back and given a good kicking. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
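Pulling the thread together, a hedged sketch of the template-ACL approach (paths are placeholders; mmgetacl -o and mmputacl -i are the flags for saving an ACL to a file and applying it from one, and mmfind is the samples/ilm script Marc mentions above, whose find-style predicates should be checked against its README):

# capture the desired ACL once, from a reference file that already has it
mmgetacl -o /tmp/template.acl /gpfs/projectX/reference-file

# fan it out over the candidates, letting mmfind do the parallel directory/inode scan
/usr/lpp/mmfs/samples/ilm/mmfind /gpfs/projectX -type f -exec mmputacl -i /tmp/template.acl {} \;

This still forks mmputacl once per file, so Jonathan's point about per-file overhead stands: the parallel scan speeds up finding the candidates, not applying the change, which is why an API-based tool along the lines of his nftw() + gpfs_putacl() program (or the wished-for mmsetfacl -R) remains the faster route.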
From makaplan at us.ibm.com Tue Aug 11 23:11:24 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 18:11:24 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA6182.9010507@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: On Linux you are free to use setfacl and getfacl commands on GPFS files. Works for me. As you say, at least you can avoid the overhead of shell interpretation and forking and whatnot for each file. Or use the APIs, see /usr/include/sys/acl.h. May need to install libacl-devel package and co. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 23:27:13 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 23:27:13 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: <55CA76C1.4050109@buzzard.me.uk> On 11/08/15 23:11, Marc A Kaplan wrote: > On Linux you are free to use setfacl and getfacl commands on GPFS files. > Works for me. Really, for NFSv4 ACL's? Given the RichACL kernel patches are only carried by SuSE I somewhat doubt that you can. http://www.bestbits.at/richacl/ People what to set NFSv4 ACL's on GPFS because when used with vfs_gpfs you can get Windows server/NTFS like rich permissions on your Windows SMB clients. You don't get that with Posix ACL's. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From usa-principal at gpfsug.org Tue Aug 11 23:36:11 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Tue, 11 Aug 2015 18:36:11 -0400 Subject: [gpfsug-discuss] Additional Details for Fall 2015 GPFS UG Meet Up in NYC Message-ID: <7d3395cb2575576c30ba55919124e44d@webmail.gpfsug.org> Hello, We are working on some additional information regarding the proposed NYC meet up. Below is the draft agenda for the "Meet the Developers" session. We are still working on closing on an exact date, and will communicate that soon --targeting September or October. Please e-mail Janet Ellsworth (janetell at us.ibm.com) if you are interested in attending. Janet is coordinating the logistics of the event. ? IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. ? IBM developer to demo future Graphical User Interface ? Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this !) ? Open Q&A with the development team Thoughts? Ideas? Best, Kristy GPFS UG - USA Principal PS - I believe we're still looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. 
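Returning to the ACL thread for a moment: the libacl route Marc points at (/usr/include/sys/acl.h plus -lacl) is only a handful of lines if you want to apply the same POSIX ACL to a batch of files from one process instead of forking setfacl once per file. Jonathan's caveat stands, of course - this touches POSIX ACLs only, not the NFSv4 ACLs most people here actually want. A minimal sketch, with the ACL text and file names as placeholders:

/* posixacl.c - apply one POSIX ACL to many files via libacl.
 * Sketch only; needs the libacl-devel headers Marc mentions.
 * Build:  gcc -o posixacl posixacl.c -lacl
 * Run:    ./posixacl 'u::rwx,g::r-x,o::---,u:someuser:r-x,m::rwx' file1 file2 ...
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/acl.h>

int main(int argc, char **argv)
{
    acl_t acl;
    int i, rc = 0;

    if (argc < 3) {
        fprintf(stderr, "usage: %s acl-text file...\n", argv[0]);
        return 1;
    }
    acl = acl_from_text(argv[1]);                /* same text forms setfacl accepts */
    if (acl == NULL) {
        perror("acl_from_text");
        return 1;
    }
    for (i = 2; i < argc; i++) {
        if (acl_set_file(argv[i], ACL_TYPE_ACCESS, acl) != 0) {
            perror(argv[i]);                     /* report and keep going */
            rc = 1;
        }
    }
    acl_free(acl);
    return rc;
}

Feed it file lists from find (or mmfind) via xargs and you at least collapse millions of exec calls into a handful.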
From chair at gpfsug.org Wed Aug 12 10:20:40 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Wed, 12 Aug 2015 10:20:40 +0100 Subject: [gpfsug-discuss] USA Co-Principal Message-ID: Hi All, We only had 1 self nomination for the co-principal of the USA side of the group. I've very much like to thank Bob Oesterlin for nominating himself to help Kristy with the USA side of things. I've spoken a few times with Bob "off-list" and he's helped me out with a few bits and pieces. As you may have seen, Kristy has been posting from usa-principal at gpfsug.org, I'll sort another address out for the co-principal role today. Both Kristy and Bob seem determined to get the USA group off the ground and I wish them every success with this. Simon Bob's profile follows: LinkedIn Profile: https://www.linkedin.com/in/boboesterlin Short Profile: I have over 15 years experience with GPFS. Prior to 2013 I was with IBM and wa actively involved with developing solutions for customers using GPFS both inside and outside IBM. Prior to my work with GPFS, I was active in the AFS and OpenAFS community where I served as one of founding Elder members of that group. I am well know inside IBM and have worked to maintain my contacts with development. After 2013, I joined Nuance Communications where I am the Sr Storage Engineer for the HPC grid. I have been active in the GPFS DeveloperWorks Forum and the mailing list, presented multiple times at IBM Edge and IBM Interconnect. I'm active in multiple IBM Beta programs, providing active feedback on new products and future directions. For the user group, my vision is to build an active user community where we can share expertise and skills to help each other. I'd also like to see this group be more active in shaping the future direction of GPFS. I would also like to foster broader co-operation and discussion with users and administrators of other clustered file systems. (Lustre and OpenAFS) From makaplan at us.ibm.com Wed Aug 12 15:43:03 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 12 Aug 2015 10:43:03 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA76C1.4050109@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ross.keeping at uk.ibm.com Wed Aug 12 15:44:38 2015 From: ross.keeping at uk.ibm.com (Ross Keeping3) Date: Wed, 12 Aug 2015 15:44:38 +0100 Subject: [gpfsug-discuss] Q4 Meet the devs location? Message-ID: Hey I was discussing with Simon and Claire where and when to run our Q4 meet the dev session. We'd like to take the next sessions up towards Scotland to give our Edinburgh/Dundee users a chance to participate sometime in November (around the 4.2 release date). 
I'm keen to hear from people who would be interested in attending an event in or near Scotland and is there anyone who can offer up a small meeting space for the day? Best regards, Ross Keeping IBM Spectrum Scale - Development Manager, People Manager IBM Systems UK - Manchester Development Lab Phone: (+44 161) 8362381-Line: 37642381 E-mail: ross.keeping at uk.ibm.com 3rd Floor, Maybrook House Manchester, M3 2EG United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Wed Aug 12 15:49:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 12 Aug 2015 14:49:27 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk>, Message-ID: I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 12 August 2015 15:43 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] fast ACL alter solution On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 From jonathan at buzzard.me.uk Wed Aug 12 17:29:00 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:29:00 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: <1439396940.3856.4.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 10:43 -0400, Marc A Kaplan wrote: > On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work > fine for me. > Yes they do, but they only set POSIX ACL's, and well most people are wanting to set NFSv4 ACL's so the getfacl and setfacl commands are of no use. > nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today > not at all, at least not for me ;-( Yep they only work against an NFSv4 mounted file system with NFSv4 ACL's. So if you NFSv4 exported a GPFS file system from an AIX node and mounted it on a Linux node that would work for you. It might also work if you NFSv4 exported a GPFS file system using the userspace ganesha NFS server with an appropriate VFS backend for GPFS and mounted on Linux https://github.com/nfs-ganesha/nfs-ganesha However last time I checked such a GPFS VFS backend for ganesha was still under development. 
The RichACL stuff might also in theory work except it is not in mainline kernels and there is certainly no advertised support by IBM for GPFS using it. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jonathan at buzzard.me.uk Wed Aug 12 17:35:55 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:35:55 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> , Message-ID: <1439397355.3856.11.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 14:49 +0000, Simon Thompson (Research Computing - IT Services) wrote: > I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? > The options for ACL's on GPFS are POSIX, NFSv4, all which is mixed NFSv4/POSIX and finally Samba. The first two are self explanatory. The mixed mode is best given a wide berth in my opinion. The fourth is well lets say "undocumented" last time I checked. You can set it, and it shows up when you query the file system but what it does I can only speculate. Take a look at the Korn shell of mmchfs if you doubt it exists. Try it out on a test file system with mmchfs -k samba My guess though I have never verified it, is that it changes the schematics of the NFSv4 ACL's to more closely match those of NTFS ACL's. A bit like some of the other GPFS settings you can fiddle with to make GPFS behave more like an NTFS file system. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From C.J.Walker at qmul.ac.uk Thu Aug 13 15:23:07 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Thu, 13 Aug 2015 16:23:07 +0200 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Message-ID: <55CCA84B.1080600@qmul.ac.uk> I've set up a couple of VM hosts to export some of its GPFS filesystem via NFS to machines on that VM host[1,2]. Is live migration of VMs likely to work? Live migration isn't a hard requirement, but if it will work, it could make our life easier. Chris [1] AIUI, this is explicitly permitted by the licencing FAQ. [2] For those wondering why we are doing this, it's that some users want docker - and they can probably easily escape to become root on the VM. Doing it this way permits us (we hope) to only export certain bits of the GPFS filesystem. From S.J.Thompson at bham.ac.uk Thu Aug 13 15:32:18 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 13 Aug 2015 14:32:18 +0000 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55CCA84B.1080600@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. 
Simon From dhildeb at us.ibm.com Fri Aug 14 18:54:59 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 14 Aug 2015 10:54:59 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: Thanks for the replies Simon... Chris, are you using -v to give the container access to the nfs subdir (and hence to a gpfs subdir) (and hence achieve a level of multi-tenancy)? Even without containers, I wonder if this could allow users to run their own VMs as root as well...and preventing them from becoming root on gpfs... I'd love for you to share your experience (mgmt and perf) with this architecture once you get it up and running. Some side benefits of this architecture that we have been thinking about as well is that it allows both the containers and VMs to be somewhat ephemeral, while the gpfs continues to run in the hypervisor... To ensure VMotion works relatively smoothly, just ensure each VM is given a hostname to mount that always routes back to the localhost nfs server on each machine...and I think things should work relatively smoothly. Note you'll need to maintain the same set of nfs exports across the entire cluster as well, so taht when a VM moves to another machine it immediately has an available export to mount. Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/13/2015 07:33 AM Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Sent by: gpfsug-discuss-bounces at gpfsug.org >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Mon Aug 17 13:50:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 17 Aug 2015 12:50:17 +0000 Subject: [gpfsug-discuss] Metadata compression Message-ID: <2D1E2C5B-499D-46D3-AC27-765E3B40E340@nuance.com> Anyone have any practical experience here, especially using Flash, compressing GPFS metadata? IBM points out that they specifically DON?T support it on there devices (SVC/V9000/StoreWize) Spectrum Scale FAQ: https://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html?lang=en (look for the word compressed) But ? I could not find any blanket statements that it?s not supported outright. 
They don?t mention anything about data, and since the default for GPFS is mixing data and metadata on the same LUNs you?re more than likely compressing the metadata as well. :-) Also, no statements that you must split metadata from data when using compression. Bob Oesterlin Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Wed Aug 19 11:53:39 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Wed, 19 Aug 2015 12:53:39 +0200 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: References: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Hi Marc, maybe a stupid question - is it expected that the 4.1.1 mmfind set of tools also works on a 4.1.0.8 environment ? -- Martin > On 11 Aug, 2015, at 21:45, Marc A Kaplan wrote: > > The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. > > If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Wed Aug 19 14:18:14 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 19 Aug 2015 09:18:14 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> References: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Message-ID: mmfind in 4.1.1 depends on some new functionality added to mmapplypolicy in 4.1.1. Depending which find predicates you happen to use, the new functions in mmapplypolicy will be invoked (or not.) If you'd like to try it out - go ahead - it either works or it doesn't. If it doesn't you can also try using the new mmapplypolicy script and the new tsapolicy binary on the old GPFS system. BUT of course that's not supported. AFAIK, nothing bad will happen, but it's not supported. mmfind in 4.1.1 ships as a "sample", so it is not completely supported, but we will take bug reports and constructive criticism seriously, when you run it on a GPFS cluster that has been completely upgraded to 4.1.1. (Please don't complain that it does not work on a back level system.) For testing this kind of functionality, GPFS can be run on a single node or VM. You can emulate an NSD volume by "giving" mmcrnsd a GB sized file (or larger) instead of a block device. (Also not supported and not very swift but it works.) So there's no need to even "provision" a disk. --marc of GPFS -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamiedavis at us.ibm.com Wed Aug 19 14:25:35 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Wed, 19 Aug 2015 13:25:35 +0000 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com><09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Message-ID: <201508191343.t7JDhlaU022402@d01av04.pok.ibm.com> An HTML attachment was scrubbed... 
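For anyone who wants to try the revamped mmfind without touching a production cluster, Marc's file-backed NSD trick for a throwaway single-node test system looks roughly like the following. It is a sketch only and, as he says, unsupported; the node name, file system name, backing-file path and sizes are all invented for illustration.

# create a plain file to stand in for a disk (unsupported - test systems only)
fallocate -l 2G /var/tmp/fakensd0     # or dd if=/dev/zero of=/var/tmp/fakensd0 bs=1M count=2048

cat > fake.stanza <<'EOF'
%nsd:
  device=/var/tmp/fakensd0
  nsd=fakensd0
  servers=testnode1
  usage=dataAndMetadata
EOF

mmcrnsd -F fake.stanza
mmcrfs testfs -F fake.stanza -A no -T /gpfs/testfs
mmmount testfs

# then point mmfind at it, find-style (see the README in samples/ilm for any
# build or setup steps it needs):
/usr/lpp/mmfs/samples/ilm/mmfind /gpfs/testfs -type f -name '*.tmp' -ls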
URL: From usa-principal at gpfsug.org Thu Aug 20 14:23:41 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Thu, 20 Aug 2015 09:23:41 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Message-ID: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal From bbanister at jumptrading.com Thu Aug 20 16:42:09 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 20 Aug 2015 15:42:09 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. From Kevin.Buterbaugh at Vanderbilt.Edu Thu Aug 20 17:37:37 2015 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 20 Aug 2015 16:37:37 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. 
I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Aug 20 19:09:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 20 Aug 2015 18:09:27 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com>, <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Message-ID: With my uk hat on, id suggest its also important to factor in IBM's ability to ship people in as well. I know last year there was an IBM GPFS event on the Monday at SC as I spoke there, I'm assuming the GPFS UG will really be an extended version of that, and there were quite a a lot in the audience for that. I know I made some really good contacts with both users and IBM at the event (and I encourage people to speak as its a great way of meeting people!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 20 August 2015 17:37 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From dhildeb at us.ibm.com Thu Aug 20 17:12:09 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 20 Aug 2015 09:12:09 -0700 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center From: Bryan Banister To: gpfsug main discussion list Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [ mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kallbac at iu.edu Thu Aug 20 20:00:21 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 20 Aug 2015 19:00:21 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 12:26:47 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 11:26:47 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. The docs are a little lacking in detail of how you create NSD disks on clients, I've tried using: %nsd: device=sdb2 nsd=cl0901u17_hawc_sdb2 servers=cl0901u17 pool=system.log failureGroup=90117 (and also with usage=metadataOnly as well), however mmcrsnd -F tells me "mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license designation" Which is correct as its a client system, though HAWC is supposed to be able to run on client systems. I know for LROC you have to set usage=localCache, is there a new value for using HAWC? I'm also a little unclear about failureGroups for this. The docs suggest setting the HAWC to be replicated for client systems, so I guess that means putting each client node into its own failure group? Thanks Simon From Robert.Oesterlin at nuance.com Wed Aug 26 12:46:59 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 11:46:59 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:23:36 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:23:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 12:46 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Aug 26 13:27:36 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 12:27:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Wed Aug 26 13:50:44 2015 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 26 Aug 2015 12:50:44 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: , <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <201D6001C896B846A9CFC2E841986AC1454FFB0B@mailnycmb2a.winmail.deshaw.com> There is a more severe issue with LROC enabled in saveInodePtrs() which results in segfaults and loss of acknowledged writes, which has caused us to roll back all LROC for now. We are testing an efix (ref Defect 970773, IV76155) now which addresses this. But I would advise against running with LROC/HAWC in production without this fix. We experienced this on 4.1.0-6, but had the efix built against 4.1.1-1, so the exposure seems likely to be all 4.1 versions. Thx Paul Sent with Good (www.good.com) ________________________________ From: gpfsug-discuss-bounces at gpfsug.org on behalf of Oesterlin, Robert Sent: Wednesday, August 26, 2015 8:27:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:57:56 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:57:56 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. 
> >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss From C.J.Walker at qmul.ac.uk Wed Aug 26 14:46:56 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Wed, 26 Aug 2015 14:46:56 +0100 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: <55DDC350.8010603@qmul.ac.uk> On 13/08/15 15:32, Simon Thompson (Research Computing - IT Services) wrote: > >> I've set up a couple of VM hosts to export some of its GPFS filesystem >> via NFS to machines on that VM host[1,2]. > > Provided all your sockets no the VM host are licensed. Yes, they are. > >> Is live migration of VMs likely to work? >> >> Live migration isn't a hard requirement, but if it will work, it could >> make our life easier. > > Live migration using a GPFS file-system on the hypervisor node should work > (subject to the usual caveats of live migration). > > Whether live migration and your VM instances would still be able to NFS > mount (assuming loopback address?) if they moved to a different > hypervisor, pass, you might get weird NFS locks. And if they are still > mounting from the original VM host, then you are not doing what the FAQ > says you can do. > Yes, that's the intent - VMs get access to GPFS from the hypervisor - that complies with the licence and, presumably, should get better performance. It sounds like our problem would be the NFS end of this if we try a live migrate. Chris From C.J.Walker at qmul.ac.uk Wed Aug 26 15:15:48 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Wed, 26 Aug 2015 15:15:48 +0100 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: <55DDCA14.8010103@qmul.ac.uk> On 14/08/15 18:54, Dean Hildebrand wrote: > Thanks for the replies Simon... > > Chris, are you using -v to give the container access to the nfs subdir > (and hence to a gpfs subdir) (and hence achieve a level of > multi-tenancy)? -v option to what? > Even without containers, I wonder if this could allow > users to run their own VMs as root as well...and preventing them from > becoming root on gpfs... > > I'd love for you to share your experience (mgmt and perf) with this > architecture once you get it up and running. A quick and dirty test: From a VM: -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240 real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s user 0m0.005s 0m0.007s 0m0.006s 0m0.003s 0m0.002s 0m0.004s sys 0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s From the underlying hypervisor. 
real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s user 0m0.006s 0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s sys 0m5.447s 0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s So there's a factor of just over 2 slowdown. As it's still 500MB/s, I think it's good enough for now. The machine has a 10Gbit/s network connection and both hypervisor and VM are running SL6. > Some side benefits of this > architecture that we have been thinking about as well is that it allows > both the containers and VMs to be somewhat ephemeral, while the gpfs > continues to run in the hypervisor... Indeed. This is another advantage. If we were running Debian, it would be possible to export part of a filesystem to a VM. Which would presumably work. In redhat derived OSs (we are currently using Scientific Linux), I don't believe it is - hence exporting via NFS. > > To ensure VMotion works relatively smoothly, just ensure each VM is > given a hostname to mount that always routes back to the localhost nfs > server on each machine...and I think things should work relatively > smoothly. Note you'll need to maintain the same set of nfs exports > across the entire cluster as well, so taht when a VM moves to another > machine it immediately has an available export to mount. Yes, we are doing this. Simon alludes to potential problems at the NFS layer on live migration. Otherwise, yes indeed the setup should be fine. I'm not familiar enough with the details of NFS - but I have heard NFS described as "a stateless filesystem with state". It's the stateful bits I'm concerned about. Chris > > Dean Hildebrand > IBM Almaden Research Center > > > Inactive hide details for "Simon Thompson (Research Computing - IT > Services)" ---08/13/2015 07:33:16 AM--->I've set up a couple"Simon > Thompson (Research Computing - IT Services)" ---08/13/2015 07:33:16 > AM--->I've set up a couple of VM hosts to export some of its GPFS > filesystem >via NFS to machines on that > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/13/2015 07:33 AM > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host > Sent by: gpfsug-discuss-bounces at gpfsug.org > > ------------------------------------------------------------------------ > > > > > >I've set up a couple of VM hosts to export some of its GPFS filesystem > >via NFS to machines on that VM host[1,2]. > > Provided all your sockets no the VM host are licensed. > > >Is live migration of VMs likely to work? > > > >Live migration isn't a hard requirement, but if it will work, it could > >make our life easier. > > Live migration using a GPFS file-system on the hypervisor node should work > (subject to the usual caveats of live migration). > > Whether live migration and your VM instances would still be able to NFS > mount (assuming loopback address?) if they moved to a different > hypervisor, pass, you might get weird NFS locks. And if they are still > mounting from the original VM host, then you are not doing what the FAQ > says you can do. 
> > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From tpathare at sidra.org Wed Aug 26 16:43:51 2015 From: tpathare at sidra.org (Tushar Pathare) Date: Wed, 26 Aug 2015 15:43:51 +0000 Subject: [gpfsug-discuss] Welcome to the "gpfsug-discuss" mailing list In-Reply-To: References: Message-ID: <06133E83-2DCB-4A1C-868A-CD4FDAC61A27@sidra.org> Hello Folks, This is Tushar Pathare from Sidra Medical & Research Centre.I am a HPC Administrator at Sidra. Before joining Sidra I worked with IBM for about 7 years with GPFS Test Team,Pune,India with partner lab being IBM Poughkeepsie,USA Sidra has total GPFS storage of about 1.5PB and growing.Compute power about 5000 cores acquired and growing. Sidra is into Next Generation Sequencing and medical research related to it. Its a pleasure being part of this group. Thank you. Tushar B Pathare High Performance Computing (HPC) Administrator General Parallel File System Scientific Computing Bioinformatics Division Research Sidra Medical and Research Centre PO Box 26999 | Doha, Qatar Burj Doha Tower,Floor 8 D +974 44042250 | M +974 74793547 tpathare at sidra.org | www.sidra.org On 8/26/15, 5:04 PM, "gpfsug-discuss-bounces at gpfsug.org on behalf of gpfsug-discuss-request at gpfsug.org" wrote: >Welcome to the gpfsug-discuss at gpfsug.org mailing list! Hello and >welcome. > > Please introduce yourself to the members with your first post. > > A quick hello with an overview of how you use GPFS, your company >name, market sector and any other interesting information would be >most welcomed. > >Please let us know which City and Country you live in. > >Many thanks. > >GPFS UG Chair > > >To post to this list, send your email to: > > > >General information about the mailing list is at: > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >If you ever want to unsubscribe or change your options (eg, switch to >or from digest mode, change your password, etc.), visit your >subscription page at: > > http://gpfsug.org/mailman/options/gpfsug-discuss/tpathare%40sidra.org > > >You can also make such adjustments via email by sending a message to: > > gpfsug-discuss-request at gpfsug.org > >with the word `help' in the subject or body (don't include the >quotes), and you will get back a message with instructions. > >You must know your password to change your options (including changing >the password, itself) or to unsubscribe. It is: > > p3nguins > >Normally, Mailman will remind you of your gpfsug.org mailing list >passwords once every month, although you can disable this if you >prefer. This reminder will also include instructions on how to >unsubscribe or change your account options. There is also a button on >your options page that will email your current password to you. Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. 
Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center. From dhildeb at us.ibm.com Thu Aug 27 01:22:52 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 26 Aug 2015 17:22:52 -0700 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Thu Aug 27 08:42:34 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 27 Aug 2015 07:42:34 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Dean, Thanks. 
I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},${backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques]"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" > wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. 
The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From ckrafft at de.ibm.com Thu Aug 27 10:36:27 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Thu, 27 Aug 2015 11:36:27 +0200 Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) Message-ID: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> Dear GPFS folks, I know - it may not be an optimal setup for GPFS ... but is someone willing to share technical best practices when using GPFS with SVC (and XiV). >From the past I remember some recommendations concerning the nr of vDisks in SVC and certainly block size (XiV=1M) could be an issue. Thank you very much for sharing any insights with me. Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06057114.gif Type: image/gif Size: 1851 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Thu Aug 27 12:58:12 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 27 Aug 2015 11:58:12 +0000 Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) In-Reply-To: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> References: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> Message-ID: IBM in general doesn?t have a comprehensive set of best practices around Spectrum Scale (trying to get used to that!) and SVC or storage system like XIV (or HP 3PAR). From my IBM days (a few years back) I used both with GPFS successfully. I do recall some discussion regarding a larger block size, but haven?t seen any recent updates. (Scott Fadden, are you listening?) Larger block sizes are problematic for file systems with lots of small files. (like ours) - Since SVC is striping data across multiple storage LUNs, and GPFS is striping as well, what?s the possible impact? My thought would be to use image mode vdisks, but that sort of defeats the purpose/utility of SVC. - IBM specifically points out not to use compression on the SVC/V9000 with GPFS metadata, so if you use these features be careful. 
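As a concrete illustration of the 1M block-size point above (hypothetical file system and stanza file names; a sketch, not an IBM recommendation):

# check the block size of an existing file system
mmlsfs gpfs01 -B

# when creating a new file system on XIV/SVC-backed vdisks, a 1 MiB block
# size lines up with the XIV 1M stripe mentioned earlier in the thread
mmcrfs gpfs01 -F nsd.stanza -B 1M -A yes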
Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of Christoph Krafft Reply-To: gpfsug main discussion list Date: Thursday, August 27, 2015 at 4:36 AM To: "gpfsug-discuss at gpfsug.org" Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) Dear GPFS folks, I know - it may not be an optimal setup for GPFS ... but is someone willing to share technical best practices when using GPFS with SVC (and XiV). From the past I remember some recommendations concerning the nr of vDisks in SVC and certainly block size (XiV=1M) could be an issue. Thank you very much for sharing any insights with me. Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group ________________________________ Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH [cid:2__=8FBBF43DDFA7F6638f9e8a93df938690918c8FB@] Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany ________________________________ IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: ecblank.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06057114.gif Type: image/gif Size: 1851 bytes Desc: 06057114.gif URL: From S.J.Thompson at bham.ac.uk Thu Aug 27 15:17:19 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 27 Aug 2015 14:17:19 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: Oh yeah, I see what you mean, I've just looking on another cluster with LROC drives and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 13:27 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Aug 27 15:30:14 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 27 Aug 2015 14:30:14 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <3B636593-906F-4AEC-A3DF-1A24376B4841@nuance.com> What do they say on that side of the pond? ?Bob?s your uncle!? :-) Yea, same for me. Pretty big oops if you ask me. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Thursday, August 27, 2015 at 9:17 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Oh yeah, I see what you mean, I've just looking on another cluster with LROC drives and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhildeb at us.ibm.com Thu Aug 27 20:24:50 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 27 Aug 2015 12:24:50 -0700 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Simon, This appears to be a mistake, as using clients for the System.log pool should not require a server license (should be similar to lroc).... thanks for opening the PMR... Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/27/2015 12:42 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},$ {backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand Reply-To: gpfsug main discussion list Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. 
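For anyone following along: once the system.log pool Dean describes exists, HAWC itself appears to be switched on per file system via the write cache threshold covered in the 4.1.1 documentation linked earlier in the thread. A sketch, with a hypothetical file system name and assuming that option name is correct:

# writes smaller than the threshold are hardened in the log pool (HAWC on)
mmchfs gpfsvm --write-cache-threshold 64K

# setting the threshold back to 0 disables HAWC again
mmchfs gpfsvm --write-cache-threshold 0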
Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [attachment "graycol.gif" deleted by Dean Hildebrand/Almaden/IBM] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From dhildeb at us.ibm.com Thu Aug 27 21:36:26 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 27 Aug 2015 13:36:26 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55DDCA14.8010103@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> <55DDCA14.8010103@qmul.ac.uk> Message-ID: Hi Christopher, > > > > Chris, are you using -v to give the container access to the nfs subdir > > (and hence to a gpfs subdir) (and hence achieve a level of > > multi-tenancy)? 
> > -v option to what? I was referring to how you were using docker/containers to expose the NFS storage to the container...there are several different ways to do it and one way is to simply expose a directory to the container via the -v option https://docs.docker.com/userguide/dockervolumes/ > > > Even without containers, I wonder if this could allow > > users to run their own VMs as root as well...and preventing them from > > becoming root on gpfs... > > > > > > I'd love for you to share your experience (mgmt and perf) with this > > architecture once you get it up and running. > > A quick and dirty test: > > From a VM: > -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240 > real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s > user 0m0.005s 0m0.007s 0m0.006s 0m0.003s 0m0.002s 0m0.004s > sys 0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s > > From the underlying hypervisor. > > real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s > user 0m0.006s 0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s > sys 0m5.447s 0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s > > So there's a factor of just over 2 slowdown. > > As it's still 500MB/s, I think it's good enough for now. Interesting test... I assume you have VLANs setup so that the data doesn't leave the VM, go to the network switch, and then back to the nfs server in the hypervisor again? Also, there might be a few NFS tuning options you could try, like increasing the number of nfsd threads, etc...but there are extra data copies occuring so the perf will never match... > > The machine has a 10Gbit/s network connection and both hypervisor and VM > are running SL6. > > > Some side benefits of this > > architecture that we have been thinking about as well is that it allows > > both the containers and VMs to be somewhat ephemeral, while the gpfs > > continues to run in the hypervisor... > > Indeed. This is another advantage. > > If we were running Debian, it would be possible to export part of a > filesystem to a VM. Which would presumably work. I'm not aware of this...is this through VirtFS or something else? In redhat derived OSs > (we are currently using Scientific Linux), I don't believe it is - hence > exporting via NFS. > > > > > To ensure VMotion works relatively smoothly, just ensure each VM is > > given a hostname to mount that always routes back to the localhost nfs > > server on each machine...and I think things should work relatively > > smoothly. Note you'll need to maintain the same set of nfs exports > > across the entire cluster as well, so taht when a VM moves to another > > machine it immediately has an available export to mount. > > Yes, we are doing this. > > Simon alludes to potential problems at the NFS layer on live migration. > Otherwise, yes indeed the setup should be fine. I'm not familiar enough > with the details of NFS - but I have heard NFS described as "a stateless > filesystem with state". It's the stateful bits I'm concerned about. Are you using v3 or v4? It doesn't really matter though, as in either case, gpfs would handle the state failover parts... Ideally the vM would umount the local nfs server, do VMotion, and then mount the new local nfs server, but given there might be open files...it makes sense that this may not be possible... 
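A rough sketch of the "same export everywhere, hostname always resolves to the local NFS server" idea Dean describes, with invented paths, addresses and names:

# /etc/exports on every hypervisor -- identical everywhere, exporting the
# GPFS path to the VM bridge network with a fixed fsid
/gpfs/vmdata  192.168.122.0/24(rw,sync,no_root_squash,fsid=101)

# reload the export table on the hypervisor
exportfs -ra

# in each guest, pin a name to the bridge address that always belongs to the
# local hypervisor (e.g. in /etc/hosts)
192.168.122.1  nfs-local

# guest /etc/fstab entry -- unchanged across a live migration
nfs-local:/gpfs/vmdata  /vmdata  nfs  defaults,vers=3  0 0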
Dean > > Chris > > > > > Dean Hildebrand > > IBM Almaden Research Center > > > > > > Inactive hide details for "Simon Thompson (Research Computing - IT > > Services)" ---08/13/2015 07:33:16 AM--->I've set up a couple"Simon > > Thompson (Research Computing - IT Services)" ---08/13/2015 07:33:16 > > AM--->I've set up a couple of VM hosts to export some of its GPFS > > filesystem >via NFS to machines on that > > > > From: "Simon Thompson (Research Computing - IT Services)" > > > > To: gpfsug main discussion list > > Date: 08/13/2015 07:33 AM > > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host > > Sent by: gpfsug-discuss-bounces at gpfsug.org > > > > ------------------------------------------------------------------------ > > > > > > > > > > >I've set up a couple of VM hosts to export some of its GPFS filesystem > > >via NFS to machines on that VM host[1,2]. > > > > Provided all your sockets no the VM host are licensed. > > > > >Is live migration of VMs likely to work? > > > > > >Live migration isn't a hard requirement, but if it will work, it could > > >make our life easier. > > > > Live migration using a GPFS file-system on the hypervisor node should work > > (subject to the usual caveats of live migration). > > > > Whether live migration and your VM instances would still be able to NFS > > mount (assuming loopback address?) if they moved to a different > > hypervisor, pass, you might get weird NFS locks. And if they are still > > mounting from the original VM host, then you are not doing what the FAQ > > says you can do. > > > > Simon > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aquan at o2.pl Fri Aug 28 16:12:23 2015 From: aquan at o2.pl (=?UTF-8?Q?aquan?=) Date: Fri, 28 Aug 2015 17:12:23 +0200 Subject: [gpfsug-discuss] =?utf-8?q?Unix_mode_bits_and_mmapplypolicy?= Message-ID: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Hello, This is my first time here. I'm a computer science student from Poland and I use GPFS during my internship at DESY. GPFS is a completely new experience to me, I don't know much about file systems and especially those used on clusters. I would like to ask about the unix mode bits and mmapplypolicy. What I noticed is that when I do the following: 1. Recursively call chmod on some directory (i.e. chmod -R 0777 some_directory) 2. Call mmapplypolicy to list mode (permissions), the listed modes of files don't correspond exactly to the modes that I set with chmod. However, if I wait a bit between step 1 and 2, the listed modes are correct. It seems that the mode bits are updated somewhat asynchronically and if I run mmapplypolicy too soon, they will contain old values. I would like to ask if it is possible to make sure that before calling mmputacl, the mode bits of that directory will be up to date on the list generated by a policy? - Omer Sakarya -------------- next part -------------- An HTML attachment was scrubbed... 
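The kind of list policy Omer describes might look like the sketch below (path, rule and file names invented); note, as Marc explains in the replies that follow, that the scan reads whatever has been flushed to disk, so a chmod issued moments earlier may not be reflected yet.

/* mode-list.pol -- hypothetical: list every file (and directory) with its mode bits */
RULE EXTERNAL LIST 'modes' EXEC ''
RULE 'show-modes' LIST 'modes' DIRECTORIES_PLUS SHOW(MODE)

# generate the list without executing anything, writing it under the /tmp/modes prefix
mmapplypolicy /gpfs/fs0/some_directory -P mode-list.pol -I defer -f /tmp/modes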
URL: From makaplan at us.ibm.com Fri Aug 28 17:55:21 2015 From: makaplan at us.ibm.com (makaplan at us.ibm.com) Date: Fri, 28 Aug 2015 16:55:21 +0000 Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy In-Reply-To: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> References: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: An HTML attachment was scrubbed... URL: From kallbac at iu.edu Sat Aug 29 09:23:45 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Sat, 29 Aug 2015 04:23:45 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> Message-ID: <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A wrote: > It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. > > Best, > Kristy > > On Aug 20, 2015, at 12:12 PM, Dean Hildebrand wrote: > >> Hi Bryan, >> >> Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) >> >> Dean Hildebrand >> IBM Almaden Research Center >> >> >> Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi >> >> From: Bryan Banister >> To: gpfsug main discussion list >> Date: 08/20/2015 08:42 AM >> Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location >> Sent by: gpfsug-discuss-bounces at gpfsug.org >> >> >> >> Hi Kristy, >> >> Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! >> >> I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule >> >> I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. 
>> >> Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: >> 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) >> 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? >> 2) Will IBM presenters be available on the Saturday before or after? >> 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? >> 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? >> 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? >> >> As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. >> >> I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! >> >> Cheers, >> -Bryan >> >> -----Original Message----- >> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org >> Sent: Thursday, August 20, 2015 8:24 AM >> To: gpfsug-discuss at gpfsug.org >> Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location >> >> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. >> >> Many thanks to Janet for her efforts in organizing the venue and speakers. >> >> Date: Wednesday, October 7th >> Place: IBM building at 590 Madison Avenue, New York City >> Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well >> :-) >> >> Agenda >> >> IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. >> IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team >> >> We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. >> >> We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. >> >> As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. 
>> >> Best, >> Kristy >> GPFS UG - USA Principal >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> ________________________________ >> >> Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From bbanister at jumptrading.com Sat Aug 29 22:17:44 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Sat, 29 Aug 2015 21:17:44 +0000 Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy In-Reply-To: References: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> Before I try these mmfsctl commands, what are the implications of suspending writes? I assume the entire file system will be quiesced? What if NSD clients are non responsive to this operation? Does a deadlock occur or is there a risk of a deadlock? Thanks in advance! -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of makaplan at us.ibm.com Sent: Friday, August 28, 2015 11:55 AM To: gpfsug-discuss at gpfsug.org Cc: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Unix mode bits and mmapplypolicy This is due to a design trade-off in mmapplypolicy. Mmapplypolicy bypasses locks and caches - so it doesn't "see" inode&metadata changes until they are flushed to disk. I believe this is hinted at in our publications. You can force a flush with`mmfsctl fsname suspend-write; mmfsctl fsname resume` ----- Original message ----- From: aquan > Sent by: gpfsug-discuss-bounces at gpfsug.org To: gpfsug-discuss at gpfsug.org Cc: Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy Date: Fri, Aug 28, 2015 11:12 AM Hello, This is my first time here. I'm a computer science student from Poland and I use GPFS during my internship at DESY. GPFS is a completely new experience to me, I don't know much about file systems and especially those used on clusters. 
I would like to ask about the unix mode bits and mmapplypolicy. What I noticed is that when I do the following: 1. Recursively call chmod on some directory (i.e. chmod -R 0777 some_directory) 2. Call mmapplypolicy to list mode (permissions), the listed modes of files don't correspond exactly to the modes that I set with chmod. However, if I wait a bit between step 1 and 2, the listed modes are correct. It seems that the mode bits are updated somewhat asynchronically and if I run mmapplypolicy too soon, they will contain old values. I would like to ask if it is possible to make sure that before calling mmputacl, the mode bits of that directory will be up to date on the list generated by a policy? - Omer Sakarya _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Sun Aug 30 01:16:02 2015 From: makaplan at us.ibm.com (makaplan at us.ibm.com) Date: Sun, 30 Aug 2015 00:16:02 +0000 Subject: [gpfsug-discuss] mmfsctl fs suspend-write Unix mode bits and mmapplypolicy In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com>, <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> An HTML attachment was scrubbed... URL: From aquan at o2.pl Mon Aug 31 16:49:06 2015 From: aquan at o2.pl (=?UTF-8?Q?aquan?=) Date: Mon, 31 Aug 2015 17:49:06 +0200 Subject: [gpfsug-discuss] =?utf-8?q?mmfsctl_fs_suspend-write_Unix_mode_bit?= =?utf-8?q?s_andmmapplypolicy?= In-Reply-To: <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> Message-ID: <1834e8cf.3c47fde.55e47772.d9226@o2.pl> Thank you for responding to my post. Is there any other way to make sure, that the mode bits are up-to-date when applying a policy? What would happen if a user changed mode bits when the policy that executes mmputacl is run? Which change will be the result in the end, the mmputacl mode bits or chmod mode bits? Dnia 30 sierpnia 2015 2:16 makaplan at us.ibm.com napisa?(a): I don't know exactly how suspend-write works.? But I am NOT suggesting that is be used lightly.It's there for special situations.? Obviously any process trying to change anything in the filesystemis going to be blocked until mmfsctl fs resume.?? 
That should not cause a GPFS deadlock, but systems that depend on GPFS responding may be unhappy...
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
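To make the advice in this thread concrete, here is a minimal sketch of the flush-then-scan sequence. The device name gpfs0, the policy file and the -f prefix are placeholders, and the exact behaviour of -I defer should be checked against your release:

  # quiesce the file system just long enough to flush dirty inodes, then resume
  mmfsctl gpfs0 suspend-write
  mmfsctl gpfs0 resume

  # throw-away policy that lists every file together with its mode bits
  cat > /tmp/listmodes.pol <<'EOF'
  RULE EXTERNAL LIST 'modes' EXEC ''
  RULE 'allfiles' LIST 'modes' SHOW(VARCHAR(MODE))
  EOF

  # -I defer writes the candidate list under the -f prefix instead of
  # invoking an external script per file
  mmapplypolicy gpfs0 -P /tmp/listmodes.pol -I defer -f /tmp/modescan

Run immediately after the resume, the scan should then see the mode bits that were just flushed to disk.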
From kallbac at iu.edu Wed Aug 5 03:56:32 2015
From: kallbac at iu.edu (Kristy Kallback-Rose)
Date: Tue, 4 Aug 2015 22:56:32 -0400
Subject: [gpfsug-discuss] GPFS UG User Group@USA
In-Reply-To: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org>
References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org>
Message-ID: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu>

Hello, Thanks Simon and all for moving the USA-based group forward. You've got a great user group in the UK and I am grateful it's being extended. I'm looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans:

We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user-led session -- sharing their experiences or use case scenarios with Spectrum Scale.

For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event.

Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda.

Best,
Kristy

On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) wrote:
>
> As many of you know, there has been some interest in creating a USA based section of the group. There's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions.
>
> We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded.
>
> It's proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years.
>
> Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time.
We're planning also to create a co-principal (see details below). > > I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. > > Short Bio from Kristy: > > "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. > > LinkedIn Profile: www.linkedin.com/in/kristykallbackrose > " > > We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: > > A paragraph covering their credentials; > A paragraph covering what they would bring to the group; > A paragraph setting out their vision for the group for the next two years. > > Note that this should be a GPFS customer based in the USA. > > If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. > > Kristy will be following up later with some announcements about the USA group activities. > > Simon > GPFS UG Chair > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Robert.Oesterlin at nuance.com Wed Aug 5 12:12:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 5 Aug 2015 11:12:17 +0000 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Message-ID: <315FAEF7-DEC0-4252-BA3B-D318DE05933C@nuance.com> Hi Kristy Thanks for stepping up to the duties for the USA based user group! Getting the group organized is going to be a challenge and I?m happy to help out where I can. Regarding some of the planning for SC15, I wonder if you could drop me a note off the mailing list to discuss this, since I have been working with some others at IBM on a BOF proposal for SC15 and these two items definitely overlap. My email is robert.oesterlin at nuance.com (probably end up regretting putting that out on the mailing list at some point ? 
sigh) Bob Oesterlin Sr Storage Engineer, Nuance Communications From: > on behalf of Kristy Kallback-Rose Reply-To: gpfsug main discussion list Date: Tuesday, August 4, 2015 at 9:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS UG User Group at USA Hello, Thanks Simon and all for moving the USA-based group forward. You?ve got a great user group in the UK and am grateful it?s being extended. I?m looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user lead session --sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) > wrote: As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. Short Bio from Kristy: "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). 
Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. LinkedIn Profile: www.linkedin.com/in/kristykallbackrose " We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. Kristy will be following up later with some announcements about the USA group activities. Simon GPFS UG Chair _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 5 20:23:45 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 5 Aug 2015 19:23:45 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: Just picking this topic back up. Does anyone have any comments/thoughts on these questions? Thanks Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Luke Raimbach [Luke.Raimbach at crick.ac.uk] Sent: 20 July 2015 08:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets Can I add to this list of questions? Apparently, one cannot set immutable, or append-only attributes on files / directories within an AFM cache. However, if I have an independent writer and set immutability at home, what does the AFM IW cache do about this? Or does this restriction just apply to entire filesets (which would make more sense)? Cheers, Luke. -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: 19 July 2015 11:45 To: gpfsug main discussion list Subject: [gpfsug-discuss] 4.1.1 immutable filesets I was wondering if anyone had looked at the immutable fileset features in 4.1.1? In particular I was looking at the iam compliant mode, but I've a couple of questions. * if I have an iam compliant fileset, and it contains immutable files or directories, can I still unlink and delete the filset? * will HSM work with immutable files? I.e. Can I migrate files to tape and restore them? The docs mention that extended attributes can be updated internally by dmapi, so I guess HSM might work? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 
06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 14:46:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 13:46:04 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets Message-ID: On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" wrote: >* if I have an iam compliant fileset, and it contains immutable files or >directories, can I still unlink and delete the filset? So just to answer my own questions here. (Actually I tried in non-compliant mode, rather than full compliance, but I figured this was the mode I actually need as I might need to reset the immutable time back earlier to allow me to delete something that shouldn't have gone in). Yes, I can both unlink and delete an immutable fileset which has immutable files which are non expired in it. >* will HSM work with immutable files? I.e. Can I migrate files to tape >and restore them? The docs mention that extended attributes can be >updated internally by dmapi, so I guess HSM might work? And yes, HSM files work. I created a file, made it immutable, backed up, migrated it: $ mmlsattr -L BHAM_DATASHARE_10.zip file name: BHAM_DATASHARE_10.zip metadata replication: 2 max 2 data replication: 2 max 2 immutable: yes appendOnly: no indefiniteRetention: no expiration Time: Fri Aug 7 14:45:00 2015 flags: storage pool name: tier2 fileset name: rds-projects-2015-thompssj-01 snapshot name: creation time: Fri Aug 7 14:38:30 2015 Windows attributes: ARCHIVE OFFLINE READONLY Encrypted: no I was then able to recall the file. Simon From wsawdon at us.ibm.com Fri Aug 7 16:13:31 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:13:31 -0700 Subject: [gpfsug-discuss] Hello Message-ID: Hello, Although I am new to this user group, I've worked on GPFS at IBM since before it was a product.! I am interested in hearing from the group about the features you like or don't like and of course, what features you would like to see. Wayne Sawdon STSM; IBM Research Manager | Cloud Data Management Phone: 1-408-927-1848 E-mail: wsawdon at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Aug 7 16:27:33 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:27:33 -0700 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: Message-ID: > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. 
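For anyone wanting to reproduce the test Simon describes above, a rough sketch of the steps follows. The device name, the exact option values and the timestamp are assumptions; in an IAM-mode fileset the file's atime is taken as the retention/expiration time and removing write permission marks the file immutable, which matches the atime-then-chmod sequence that comes up later in this thread:

  # put an existing independent fileset into the non-compliant IAM mode
  mmchfileset gpfs0 rds-projects-2015-thompssj-01 --iam-mode noncompliant

  # as an ordinary user: choose the expiration time by setting atime,
  # then remove write permission to flip the file to immutable
  touch -a -t 201508071445 BHAM_DATASHARE_10.zip
  chmod a-w BHAM_DATASHARE_10.zip

  # confirm the immutable flag and expiration time recorded by GPFS
  mmlsattr -L BHAM_DATASHARE_10.zip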
I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Aug 7 16:36:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:36:03 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: I did only try in nc mode, so possibly if its fully compliant it wouldn't have let me delete the fileset. One other observation. As a user Id set the atime and chmod -w the file. Once it had expired, I was then unable to reset the atime into the future. (I could as root). I'm not sure what the expected behaviour should be, but I was sorta surprised that I could initially set the time as the user, but then not be able to extend even once it had expired. Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Wayne Sawdon [wsawdon at us.ibm.com] Sent: 07 August 2015 16:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne From S.J.Thompson at bham.ac.uk Fri Aug 7 16:56:17 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:56:17 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. Does anyone have a script to do this already? Surely there is a better way? Thanks Simon From rclee at lbl.gov Fri Aug 7 17:30:21 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 09:30:21 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: Message-ID: <55C4DD1D.7000402@lbl.gov> We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. 
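Until something like that exists, the numbers have to be stitched together from the existing commands mentioned in this thread; for example (device name is a placeholder):

  # per-fileset maximum and currently allocated inodes (cheap)
  mmlsfileset gpfs0 -L

  # per-fileset used-inode counts (exact, but can be slow on large file systems)
  mmlsfileset gpfs0 -i

  # file-system-wide inode usage summary
  mmdf gpfs0 -F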
Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 17:49:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 16:49:03 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From ckerner at ncsa.uiuc.edu Fri Aug 7 17:41:14 2015 From: ckerner at ncsa.uiuc.edu (Chad Kerner) Date: Fri, 7 Aug 2015 11:41:14 -0500 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: <55C4DD1D.7000402@lbl.gov> Message-ID: <20150807164114.GA29652@logos.ncsa.illinois.edu> You can use the mmlsfileset DEVICE -L option to see the maxinodes and allocated inodes. I have a perl script that loops through all of our file systems every hour and scans for it. 
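A rough shell equivalent of that hourly check, strictly as a sketch (device name, thresholds and the column handling are assumptions, and Chad's perl no doubt does this more carefully):

  #!/bin/bash
  # warn about (and optionally grow) independent filesets whose allocated
  # inodes are approaching their maximum, as reported by 'mmlsfileset -L';
  # assumes MaxInodes and AllocInodes are the last two columns (no comment set)
  fs=gpfs0          # placeholder device name
  threshold=90      # act when allocated >= 90% of maximum
  grow_pct=10       # grow the limit by 10%, as in Chad's description

  mmlsfileset "$fs" -L | awk 'NR>2 {print $1, $(NF-1), $NF}' | \
  while read -r name max alloc; do
      # skip header lines, dependent filesets and anything that did not parse
      case "$max$alloc" in *[!0-9]*|"") continue ;; esac
      [ "$max" -eq 0 ] && continue
      pct=$(( alloc * 100 / max ))
      if [ "$pct" -ge "$threshold" ]; then
          newmax=$(( max + max * grow_pct / 100 ))
          echo "$name: $alloc of $max inodes allocated (${pct}%), suggest --inode-limit $newmax"
          # uncomment to grow the fileset's inode space automatically:
          # mmchfileset "$fs" "$name" --inode-limit "$newmax"
      fi
  done

The mmchfileset line is left commented out so the loop only reports; raising an inode limit consumes metadata space, so it is worth keeping a human in the loop until the parsing has been checked against your own mmlsfileset output.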
If one is nearing capacity(tunable threshold in the code), it automatically expands it by a set amount(also tunable). We add 10% currently. This also works on file systems that have no filesets as it appears as the root fileset. I can check with my boss to see if its ok to post it if you want it. Its about 40 lines of perl. Chad -- Chad Kerner, Systems Engineer Storage Enabling Technologies National Center for Supercomputing Applications On Fri, Aug 07, 2015 at 09:30:21AM -0700, Rei Lee wrote: > We have the same problem when we started using independent fileset. I think > this should be a RFE item that IBM should provide a tool similar to 'mmdf > -F' to show the number of free/used inodes for an independent fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > >I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > > >We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > > >mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > > >The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > > >Does anyone have a script to do this already? > > > >Surely there is a better way? > > > >Thanks > > > >Simon > >_______________________________________________ > >gpfsug-discuss mailing list > >gpfsug-discuss at gpfsug.org > >http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From makaplan at us.ibm.com Fri Aug 7 21:12:05 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 16:12:05 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. 
> > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From martin.gasthuber at desy.de Fri Aug 7 21:41:08 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Fri, 7 Aug 2015 22:41:08 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) best regards, Martin > On 3 Aug, 2015, at 19:05, Marc A Kaplan wrote: > > Reality check on GPFS ACLs. > > I think it would be helpful to understand how ACLs are implemented in GPFS - > > - All ACLs for a file sytem are stored as records in a special file. > - Each inode that has an ACL (more than just the classic Posix mode bits) has a non-NULL offset to the governing ACL in the special acl file. > - Yes, inodes with identical ACLs will have the same ACL offset value. Hence in many (most?) use cases, the ACL file can be relatively small - > it's size is proportional to the number of unique ACLs, not the number of files. > > And how and what mmapplypolicy can do for you - > > mmapplypolicy can rapidly scan the directories and inodes of a file system. > This scanning bypasses most locking regimes and takes advantage of both parallel processing > and streaming full tracks of inodes. So it is good at finding files (inodes) that satifsy criteria that can > be described by an SQL expression over the attributes stored in the inode. > > BUT to change the attributes of any particular file we must use APIs and code that respect all required locks, > log changes, etc, etc. > > Those changes can be "driven" by the execution phase of mmapplypolicy, in parallel - but overheads are significantly higher per file, > than during the scanning phases of operation. > > NOW to the problem at hand. It might be possible to improve ACL updates somewhat by writing a command that processes > multiple files at once, still using the same APIs used by the mmputacl command. > > Hmmm.... it wouldn't be very hard for GPFS development team to modify the mmputacl command to accept a list of files... > I see that the Linux command setfacl does accept multiple files in its argument list. 
> > Finally and not officially supported nor promised nor especially efficient .... try getAcl() as a GPFS SQL policy function._______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From rclee at lbl.gov Fri Aug 7 21:44:23 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 13:44:23 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <55C518A7.6020605@lbl.gov> We have tried that command but it took a very long time like it was hanging so I killed the command before it finished. I was not sure if it was a bug in early 4.1.0 software but I did not open a PMR. I just ran the command again on a quiet file system and it has been 5 minutes and the command is still not showing any output. 'mmdf -F' returns very fast. 'mmlsfileset -l' does not report the number of free inodes. Rei On 8/7/15 1:12 PM, Marc A Kaplan wrote: > Try > > mmlsfileset filesystem_name -i > > > Marc A Kaplan > > > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > Sent by: gpfsug-discuss-bounces at gpfsug.org > ------------------------------------------------------------------------ > > > > > Hmm. I'll create an RFE next week then. (just in case someone comes > back with a magic flag we don't know about!). > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org > [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] > Sent: 07 August 2015 17:30 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > > We have the same problem when we started using independent fileset. I > think this should be a RFE item that IBM should provide a tool similar > to 'mmdf -F' to show the number of free/used inodes for an independent > fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) > wrote: > > I was just wondering if anyone had a way to return the number of > free/used inodes for an independent fileset and all its children. > > > > We recently had a case where we were unable to create new files in a > child file-set, and it turns out the independent parent had run out of > inodes. > > > > mmsf however only lists the inodes used directly in the parent > fileset, I.e. About 8 as that was the number of child filesets. > > > > The suggestion from IBM support is that we use mmdf and then add up > the numbers from all the child filesets to workout how many are > free/used in the independent fileset. > > > > Does anyone have a script to do this already? > > > > Surely there is a better way? 
> > > > Thanks > > > > Simon > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From bevans at pixitmedia.com Fri Aug 7 21:44:44 2015 From: bevans at pixitmedia.com (Barry Evans) Date: Fri, 7 Aug 2015 21:44:44 +0100 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <-2676389644758800053@unknownmsgid> -i will give you the exact used number but... Avoid running it during peak usage on most setups. It's pretty heavy, like running a -d on lssnapshot. Your best bet is from earlier posts: '-L' gives you the max and alloc. If they match, you know you're in bother soon. It's not accurate, of course, but prevention is typically the best medicine in this case. Cheers, Barry ArcaStream/Pixit On 7 Aug 2015, at 21:12, Marc A Kaplan wrote: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ------------------------------ Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? 
> > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Fri Aug 7 22:21:28 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 17:21:28 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: You asked: "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) " Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Aug 10 08:13:43 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 10 Aug 2015 07:13:43 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: Hi Marc, Thanks for this. Just to clarify the output when it mentions allocated inodes, does that mean the number used or the number allocated? I.e. If I pre-create a bunch of inodes will they appear as allocated? Or is that only when they are used by a file etc? Thanks Simon From: Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Friday, 7 August 2015 21:12 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Independent fileset free inodes Try mmlsfileset filesystem_name -i [Marc A Kaplan] From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hmm. I'll create an RFE next week then. 
(just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00002.gif Type: image/gif Size: 21994 bytes Desc: ATT00002.gif URL: From makaplan at us.ibm.com Mon Aug 10 19:14:58 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 10 Aug 2015 14:14:58 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: mmlsfileset xxx -i 1. Yes it is slow. I don't know the reasons. Perhaps someone more familiar with the implementation can comment. It's surprising to me that it is sooo much slower than mmdf EVEN ON a filesystem that only has the root fileset! 2. used: how many inodes (files) currently exist in the given fileset or fileset allocated: number of inodes "pre"allocated in the (special) file of all inodes. maximum: number of inodes that GPFS might allocate on demand, with current --inode-limit settings from mmchfileset and mmchfs. -------------- next part -------------- An HTML attachment was scrubbed... URL: From taylorm at us.ibm.com Mon Aug 10 22:23:02 2015 From: taylorm at us.ibm.com (Michael L Taylor) Date: Mon, 10 Aug 2015 14:23:02 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: <201508102123.t7ALNZDV012260@d01av01.pok.ibm.com> This capability is available in Storage Insights, which is a Software as a Service (SaaS) storage management solution. 
You can play with a live demo and try a free 30 day trial here: https://www.ibmserviceengage.com/storage-insights/learn I could also provide a screen shot of what IBM Spectrum Control looks like when managing Spectrum Scale and how you can easily see fileset relationships and used space and inodes per fileset if interested. -------------- next part -------------- An HTML attachment was scrubbed... URL: From GARWOODM at uk.ibm.com Tue Aug 11 17:05:52 2015 From: GARWOODM at uk.ibm.com (Michael Garwood7) Date: Tue, 11 Aug 2015 16:05:52 +0000 Subject: [gpfsug-discuss] Developer Works forum post on Spectrum Scale and Spark work Message-ID: <201508111606.t7BG6Vt6005368@d06av01.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Tue Aug 11 17:53:32 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Tue, 11 Aug 2015 18:53:32 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, this was meant to be more a joke than a 'wish' - but it would be interesting for us (with the case of several millions of files having the same ACL) if there are ways/plans to treat ACLs more referenced from each of these files and having a mechanism to treat all of them in a single operation. -- Martin > On 7 Aug, 2015, at 23:21, Marc A Kaplan wrote: > > You asked: > > "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-)" > > > Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? > Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Tue Aug 11 18:59:08 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 13:59:08 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: We (myself and a few other GPFS people) are reading this and considering... Of course we can't promise anything here. I can see some ways to improve and make easier the job of finding and changing the ACLs of many files. But I think whatever we end up doing will still be, at best, a matter of changing every inode, rather than changing on ACL that all those inodes happen to point to. IOW, as a lower bound, we're talking at least as much overhead as doing chmod on the chosen files. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamiedavis at us.ibm.com Tue Aug 11 19:11:26 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Tue, 11 Aug 2015 18:11:26 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , Message-ID: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue Aug 11 20:45:56 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 15:45:56 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 21:56:34 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 21:56:34 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <55CA6182.9010507@buzzard.me.uk> On 11/08/15 19:11, James Davis wrote: > If trying the naive approach, a la > find /fs ... -exec changeMyACL {} \; > or > /usr/lpp/mmfs/samples/ilm/mmfind /fs ... -exec changeMyACL {} \; > #shameless plug for my mmfind tool, available in the latest release of > GPFS. See the associated README. > I think the cost will be prohibitive. I believe a relatively strong > internal lock is required to do ACL changes, and consequently I think > the performance of modifying the ACL on a bunch of files will be painful > at best. I am not sure what it is like in 4.x but up to 3.5 the mmputacl was some sort of abomination of a command. It could only set the ACL for a single file and if you wanted to edit rather than set you had to call mmgetacl first, manipulate the text file output and then feed that into mmputacl. So if you need to set the ACL's on a directory hierarchy over loads of files then mmputacl is going to be exec'd potentially millions of times, which is a massive overhead just there. If only because mmputacl is a ksh wrapper around tsputacl. Execution time doing this was god dam awful. So I instead wrote a simple C program that used the ntfw library call and the gpfs API to set the ACL's it was way way faster. Of course I was setting a very limited number of different ACL's that where required to support a handful of Samba share types after the data had been copied onto a GPFS file system. As I said previously what is needed is an "mm" version of the FreeBSD setfacl command http://www.freebsd.org/cgi/man.cgi?format=html&query=setfacl(1) That has the -R/--recursive option of the Linux setfacl command which uses the fast inode scanning GPFS API. You want to be able to type something like mmsetfacl -mR g:www:rpaRc::allow foo What you don't want to be doing is calling the abomination of a command that is mmputacl. Frankly whoever is responsible for that command needs taking out the back and given a good kicking. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
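Pulling the suggestions in this thread together, one pattern that works today is to let the policy engine do the fast tree walk and then fan the slow per-file ACL call out over a handful of workers. This is a sketch only, with made-up names and paths; it does not avoid the per-file cost described above, it merely overlaps it, and the exact name and record format of the deferred list file should be checked on your release:

  # capture the ACL to be propagated from one file that already has it
  mmgetacl /gpfs/projects/template_file > /tmp/wanted.acl

  # let mmapplypolicy build the candidate list
  cat > /tmp/aclfiles.pol <<'EOF'
  RULE EXTERNAL LIST 'acl' EXEC ''
  RULE 'pick' LIST 'acl' WHERE PATH_NAME LIKE '/gpfs/projects/%'
  EOF
  mmapplypolicy gpfs0 -P /tmp/aclfiles.pol -I defer -f /tmp/aclscan

  # each record is assumed to be '<inode> <gen> <snapid> -- <path>'
  # (look for a *list.acl file under the -f prefix); strip the first four
  # fields and run 8 mmputacl workers in parallel
  awk '{ $1=$2=$3=$4=""; sub(/^ +/,""); print }' /tmp/aclscan.list.acl | \
      xargs -d '\n' -P 8 -n 1 mmputacl -i /tmp/wanted.acl

mmfind from samples/ilm (mentioned above) can produce an equivalent candidate list with familiar find syntax, and plain setfacl works the same way for POSIX ACLs, but none of this removes the per-file locking overhead; it just keeps several of those operations in flight at once.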
From makaplan at us.ibm.com Tue Aug 11 23:11:24 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 18:11:24 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA6182.9010507@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: On Linux you are free to use setfacl and getfacl commands on GPFS files. Works for me. As you say, at least you can avoid the overhead of shell interpretation and forking and whatnot for each file. Or use the APIs, see /usr/include/sys/acl.h. May need to install libacl-devel package and co. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 23:27:13 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 23:27:13 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: <55CA76C1.4050109@buzzard.me.uk> On 11/08/15 23:11, Marc A Kaplan wrote: > On Linux you are free to use setfacl and getfacl commands on GPFS files. > Works for me. Really, for NFSv4 ACL's? Given the RichACL kernel patches are only carried by SuSE I somewhat doubt that you can. http://www.bestbits.at/richacl/ People what to set NFSv4 ACL's on GPFS because when used with vfs_gpfs you can get Windows server/NTFS like rich permissions on your Windows SMB clients. You don't get that with Posix ACL's. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From usa-principal at gpfsug.org Tue Aug 11 23:36:11 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Tue, 11 Aug 2015 18:36:11 -0400 Subject: [gpfsug-discuss] Additional Details for Fall 2015 GPFS UG Meet Up in NYC Message-ID: <7d3395cb2575576c30ba55919124e44d@webmail.gpfsug.org> Hello, We are working on some additional information regarding the proposed NYC meet up. Below is the draft agenda for the "Meet the Developers" session. We are still working on closing on an exact date, and will communicate that soon --targeting September or October. Please e-mail Janet Ellsworth (janetell at us.ibm.com) if you are interested in attending. Janet is coordinating the logistics of the event. ? IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. ? IBM developer to demo future Graphical User Interface ? Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this !) ? Open Q&A with the development team Thoughts? Ideas? Best, Kristy GPFS UG - USA Principal PS - I believe we're still looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. 
From chair at gpfsug.org Wed Aug 12 10:20:40 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Wed, 12 Aug 2015 10:20:40 +0100 Subject: [gpfsug-discuss] USA Co-Principal Message-ID: Hi All, We only had 1 self nomination for the co-principal of the USA side of the group. I've very much like to thank Bob Oesterlin for nominating himself to help Kristy with the USA side of things. I've spoken a few times with Bob "off-list" and he's helped me out with a few bits and pieces. As you may have seen, Kristy has been posting from usa-principal at gpfsug.org, I'll sort another address out for the co-principal role today. Both Kristy and Bob seem determined to get the USA group off the ground and I wish them every success with this. Simon Bob's profile follows: LinkedIn Profile: https://www.linkedin.com/in/boboesterlin Short Profile: I have over 15 years experience with GPFS. Prior to 2013 I was with IBM and wa actively involved with developing solutions for customers using GPFS both inside and outside IBM. Prior to my work with GPFS, I was active in the AFS and OpenAFS community where I served as one of founding Elder members of that group. I am well know inside IBM and have worked to maintain my contacts with development. After 2013, I joined Nuance Communications where I am the Sr Storage Engineer for the HPC grid. I have been active in the GPFS DeveloperWorks Forum and the mailing list, presented multiple times at IBM Edge and IBM Interconnect. I'm active in multiple IBM Beta programs, providing active feedback on new products and future directions. For the user group, my vision is to build an active user community where we can share expertise and skills to help each other. I'd also like to see this group be more active in shaping the future direction of GPFS. I would also like to foster broader co-operation and discussion with users and administrators of other clustered file systems. (Lustre and OpenAFS) From makaplan at us.ibm.com Wed Aug 12 15:43:03 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 12 Aug 2015 10:43:03 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA76C1.4050109@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ross.keeping at uk.ibm.com Wed Aug 12 15:44:38 2015 From: ross.keeping at uk.ibm.com (Ross Keeping3) Date: Wed, 12 Aug 2015 15:44:38 +0100 Subject: [gpfsug-discuss] Q4 Meet the devs location? Message-ID: Hey I was discussing with Simon and Claire where and when to run our Q4 meet the dev session. We'd like to take the next sessions up towards Scotland to give our Edinburgh/Dundee users a chance to participate sometime in November (around the 4.2 release date). 
I'm keen to hear from people who would be interested in attending an event in or near Scotland and is there anyone who can offer up a small meeting space for the day? Best regards, Ross Keeping IBM Spectrum Scale - Development Manager, People Manager IBM Systems UK - Manchester Development Lab Phone: (+44 161) 8362381-Line: 37642381 E-mail: ross.keeping at uk.ibm.com 3rd Floor, Maybrook House Manchester, M3 2EG United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Wed Aug 12 15:49:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 12 Aug 2015 14:49:27 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk>, Message-ID: I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 12 August 2015 15:43 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] fast ACL alter solution On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 From jonathan at buzzard.me.uk Wed Aug 12 17:29:00 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:29:00 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: <1439396940.3856.4.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 10:43 -0400, Marc A Kaplan wrote: > On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work > fine for me. > Yes they do, but they only set POSIX ACL's, and well most people are wanting to set NFSv4 ACL's so the getfacl and setfacl commands are of no use. > nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today > not at all, at least not for me ;-( Yep they only work against an NFSv4 mounted file system with NFSv4 ACL's. So if you NFSv4 exported a GPFS file system from an AIX node and mounted it on a Linux node that would work for you. It might also work if you NFSv4 exported a GPFS file system using the userspace ganesha NFS server with an appropriate VFS backend for GPFS and mounted on Linux https://github.com/nfs-ganesha/nfs-ganesha However last time I checked such a GPFS VFS backend for ganesha was still under development. 
The RichACL stuff might also in theory work except it is not in mainline kernels and there is certainly no advertised support by IBM for GPFS using it. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jonathan at buzzard.me.uk Wed Aug 12 17:35:55 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:35:55 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> , Message-ID: <1439397355.3856.11.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 14:49 +0000, Simon Thompson (Research Computing - IT Services) wrote: > I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? > The options for ACL's on GPFS are POSIX, NFSv4, all which is mixed NFSv4/POSIX and finally Samba. The first two are self explanatory. The mixed mode is best given a wide berth in my opinion. The fourth is well lets say "undocumented" last time I checked. You can set it, and it shows up when you query the file system but what it does I can only speculate. Take a look at the Korn shell of mmchfs if you doubt it exists. Try it out on a test file system with mmchfs -k samba My guess though I have never verified it, is that it changes the schematics of the NFSv4 ACL's to more closely match those of NTFS ACL's. A bit like some of the other GPFS settings you can fiddle with to make GPFS behave more like an NTFS file system. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From C.J.Walker at qmul.ac.uk Thu Aug 13 15:23:07 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Thu, 13 Aug 2015 16:23:07 +0200 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Message-ID: <55CCA84B.1080600@qmul.ac.uk> I've set up a couple of VM hosts to export some of its GPFS filesystem via NFS to machines on that VM host[1,2]. Is live migration of VMs likely to work? Live migration isn't a hard requirement, but if it will work, it could make our life easier. Chris [1] AIUI, this is explicitly permitted by the licencing FAQ. [2] For those wondering why we are doing this, it's that some users want docker - and they can probably easily escape to become root on the VM. Doing it this way permits us (we hope) to only export certain bits of the GPFS filesystem. From S.J.Thompson at bham.ac.uk Thu Aug 13 15:32:18 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 13 Aug 2015 14:32:18 +0000 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55CCA84B.1080600@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. 
Simon From dhildeb at us.ibm.com Fri Aug 14 18:54:59 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 14 Aug 2015 10:54:59 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: Thanks for the replies Simon... Chris, are you using -v to give the container access to the nfs subdir (and hence to a gpfs subdir) (and hence achieve a level of multi-tenancy)? Even without containers, I wonder if this could allow users to run their own VMs as root as well...and preventing them from becoming root on gpfs... I'd love for you to share your experience (mgmt and perf) with this architecture once you get it up and running. Some side benefits of this architecture that we have been thinking about as well is that it allows both the containers and VMs to be somewhat ephemeral, while the gpfs continues to run in the hypervisor... To ensure VMotion works relatively smoothly, just ensure each VM is given a hostname to mount that always routes back to the localhost nfs server on each machine...and I think things should work relatively smoothly. Note you'll need to maintain the same set of nfs exports across the entire cluster as well, so taht when a VM moves to another machine it immediately has an available export to mount. Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/13/2015 07:33 AM Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Sent by: gpfsug-discuss-bounces at gpfsug.org >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Mon Aug 17 13:50:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 17 Aug 2015 12:50:17 +0000 Subject: [gpfsug-discuss] Metadata compression Message-ID: <2D1E2C5B-499D-46D3-AC27-765E3B40E340@nuance.com> Anyone have any practical experience here, especially using Flash, compressing GPFS metadata? IBM points out that they specifically DON?T support it on there devices (SVC/V9000/StoreWize) Spectrum Scale FAQ: https://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html?lang=en (look for the word compressed) But ? I could not find any blanket statements that it?s not supported outright. 
They don?t mention anything about data, and since the default for GPFS is mixing data and metadata on the same LUNs you?re more than likely compressing the metadata as well. :-) Also, no statements that you must split metadata from data when using compression. Bob Oesterlin Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Wed Aug 19 11:53:39 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Wed, 19 Aug 2015 12:53:39 +0200 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: References: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Hi Marc, maybe a stupid question - is it expected that the 4.1.1 mmfind set of tools also works on a 4.1.0.8 environment ? -- Martin > On 11 Aug, 2015, at 21:45, Marc A Kaplan wrote: > > The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. > > If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Wed Aug 19 14:18:14 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 19 Aug 2015 09:18:14 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> References: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Message-ID: mmfind in 4.1.1 depends on some new functionality added to mmapplypolicy in 4.1.1. Depending which find predicates you happen to use, the new functions in mmapplypolicy will be invoked (or not.) If you'd like to try it out - go ahead - it either works or it doesn't. If it doesn't you can also try using the new mmapplypolicy script and the new tsapolicy binary on the old GPFS system. BUT of course that's not supported. AFAIK, nothing bad will happen, but it's not supported. mmfind in 4.1.1 ships as a "sample", so it is not completely supported, but we will take bug reports and constructive criticism seriously, when you run it on a GPFS cluster that has been completely upgraded to 4.1.1. (Please don't complain that it does not work on a back level system.) For testing this kind of functionality, GPFS can be run on a single node or VM. You can emulate an NSD volume by "giving" mmcrnsd a GB sized file (or larger) instead of a block device. (Also not supported and not very swift but it works.) So there's no need to even "provision" a disk. --marc of GPFS -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamiedavis at us.ibm.com Wed Aug 19 14:25:35 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Wed, 19 Aug 2015 13:25:35 +0000 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com><09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Message-ID: <201508191343.t7JDhlaU022402@d01av04.pok.ibm.com> An HTML attachment was scrubbed... 
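To make the single-node test suggestion above concrete, an (equally unsupported) sketch might look like this; the backing file, stanza path and file system name are invented for illustration:

# back the NSD with an ordinary 2 GB file instead of a block device
dd if=/dev/zero of=/var/tmp/nsd01.img bs=1M count=2048

cat > /tmp/test_nsd.stanza <<EOF
%nsd: device=/var/tmp/nsd01.img nsd=testnsd01 usage=dataAndMetadata failureGroup=1
EOF

mmcrnsd -F /tmp/test_nsd.stanza
mmcrfs testfs -F /tmp/test_nsd.stanza -A no
mmmount testfs -a

That is enough to exercise mmfind and mmapplypolicy on a single VM without provisioning a real disk, but, as stated above, it is not a supported configuration.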
URL: From usa-principal at gpfsug.org Thu Aug 20 14:23:41 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Thu, 20 Aug 2015 09:23:41 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Message-ID: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal From bbanister at jumptrading.com Thu Aug 20 16:42:09 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 20 Aug 2015 15:42:09 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. From Kevin.Buterbaugh at Vanderbilt.Edu Thu Aug 20 17:37:37 2015 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 20 Aug 2015 16:37:37 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. 
I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Aug 20 19:09:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 20 Aug 2015 18:09:27 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com>, <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Message-ID: With my uk hat on, id suggest its also important to factor in IBM's ability to ship people in as well. I know last year there was an IBM GPFS event on the Monday at SC as I spoke there, I'm assuming the GPFS UG will really be an extended version of that, and there were quite a a lot in the audience for that. I know I made some really good contacts with both users and IBM at the event (and I encourage people to speak as its a great way of meeting people!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 20 August 2015 17:37 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From dhildeb at us.ibm.com Thu Aug 20 17:12:09 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 20 Aug 2015 09:12:09 -0700 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center From: Bryan Banister To: gpfsug main discussion list Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [ mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kallbac at iu.edu Thu Aug 20 20:00:21 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 20 Aug 2015 19:00:21 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 12:26:47 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 11:26:47 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. The docs are a little lacking in detail of how you create NSD disks on clients, I've tried using: %nsd: device=sdb2 nsd=cl0901u17_hawc_sdb2 servers=cl0901u17 pool=system.log failureGroup=90117 (and also with usage=metadataOnly as well), however mmcrsnd -F tells me "mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license designation" Which is correct as its a client system, though HAWC is supposed to be able to run on client systems. I know for LROC you have to set usage=localCache, is there a new value for using HAWC? I'm also a little unclear about failureGroups for this. The docs suggest setting the HAWC to be replicated for client systems, so I guess that means putting each client node into its own failure group? Thanks Simon From Robert.Oesterlin at nuance.com Wed Aug 26 12:46:59 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 11:46:59 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... 
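For reference, a minimal sketch of the LROC variant referred to above ("usage=localCache"), with the device and NSD names as placeholders and the server set to the client node that owns the SSD:

cat > /tmp/lroc.stanza <<EOF
%nsd: device=sdb1 nsd=cl0901u17_lroc_sdb1 servers=cl0901u17 usage=localCache
EOF
mmcrnsd -F /tmp/lroc.stanza

# sanity check on that node that the cache device is attached and serving
mmdiag --lroc

Given the APAR mentioned above, it is worth repeating the mmdiag check some days later to make sure the device has not been quietly dropped.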
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:23:36 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:23:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 12:46 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Aug 26 13:27:36 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 12:27:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Wed Aug 26 13:50:44 2015 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 26 Aug 2015 12:50:44 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: , <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <201D6001C896B846A9CFC2E841986AC1454FFB0B@mailnycmb2a.winmail.deshaw.com> There is a more severe issue with LROC enabled in saveInodePtrs() which results in segfaults and loss of acknowledged writes, which has caused us to roll back all LROC for now. We are testing an efix (ref Defect 970773, IV76155) now which addresses this. But I would advise against running with LROC/HAWC in production without this fix. We experienced this on 4.1.0-6, but had the efix built against 4.1.1-1, so the exposure seems likely to be all 4.1 versions. Thx Paul Sent with Good (www.good.com) ________________________________ From: gpfsug-discuss-bounces at gpfsug.org on behalf of Oesterlin, Robert Sent: Wednesday, August 26, 2015 8:27:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:57:56 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:57:56 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. 
> >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss
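(For reference while reading the HAWC thread above: here is the stanza Simon pasted, laid out as it would sit in a stanza file, together with the follow-on steps described in the 4.1.1 HAWC documentation. Treat this as a sketch only -- the file system name "gpfs01" and file name "hawc.stanza" are placeholders, the --write-cache-threshold option is quoted from memory of the 4.1.1 docs, and the real sticking point in the thread, whether a client-licensed node may host a system.log NSD at all, is not answered here.)

# hawc.stanza -- one stanza per client device holding a log pool NSD.
# Simon tried this both with and without usage=metadataOnly; the docs
# suggest replicating the log for clients, hence one failureGroup per node.
%nsd: device=sdb2
  nsd=cl0901u17_hawc_sdb2
  servers=cl0901u17
  usage=metadataOnly
  pool=system.log
  failureGroup=90117

followed by something like:

mmcrnsd -F hawc.stanza
mmchfs gpfs01 --write-cache-threshold 64K

where a non-zero write cache threshold (up to 64K in 4.1.1) is what actually switches HAWC on for the file system.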
From kallbac at iu.edu Wed Aug 5 03:56:32 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Tue, 4 Aug 2015 22:56:32 -0400 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> Message-ID: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Hello, Thanks Simon and all for moving the USA-based group forward. You've got a great user group in the UK and am grateful it's being extended. I'm looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user lead session --sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) wrote: > > As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. > > We've agreed that we'll create the title "USA Principal" for the lead of the USA based group.
The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. > > Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. > > Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). > > I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. > > Short Bio from Kristy: > > "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. > > LinkedIn Profile: www.linkedin.com/in/kristykallbackrose > " > > We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: > > A paragraph covering their credentials; > A paragraph covering what they would bring to the group; > A paragraph setting out their vision for the group for the next two years. > > Note that this should be a GPFS customer based in the USA. > > If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. > > Kristy will be following up later with some announcements about the USA group activities. > > Simon > GPFS UG Chair > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 495 bytes Desc: Message signed with OpenPGP using GPGMail URL: From Robert.Oesterlin at nuance.com Wed Aug 5 12:12:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 5 Aug 2015 11:12:17 +0000 Subject: [gpfsug-discuss] GPFS UG User Group@USA In-Reply-To: <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> References: <3cc5a596e56553b7eba37e6a1d9387c1@webmail.gpfsug.org> <39CADFDE-BC5F-4360-AC49-1AC8C59DEF8E@iu.edu> Message-ID: <315FAEF7-DEC0-4252-BA3B-D318DE05933C@nuance.com> Hi Kristy Thanks for stepping up to the duties for the USA based user group! Getting the group organized is going to be a challenge and I?m happy to help out where I can. 
Regarding some of the planning for SC15, I wonder if you could drop me a note off the mailing list to discuss this, since I have been working with some others at IBM on a BOF proposal for SC15 and these two items definitely overlap. My email is robert.oesterlin at nuance.com (probably end up regretting putting that out on the mailing list at some point ? sigh) Bob Oesterlin Sr Storage Engineer, Nuance Communications From: > on behalf of Kristy Kallback-Rose Reply-To: gpfsug main discussion list Date: Tuesday, August 4, 2015 at 9:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS UG User Group at USA Hello, Thanks Simon and all for moving the USA-based group forward. You?ve got a great user group in the UK and am grateful it?s being extended. I?m looking forward to increased opportunities for the US user community to interact with GPFS developers and for us to interact with each other as users of GPFS as well. Having said that, here are some initial plans: We propose the first "Meet the Developers" session be in New York City at the IBM 590 Madison office during 2H of September (3-4 hours and lunch will be provided). [Personally, I want to avoid the week of September 28th which is the HPSS Users Forum. Let us know of any date preferences you have.] The rough agenda will include a session by a Spectrum Scale development architect followed by a demo of one of the upcoming functions. We would also like to include a user lead session --sharing their experiences or use case scenarios with Spectrum Scale. For this go round, those who are interested in attending this event should write to Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Please also chime in if you are interested in sharing an experience or use case scenario for this event or a future event. Lastly, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy On Aug 4, 2015, at 3:32 AM, GPFS UG Chair (Simon Thompson) > wrote: As many of you know, there has been some interest in creating a USA based section of the group. Theres's been bits of discussion off list about this with a couple of interested parties and IBM. Rather than form a separate entity, we feel it would be good to have the USA group related to the @gpfsug.org mailing list as our two communities are likely to be of interest to the discussions. We've agreed that we'll create the title "USA Principal" for the lead of the USA based group. The title "Chair" will remain as a member of the UK group where the GPFS UG group was initially founded. Its proposed that we'd operate the title Principal in a similar manner to the UK chair post. The Principal would take the lead in coordinating USA based events with a local IBM based representative, we'd also call for election of the Principal every two years. Given the size of the USA in terms of geography, we'll have to see how it works out and the Principal will review the USA group in 6 months time. We're planning also to create a co-principal (see details below). I'd like to thank Kristy Kallback-Rose from Indiana University for taking this forward with IBM behind the scenes, and she is nominating herself for the inaugural Principal of the USA based group. 
Unless we hear otherwise by 10th August, we'll assume the group is OK with this as a way of us moving the USA based group forward. Short Bio from Kristy: "My experience with GPFS began about 3 years ago. I manage a storage team that uses GPFS to provide Home Directories for Indiana University?s HPC systems and also desktop access via sftp and Samba (with Active Directory integration). Prior to that I was a sysadmin in the same group providing a similar desktop service via OpenAFS and archival storage via High Performance Storage System (HPSS). I have spoken at the HPSS Users Forum multiple times and have driven a community process we call ?Burning Issues? to allow the community to express and vote upon changes they would like to see in HPSS. I would like to be involved in similar community-driven processes in the GPFS arena including the GPFS Users Group. LinkedIn Profile: www.linkedin.com/in/kristykallbackrose " We're also looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. Kristy will be following up later with some announcements about the USA group activities. Simon GPFS UG Chair _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 5 20:23:45 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 5 Aug 2015 19:23:45 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: Just picking this topic back up. Does anyone have any comments/thoughts on these questions? Thanks Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Luke Raimbach [Luke.Raimbach at crick.ac.uk] Sent: 20 July 2015 08:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets Can I add to this list of questions? Apparently, one cannot set immutable, or append-only attributes on files / directories within an AFM cache. However, if I have an independent writer and set immutability at home, what does the AFM IW cache do about this? Or does this restriction just apply to entire filesets (which would make more sense)? Cheers, Luke. -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: 19 July 2015 11:45 To: gpfsug main discussion list Subject: [gpfsug-discuss] 4.1.1 immutable filesets I was wondering if anyone had looked at the immutable fileset features in 4.1.1? In particular I was looking at the iam compliant mode, but I've a couple of questions. * if I have an iam compliant fileset, and it contains immutable files or directories, can I still unlink and delete the filset? * will HSM work with immutable files? I.e. Can I migrate files to tape and restore them? 
The docs mention that extended attributes can be updated internally by dmapi, so I guess HSM might work? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 14:46:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 13:46:04 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets Message-ID: On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" wrote: >* if I have an iam compliant fileset, and it contains immutable files or >directories, can I still unlink and delete the filset? So just to answer my own questions here. (Actually I tried in non-compliant mode, rather than full compliance, but I figured this was the mode I actually need as I might need to reset the immutable time back earlier to allow me to delete something that shouldn't have gone in). Yes, I can both unlink and delete an immutable fileset which has immutable files which are non expired in it. >* will HSM work with immutable files? I.e. Can I migrate files to tape >and restore them? The docs mention that extended attributes can be >updated internally by dmapi, so I guess HSM might work? And yes, HSM files work. I created a file, made it immutable, backed up, migrated it: $ mmlsattr -L BHAM_DATASHARE_10.zip file name: BHAM_DATASHARE_10.zip metadata replication: 2 max 2 data replication: 2 max 2 immutable: yes appendOnly: no indefiniteRetention: no expiration Time: Fri Aug 7 14:45:00 2015 flags: storage pool name: tier2 fileset name: rds-projects-2015-thompssj-01 snapshot name: creation time: Fri Aug 7 14:38:30 2015 Windows attributes: ARCHIVE OFFLINE READONLY Encrypted: no I was then able to recall the file. Simon From wsawdon at us.ibm.com Fri Aug 7 16:13:31 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:13:31 -0700 Subject: [gpfsug-discuss] Hello Message-ID: Hello, Although I am new to this user group, I've worked on GPFS at IBM since before it was a product.! I am interested in hearing from the group about the features you like or don't like and of course, what features you would like to see. Wayne Sawdon STSM; IBM Research Manager | Cloud Data Management Phone: 1-408-927-1848 E-mail: wsawdon at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Aug 7 16:27:33 2015 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 7 Aug 2015 08:27:33 -0700 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: Message-ID: > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). 
> > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Aug 7 16:36:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:36:03 +0000 Subject: [gpfsug-discuss] 4.1.1 immutable filesets In-Reply-To: References: , Message-ID: I did only try in nc mode, so possibly if its fully compliant it wouldn't have let me delete the fileset. One other observation. As a user Id set the atime and chmod -w the file. Once it had expired, I was then unable to reset the atime into the future. (I could as root). I'm not sure what the expected behaviour should be, but I was sorta surprised that I could initially set the time as the user, but then not be able to extend even once it had expired. Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Wayne Sawdon [wsawdon at us.ibm.com] Sent: 07 August 2015 16:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.1.1 immutable filesets > On 05/08/2015 20:23, "Simon Thompson (Research Computing - IT Services)" > wrote: > > >* if I have an iam compliant fileset, and it contains immutable files or > >directories, can I still unlink and delete the filset? > > So just to answer my own questions here. (Actually I tried in > non-compliant mode, rather than full compliance, but I figured this was > the mode I actually need as I might need to reset the immutable time back > earlier to allow me to delete something that shouldn't have gone in). > > Yes, I can both unlink and delete an immutable fileset which has immutable > files which are non expired in it. > It was decided that deleting a fileset with compliant data is a "hole", but apparently it was not closed before the GA. The same rule should apply to unlinking the fileset. HSM on compliant data should be fine. I don't know what happens when you combine compliance and AFM, but I would suggest not mixing the two. -Wayne From S.J.Thompson at bham.ac.uk Fri Aug 7 16:56:17 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 15:56:17 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. Does anyone have a script to do this already? Surely there is a better way? 
Thanks Simon From rclee at lbl.gov Fri Aug 7 17:30:21 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 09:30:21 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: Message-ID: <55C4DD1D.7000402@lbl.gov> We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Aug 7 17:49:03 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 7 Aug 2015 16:49:03 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? 
> > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From ckerner at ncsa.uiuc.edu Fri Aug 7 17:41:14 2015 From: ckerner at ncsa.uiuc.edu (Chad Kerner) Date: Fri, 7 Aug 2015 11:41:14 -0500 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: <55C4DD1D.7000402@lbl.gov> References: <55C4DD1D.7000402@lbl.gov> Message-ID: <20150807164114.GA29652@logos.ncsa.illinois.edu> You can use the mmlsfileset DEVICE -L option to see the maxinodes and allocated inodes. I have a perl script that loops through all of our file systems every hour and scans for it. If one is nearing capacity(tunable threshold in the code), it automatically expands it by a set amount(also tunable). We add 10% currently. This also works on file systems that have no filesets as it appears as the root fileset. I can check with my boss to see if its ok to post it if you want it. Its about 40 lines of perl. Chad -- Chad Kerner, Systems Engineer Storage Enabling Technologies National Center for Supercomputing Applications On Fri, Aug 07, 2015 at 09:30:21AM -0700, Rei Lee wrote: > We have the same problem when we started using independent fileset. I think > this should be a RFE item that IBM should provide a tool similar to 'mmdf > -F' to show the number of free/used inodes for an independent fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > >I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > > >We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > > >mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > > >The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > > >Does anyone have a script to do this already? > > > >Surely there is a better way? > > > >Thanks > > > >Simon > >_______________________________________________ > >gpfsug-discuss mailing list > >gpfsug-discuss at gpfsug.org > >http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From makaplan at us.ibm.com Fri Aug 7 21:12:05 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 16:12:05 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: , <55C4DD1D.7000402@lbl.gov> Message-ID: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). 
Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From martin.gasthuber at desy.de Fri Aug 7 21:41:08 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Fri, 7 Aug 2015 22:41:08 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) best regards, Martin > On 3 Aug, 2015, at 19:05, Marc A Kaplan wrote: > > Reality check on GPFS ACLs. > > I think it would be helpful to understand how ACLs are implemented in GPFS - > > - All ACLs for a file sytem are stored as records in a special file. > - Each inode that has an ACL (more than just the classic Posix mode bits) has a non-NULL offset to the governing ACL in the special acl file. > - Yes, inodes with identical ACLs will have the same ACL offset value. Hence in many (most?) use cases, the ACL file can be relatively small - > it's size is proportional to the number of unique ACLs, not the number of files. > > And how and what mmapplypolicy can do for you - > > mmapplypolicy can rapidly scan the directories and inodes of a file system. 
> This scanning bypasses most locking regimes and takes advantage of both parallel processing > and streaming full tracks of inodes. So it is good at finding files (inodes) that satifsy criteria that can > be described by an SQL expression over the attributes stored in the inode. > > BUT to change the attributes of any particular file we must use APIs and code that respect all required locks, > log changes, etc, etc. > > Those changes can be "driven" by the execution phase of mmapplypolicy, in parallel - but overheads are significantly higher per file, > than during the scanning phases of operation. > > NOW to the problem at hand. It might be possible to improve ACL updates somewhat by writing a command that processes > multiple files at once, still using the same APIs used by the mmputacl command. > > Hmmm.... it wouldn't be very hard for GPFS development team to modify the mmputacl command to accept a list of files... > I see that the Linux command setfacl does accept multiple files in its argument list. > > Finally and not officially supported nor promised nor especially efficient .... try getAcl() as a GPFS SQL policy function._______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From rclee at lbl.gov Fri Aug 7 21:44:23 2015 From: rclee at lbl.gov (Rei Lee) Date: Fri, 7 Aug 2015 13:44:23 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <55C518A7.6020605@lbl.gov> We have tried that command but it took a very long time like it was hanging so I killed the command before it finished. I was not sure if it was a bug in early 4.1.0 software but I did not open a PMR. I just ran the command again on a quiet file system and it has been 5 minutes and the command is still not showing any output. 'mmdf -F' returns very fast. 'mmlsfileset -l' does not report the number of free inodes. Rei On 8/7/15 1:12 PM, Marc A Kaplan wrote: > Try > > mmlsfileset filesystem_name -i > > > Marc A Kaplan > > > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > Sent by: gpfsug-discuss-bounces at gpfsug.org > ------------------------------------------------------------------------ > > > > > Hmm. I'll create an RFE next week then. (just in case someone comes > back with a magic flag we don't know about!). > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org > [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] > Sent: 07 August 2015 17:30 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Independent fileset free inodes > > We have the same problem when we started using independent fileset. I > think this should be a RFE item that IBM should provide a tool similar > to 'mmdf -F' to show the number of free/used inodes for an independent > fileset. > > Rei > > On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) > wrote: > > I was just wondering if anyone had a way to return the number of > free/used inodes for an independent fileset and all its children. > > > > We recently had a case where we were unable to create new files in a > child file-set, and it turns out the independent parent had run out of > inodes. > > > > mmsf however only lists the inodes used directly in the parent > fileset, I.e. 
About 8 as that was the number of child filesets. > > > > The suggestion from IBM support is that we use mmdf and then add up > the numbers from all the child filesets to workout how many are > free/used in the independent fileset. > > > > Does anyone have a script to do this already? > > > > Surely there is a better way? > > > > Thanks > > > > Simon > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From bevans at pixitmedia.com Fri Aug 7 21:44:44 2015 From: bevans at pixitmedia.com (Barry Evans) Date: Fri, 7 Aug 2015 21:44:44 +0100 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: <-2676389644758800053@unknownmsgid> -i will give you the exact used number but... Avoid running it during peak usage on most setups. It's pretty heavy, like running a -d on lssnapshot. Your best bet is from earlier posts: '-L' gives you the max and alloc. If they match, you know you're in bother soon. It's not accurate, of course, but prevention is typically the best medicine in this case. Cheers, Barry ArcaStream/Pixit On 7 Aug 2015, at 21:12, Marc A Kaplan wrote: Try mmlsfileset filesystem_name -i From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ------------------------------ Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. 
> > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Fri Aug 7 22:21:28 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 7 Aug 2015 17:21:28 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: You asked: "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-) " Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Aug 10 08:13:43 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 10 Aug 2015 07:13:43 +0000 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: Hi Marc, Thanks for this. Just to clarify the output when it mentions allocated inodes, does that mean the number used or the number allocated? I.e. If I pre-create a bunch of inodes will they appear as allocated? Or is that only when they are used by a file etc? 
Thanks Simon From: Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Friday, 7 August 2015 21:12 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Independent fileset free inodes Try mmlsfileset filesystem_name -i [Marc A Kaplan] From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/07/2015 12:49 PM Subject: Re: [gpfsug-discuss] Independent fileset free inodes Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hmm. I'll create an RFE next week then. (just in case someone comes back with a magic flag we don't know about!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Rei Lee [rclee at lbl.gov] Sent: 07 August 2015 17:30 To: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Independent fileset free inodes We have the same problem when we started using independent fileset. I think this should be a RFE item that IBM should provide a tool similar to 'mmdf -F' to show the number of free/used inodes for an independent fileset. Rei On 8/7/15 8:56 AM, Simon Thompson (Research Computing - IT Services) wrote: > I was just wondering if anyone had a way to return the number of free/used inodes for an independent fileset and all its children. > > We recently had a case where we were unable to create new files in a child file-set, and it turns out the independent parent had run out of inodes. > > mmsf however only lists the inodes used directly in the parent fileset, I.e. About 8 as that was the number of child filesets. > > The suggestion from IBM support is that we use mmdf and then add up the numbers from all the child filesets to workout how many are free/used in the independent fileset. > > Does anyone have a script to do this already? > > Surely there is a better way? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00002.gif Type: image/gif Size: 21994 bytes Desc: ATT00002.gif URL: From makaplan at us.ibm.com Mon Aug 10 19:14:58 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 10 Aug 2015 14:14:58 -0400 Subject: [gpfsug-discuss] Independent fileset free inodes In-Reply-To: References: <55C4DD1D.7000402@lbl.gov> Message-ID: mmlsfileset xxx -i 1. Yes it is slow. I don't know the reasons. Perhaps someone more familiar with the implementation can comment. It's surprising to me that it is sooo much slower than mmdf EVEN ON a filesystem that only has the root fileset! 2. used: how many inodes (files) currently exist in the given fileset or fileset allocated: number of inodes "pre"allocated in the (special) file of all inodes. maximum: number of inodes that GPFS might allocate on demand, with current --inode-limit settings from mmchfileset and mmchfs. -------------- next part -------------- An HTML attachment was scrubbed... 
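(Chad's watcher script and Marc's explanation of used/allocated/maximum above suggest a cheap way to spot independent filesets that are about to run dry without paying the cost of -i: parse the MaxInodes and AllocInodes columns of plain mmlsfileset -L. A rough sketch follows, with its assumptions flagged: the two values are taken as the last two columns, which only holds when the Comment field is empty -- note the root fileset usually carries the comment "root fileset", so handle it separately or use fixed column numbers -- and the 90% threshold and the file system name "gpfs01" are arbitrary.)

mmlsfileset gpfs01 -L | awk '$2 ~ /^[0-9]+$/ && $(NF-1) > 0 && $NF / $(NF-1) >= 0.90 \
    { printf "%s: %s of %s inodes allocated\n", $1, $NF, $(NF-1) }'

A flagged fileset can then be grown with something like "mmchfileset gpfs01 <fileset> --inode-limit <newMax>", which is the auto-expansion step Chad describes.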
URL: From taylorm at us.ibm.com Mon Aug 10 22:23:02 2015 From: taylorm at us.ibm.com (Michael L Taylor) Date: Mon, 10 Aug 2015 14:23:02 -0700 Subject: [gpfsug-discuss] Independent fileset free inodes Message-ID: <201508102123.t7ALNZDV012260@d01av01.pok.ibm.com> This capability is available in Storage Insights, which is a Software as a Service (SaaS) storage management solution. You can play with a live demo and try a free 30 day trial here: https://www.ibmserviceengage.com/storage-insights/learn I could also provide a screen shot of what IBM Spectrum Control looks like when managing Spectrum Scale and how you can easily see fileset relationships and used space and inodes per fileset if interested. -------------- next part -------------- An HTML attachment was scrubbed... URL: From GARWOODM at uk.ibm.com Tue Aug 11 17:05:52 2015 From: GARWOODM at uk.ibm.com (Michael Garwood7) Date: Tue, 11 Aug 2015 16:05:52 +0000 Subject: [gpfsug-discuss] Developer Works forum post on Spectrum Scale and Spark work Message-ID: <201508111606.t7BG6Vt6005368@d06av01.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Tue Aug 11 17:53:32 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Tue, 11 Aug 2015 18:53:32 +0200 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: Hi Marc, this was meant to be more a joke than a 'wish' - but it would be interesting for us (with the case of several millions of files having the same ACL) if there are ways/plans to treat ACLs more referenced from each of these files and having a mechanism to treat all of them in a single operation. -- Martin > On 7 Aug, 2015, at 23:21, Marc A Kaplan wrote: > > You asked: > > "your description of the ACL implementation looks like each file has some sort of reference to the ACL - so multiple files could reference the same physical ACL data. In our case, we need to set a large number of files (and directories) to the same ACL (content) - could we take any benefit from these 'pseudo' referencing nature ? i.e. set ACL contents once and the job is done ;-). It looks that mmputacl can only access the ACL data 'through' a filename and not 'directly' - we just need the 'dirty' way ;-)" > > > Perhaps one could hack/patch that - but I can't recommend it. Would you routinely hack/patch the GPFS metadata that comprises a directory? > Consider replicated and logged metadata ... Consider you've corrupted the hash table of all ACL values... > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Tue Aug 11 18:59:08 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 13:59:08 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: Message-ID: We (myself and a few other GPFS people) are reading this and considering... Of course we can't promise anything here. I can see some ways to improve and make easier the job of finding and changing the ACLs of many files. But I think whatever we end up doing will still be, at best, a matter of changing every inode, rather than changing on ACL that all those inodes happen to point to. IOW, as a lower bound, we're talking at least as much overhead as doing chmod on the chosen files. -------------- next part -------------- An HTML attachment was scrubbed... 
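Until something better exists, the per-inode work can at least be spread out with mmapplypolicy's external-list machinery. A sketch under several assumptions: the template ACL has already been captured with mmgetacl, the EXEC interface hands the helper an operation keyword plus a batch file whose records end in " -- /full/path" (check the record format your mmapplypolicy level actually writes), and the -N/-m/-B option letters are quoted from memory, so verify them too. Node and path names are illustrative.

# capture the ACL you want to propagate, once
mmgetacl -o /root/template.acl /gpfs/fs0/projectX/reference-file

# helper invoked by mmapplypolicy for each batch of selected files
cat > /usr/local/bin/aclfix.sh <<'EOF'
#!/bin/bash
# $1 = phase (TEST or LIST), $2 = batch file list from mmapplypolicy.
# Assumed record format "inode gen snapid -- /path" - verify locally.
[ "$1" = "LIST" ] || exit 0
sed 's/^.* -- //' "$2" | while IFS= read -r f; do
    mmputacl -i /root/template.acl "$f"
done
exit 0
EOF
chmod +x /usr/local/bin/aclfix.sh

# policy: hand every file under the subtree to the helper above
cat > /tmp/aclfix.pol <<'EOF'
RULE 'aclxt' EXTERNAL LIST 'aclfix' EXEC '/usr/local/bin/aclfix.sh'
RULE 'pick'  LIST 'aclfix' WHERE PATH_NAME LIKE '/gpfs/fs0/projectX/%'
EOF

# fan the work out across nodes, several threads and batches per node
mmapplypolicy /gpfs/fs0/projectX -P /tmp/aclfix.pol -N node1,node2 -m 8 -B 1000

It is still one mmputacl per file inside the helper, so the lower bound described above stands; this only buys parallelism, not fewer inode updates.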
URL: From jamiedavis at us.ibm.com Tue Aug 11 19:11:26 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Tue, 11 Aug 2015 18:11:26 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , Message-ID: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Aug 11 20:45:56 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 15:45:56 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 21:56:34 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 21:56:34 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <55CA6182.9010507@buzzard.me.uk> On 11/08/15 19:11, James Davis wrote: > If trying the naive approach, a la > find /fs ... -exec changeMyACL {} \; > or > /usr/lpp/mmfs/samples/ilm/mmfind /fs ... -exec changeMyACL {} \; > #shameless plug for my mmfind tool, available in the latest release of > GPFS. See the associated README. > I think the cost will be prohibitive. I believe a relatively strong > internal lock is required to do ACL changes, and consequently I think > the performance of modifying the ACL on a bunch of files will be painful > at best. I am not sure what it is like in 4.x but up to 3.5 the mmputacl was some sort of abomination of a command. It could only set the ACL for a single file and if you wanted to edit rather than set you had to call mmgetacl first, manipulate the text file output and then feed that into mmputacl. So if you need to set the ACL's on a directory hierarchy over loads of files then mmputacl is going to be exec'd potentially millions of times, which is a massive overhead just there. If only because mmputacl is a ksh wrapper around tsputacl. Execution time doing this was god dam awful. So I instead wrote a simple C program that used the ntfw library call and the gpfs API to set the ACL's it was way way faster. Of course I was setting a very limited number of different ACL's that where required to support a handful of Samba share types after the data had been copied onto a GPFS file system. As I said previously what is needed is an "mm" version of the FreeBSD setfacl command http://www.freebsd.org/cgi/man.cgi?format=html&query=setfacl(1) That has the -R/--recursive option of the Linux setfacl command which uses the fast inode scanning GPFS API. You want to be able to type something like mmsetfacl -mR g:www:rpaRc::allow foo What you don't want to be doing is calling the abomination of a command that is mmputacl. Frankly whoever is responsible for that command needs taking out the back and given a good kicking. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
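Short of writing the nftw()/API tool Jonathan describes, the least-bad shell-only route seems to be: dump the ACL once, then push that saved copy onto a file list with mmputacl -i, several workers at a time. A sketch with made-up paths; it still forks one mmputacl (and its ksh wrapper) per file, so the performance complaint is only dented, not solved, and stamping an identical ACL onto files and directories alike ignores any inheritance entries you might want on the directories.

# 1. capture the NFSv4 ACL from a reference object, once
mmgetacl -o /root/template.acl /gpfs/fs0/projectX/reference-file

# 2. stamp it onto everything below a subtree, eight workers in parallel;
#    still one command invocation per object, so expect it to hurt at the
#    multi-million-file scale being discussed in this thread
find /gpfs/fs0/projectX -print0 | xargs -0 -n 1 -P 8 mmputacl -i /root/template.acl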
From makaplan at us.ibm.com Tue Aug 11 23:11:24 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 11 Aug 2015 18:11:24 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA6182.9010507@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: On Linux you are free to use setfacl and getfacl commands on GPFS files. Works for me. As you say, at least you can avoid the overhead of shell interpretation and forking and whatnot for each file. Or use the APIs, see /usr/include/sys/acl.h. May need to install libacl-devel package and co. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Tue Aug 11 23:27:13 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 11 Aug 2015 23:27:13 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> Message-ID: <55CA76C1.4050109@buzzard.me.uk> On 11/08/15 23:11, Marc A Kaplan wrote: > On Linux you are free to use setfacl and getfacl commands on GPFS files. > Works for me. Really, for NFSv4 ACL's? Given the RichACL kernel patches are only carried by SuSE I somewhat doubt that you can. http://www.bestbits.at/richacl/ People what to set NFSv4 ACL's on GPFS because when used with vfs_gpfs you can get Windows server/NTFS like rich permissions on your Windows SMB clients. You don't get that with Posix ACL's. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From usa-principal at gpfsug.org Tue Aug 11 23:36:11 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Tue, 11 Aug 2015 18:36:11 -0400 Subject: [gpfsug-discuss] Additional Details for Fall 2015 GPFS UG Meet Up in NYC Message-ID: <7d3395cb2575576c30ba55919124e44d@webmail.gpfsug.org> Hello, We are working on some additional information regarding the proposed NYC meet up. Below is the draft agenda for the "Meet the Developers" session. We are still working on closing on an exact date, and will communicate that soon --targeting September or October. Please e-mail Janet Ellsworth (janetell at us.ibm.com) if you are interested in attending. Janet is coordinating the logistics of the event. ? IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. ? IBM developer to demo future Graphical User Interface ? Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this !) ? Open Q&A with the development team Thoughts? Ideas? Best, Kristy GPFS UG - USA Principal PS - I believe we're still looking for someone to volunteer as co-principal, if this is something you are interested in, please can you provide a short Bio (to chair at gpfsug.org) including: A paragraph covering their credentials; A paragraph covering what they would bring to the group; A paragraph setting out their vision for the group for the next two years. Note that this should be a GPFS customer based in the USA. If we get more than 1 person, we'll run a mini election for the post. Please can you respond by 11th August 2015 if you are interested. 
From chair at gpfsug.org Wed Aug 12 10:20:40 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Wed, 12 Aug 2015 10:20:40 +0100 Subject: [gpfsug-discuss] USA Co-Principal Message-ID: Hi All, We only had 1 self nomination for the co-principal of the USA side of the group. I've very much like to thank Bob Oesterlin for nominating himself to help Kristy with the USA side of things. I've spoken a few times with Bob "off-list" and he's helped me out with a few bits and pieces. As you may have seen, Kristy has been posting from usa-principal at gpfsug.org, I'll sort another address out for the co-principal role today. Both Kristy and Bob seem determined to get the USA group off the ground and I wish them every success with this. Simon Bob's profile follows: LinkedIn Profile: https://www.linkedin.com/in/boboesterlin Short Profile: I have over 15 years experience with GPFS. Prior to 2013 I was with IBM and wa actively involved with developing solutions for customers using GPFS both inside and outside IBM. Prior to my work with GPFS, I was active in the AFS and OpenAFS community where I served as one of founding Elder members of that group. I am well know inside IBM and have worked to maintain my contacts with development. After 2013, I joined Nuance Communications where I am the Sr Storage Engineer for the HPC grid. I have been active in the GPFS DeveloperWorks Forum and the mailing list, presented multiple times at IBM Edge and IBM Interconnect. I'm active in multiple IBM Beta programs, providing active feedback on new products and future directions. For the user group, my vision is to build an active user community where we can share expertise and skills to help each other. I'd also like to see this group be more active in shaping the future direction of GPFS. I would also like to foster broader co-operation and discussion with users and administrators of other clustered file systems. (Lustre and OpenAFS) From makaplan at us.ibm.com Wed Aug 12 15:43:03 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 12 Aug 2015 10:43:03 -0400 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: <55CA76C1.4050109@buzzard.me.uk> References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ross.keeping at uk.ibm.com Wed Aug 12 15:44:38 2015 From: ross.keeping at uk.ibm.com (Ross Keeping3) Date: Wed, 12 Aug 2015 15:44:38 +0100 Subject: [gpfsug-discuss] Q4 Meet the devs location? Message-ID: Hey I was discussing with Simon and Claire where and when to run our Q4 meet the dev session. We'd like to take the next sessions up towards Scotland to give our Edinburgh/Dundee users a chance to participate sometime in November (around the 4.2 release date). 
I'm keen to hear from people who would be interested in attending an event in or near Scotland and is there anyone who can offer up a small meeting space for the day? Best regards, Ross Keeping IBM Spectrum Scale - Development Manager, People Manager IBM Systems UK - Manchester Development Lab Phone: (+44 161) 8362381-Line: 37642381 E-mail: ross.keeping at uk.ibm.com 3rd Floor, Maybrook House Manchester, M3 2EG United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Wed Aug 12 15:49:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 12 Aug 2015 14:49:27 +0000 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk>, Message-ID: I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 12 August 2015 15:43 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] fast ACL alter solution On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work fine for me. nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today not at all, at least not for me ;-( [root at n2 ~]# setfacl -m u:wsawdon:r-x /mak/sil/x [root at n2 ~]# echo $? 0 [root at n2 ~]# getfacl /mak/sil/x getfacl: Removing leading '/' from absolute path names # file: mak/sil/x # owner: root # group: root user::--- user:makaplan:rwx user:wsawdon:r-x group::--- mask::rwx other::--- [root at n2 ~]# nfs4_getfacl /mak/sil/x Operation to request attribute not supported. [root at n2 ~]# echo $? 1 From jonathan at buzzard.me.uk Wed Aug 12 17:29:00 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:29:00 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> Message-ID: <1439396940.3856.4.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 10:43 -0400, Marc A Kaplan wrote: > On GPFS-Linux-Redhat commands getfacl and setfacl DO seem to work > fine for me. > Yes they do, but they only set POSIX ACL's, and well most people are wanting to set NFSv4 ACL's so the getfacl and setfacl commands are of no use. > nfs4_getfacl and nfs4_setfacl ... NOT so much ... actually, today > not at all, at least not for me ;-( Yep they only work against an NFSv4 mounted file system with NFSv4 ACL's. So if you NFSv4 exported a GPFS file system from an AIX node and mounted it on a Linux node that would work for you. It might also work if you NFSv4 exported a GPFS file system using the userspace ganesha NFS server with an appropriate VFS backend for GPFS and mounted on Linux https://github.com/nfs-ganesha/nfs-ganesha However last time I checked such a GPFS VFS backend for ganesha was still under development. 
The RichACL stuff might also in theory work except it is not in mainline kernels and there is certainly no advertised support by IBM for GPFS using it. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jonathan at buzzard.me.uk Wed Aug 12 17:35:55 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 12 Aug 2015 17:35:55 +0100 Subject: [gpfsug-discuss] fast ACL alter solution In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <55CA6182.9010507@buzzard.me.uk> <55CA76C1.4050109@buzzard.me.uk> , Message-ID: <1439397355.3856.11.camel@buzzard.phy.strath.ac.uk> On Wed, 2015-08-12 at 14:49 +0000, Simon Thompson (Research Computing - IT Services) wrote: > I thought acls could either be posix or nfd4, but not both. Set when creating the file-system? > The options for ACL's on GPFS are POSIX, NFSv4, all which is mixed NFSv4/POSIX and finally Samba. The first two are self explanatory. The mixed mode is best given a wide berth in my opinion. The fourth is well lets say "undocumented" last time I checked. You can set it, and it shows up when you query the file system but what it does I can only speculate. Take a look at the Korn shell of mmchfs if you doubt it exists. Try it out on a test file system with mmchfs -k samba My guess though I have never verified it, is that it changes the schematics of the NFSv4 ACL's to more closely match those of NTFS ACL's. A bit like some of the other GPFS settings you can fiddle with to make GPFS behave more like an NTFS file system. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From C.J.Walker at qmul.ac.uk Thu Aug 13 15:23:07 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Thu, 13 Aug 2015 16:23:07 +0200 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Message-ID: <55CCA84B.1080600@qmul.ac.uk> I've set up a couple of VM hosts to export some of its GPFS filesystem via NFS to machines on that VM host[1,2]. Is live migration of VMs likely to work? Live migration isn't a hard requirement, but if it will work, it could make our life easier. Chris [1] AIUI, this is explicitly permitted by the licencing FAQ. [2] For those wondering why we are doing this, it's that some users want docker - and they can probably easily escape to become root on the VM. Doing it this way permits us (we hope) to only export certain bits of the GPFS filesystem. From S.J.Thompson at bham.ac.uk Thu Aug 13 15:32:18 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 13 Aug 2015 14:32:18 +0000 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55CCA84B.1080600@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. 
Simon From dhildeb at us.ibm.com Fri Aug 14 18:54:59 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 14 Aug 2015 10:54:59 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: Thanks for the replies Simon... Chris, are you using -v to give the container access to the nfs subdir (and hence to a gpfs subdir) (and hence achieve a level of multi-tenancy)? Even without containers, I wonder if this could allow users to run their own VMs as root as well...and preventing them from becoming root on gpfs... I'd love for you to share your experience (mgmt and perf) with this architecture once you get it up and running. Some side benefits of this architecture that we have been thinking about as well is that it allows both the containers and VMs to be somewhat ephemeral, while the gpfs continues to run in the hypervisor... To ensure VMotion works relatively smoothly, just ensure each VM is given a hostname to mount that always routes back to the localhost nfs server on each machine...and I think things should work relatively smoothly. Note you'll need to maintain the same set of nfs exports across the entire cluster as well, so taht when a VM moves to another machine it immediately has an available export to mount. Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/13/2015 07:33 AM Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host Sent by: gpfsug-discuss-bounces at gpfsug.org >I've set up a couple of VM hosts to export some of its GPFS filesystem >via NFS to machines on that VM host[1,2]. Provided all your sockets no the VM host are licensed. >Is live migration of VMs likely to work? > >Live migration isn't a hard requirement, but if it will work, it could >make our life easier. Live migration using a GPFS file-system on the hypervisor node should work (subject to the usual caveats of live migration). Whether live migration and your VM instances would still be able to NFS mount (assuming loopback address?) if they moved to a different hypervisor, pass, you might get weird NFS locks. And if they are still mounting from the original VM host, then you are not doing what the FAQ says you can do. Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Mon Aug 17 13:50:17 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 17 Aug 2015 12:50:17 +0000 Subject: [gpfsug-discuss] Metadata compression Message-ID: <2D1E2C5B-499D-46D3-AC27-765E3B40E340@nuance.com> Anyone have any practical experience here, especially using Flash, compressing GPFS metadata? IBM points out that they specifically DON?T support it on there devices (SVC/V9000/StoreWize) Spectrum Scale FAQ: https://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html?lang=en (look for the word compressed) But ? I could not find any blanket statements that it?s not supported outright. 
They don?t mention anything about data, and since the default for GPFS is mixing data and metadata on the same LUNs you?re more than likely compressing the metadata as well. :-) Also, no statements that you must split metadata from data when using compression. Bob Oesterlin Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Wed Aug 19 11:53:39 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Wed, 19 Aug 2015 12:53:39 +0200 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: References: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> Message-ID: <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Hi Marc, maybe a stupid question - is it expected that the 4.1.1 mmfind set of tools also works on a 4.1.0.8 environment ? -- Martin > On 11 Aug, 2015, at 21:45, Marc A Kaplan wrote: > > The mmfind command/script you may find in samples/ilm of 4.1.1 (July 2015) is completely revamped and immensely improved compared to any previous mmfind script you may have seen shipped in an older samples/ilm/mmfind. > > If you have a classic "find" job that you'd like to easily parallelize, give the new mmfind a shot and let us know how you make out! > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Wed Aug 19 14:18:14 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 19 Aug 2015 09:18:14 -0400 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> References: <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com> <09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Message-ID: mmfind in 4.1.1 depends on some new functionality added to mmapplypolicy in 4.1.1. Depending which find predicates you happen to use, the new functions in mmapplypolicy will be invoked (or not.) If you'd like to try it out - go ahead - it either works or it doesn't. If it doesn't you can also try using the new mmapplypolicy script and the new tsapolicy binary on the old GPFS system. BUT of course that's not supported. AFAIK, nothing bad will happen, but it's not supported. mmfind in 4.1.1 ships as a "sample", so it is not completely supported, but we will take bug reports and constructive criticism seriously, when you run it on a GPFS cluster that has been completely upgraded to 4.1.1. (Please don't complain that it does not work on a back level system.) For testing this kind of functionality, GPFS can be run on a single node or VM. You can emulate an NSD volume by "giving" mmcrnsd a GB sized file (or larger) instead of a block device. (Also not supported and not very swift but it works.) So there's no need to even "provision" a disk. --marc of GPFS -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamiedavis at us.ibm.com Wed Aug 19 14:25:35 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Wed, 19 Aug 2015 13:25:35 +0000 Subject: [gpfsug-discuss] mmfind in GPFS/Spectrum Scale 4.1.1 In-Reply-To: References: , <201508111811.t7BIBYt0004336@d03av04.boulder.ibm.com><09692703-7C0F-43D1-BBCA-80D38A0852E8@desy.de> Message-ID: <201508191343.t7JDhlaU022402@d01av04.pok.ibm.com> An HTML attachment was scrubbed... 
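To put Marc's throwaway-sandbox suggestion into commands: a minimal sketch of a one-node cluster with a plain file standing in for a disk, good enough for kicking the tyres of the 4.1.1 samples such as mmfind. Node names and paths are invented, and as noted above the file-backed NSD trick is unsupported and slow, so test use only.

# one-node cluster on a box that already has the GPFS 4.1.1 packages installed
mmcrcluster -N testnode1:quorum-manager -r /usr/bin/ssh -R /usr/bin/scp
mmchlicense server --accept -N testnode1
mmstartup -a

# a 2 GB file pretending to be a disk (unsupported, test only)
mkdir -p /var/mmfs/fakedisks
dd if=/dev/zero of=/var/mmfs/fakedisks/disk01 bs=1M count=2048

cat > /tmp/fake.nsd <<'EOF'
%nsd: device=/var/mmfs/fakedisks/disk01 nsd=fakensd01 servers=testnode1 usage=dataAndMetadata
EOF
mmcrnsd -F /tmp/fake.nsd

mmcrfs sandbox -F /tmp/fake.nsd -A yes -T /gpfs/sandbox
mmmount sandbox -a

# now the revamped sample can be exercised without touching production, e.g.
/usr/lpp/mmfs/samples/ilm/mmfind /gpfs/sandbox -type f -mtime +30 -ls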
URL: From usa-principal at gpfsug.org Thu Aug 20 14:23:41 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Thu, 20 Aug 2015 09:23:41 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Message-ID: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal From bbanister at jumptrading.com Thu Aug 20 16:42:09 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 20 Aug 2015 15:42:09 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. From Kevin.Buterbaugh at Vanderbilt.Edu Thu Aug 20 17:37:37 2015 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 20 Aug 2015 16:37:37 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. 
I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Aug 20 19:09:27 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 20 Aug 2015 18:09:27 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com>, <146800F3-6EB8-487C-AA9E-9707AED8C059@vanderbilt.edu> Message-ID: With my uk hat on, id suggest its also important to factor in IBM's ability to ship people in as well. I know last year there was an IBM GPFS event on the Monday at SC as I spoke there, I'm assuming the GPFS UG will really be an extended version of that, and there were quite a a lot in the audience for that. I know I made some really good contacts with both users and IBM at the event (and I encourage people to speak as its a great way of meeting people!). Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 20 August 2015 17:37 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Hi All, I feel sorry for Kristy, as she just simply isn?t going to be able to meet everyones? needs here. For example, I had already e-mailed Kristy off list expressing my hope that the GPFS US UG meeting could be on Tuesday the 17th. Why? Because, as Bryan points out, the DDN User Group meeting is typically on Monday. We have limited travel funds and so if the two meetings were on consecutive days that would allow me to attend both (we have both non-DDN and DDN GPFS storage here). I?d prefer Tuesday over Sunday because that would at least allow me to grab a few minutes on the conference show floor. If the meeting is on the Friday or Saturday before or after SC 15 then I will have to choose ? or possibly not go at all. But I think that Bryan is right ? everyone should express their preferences as soon as possible and then Kristy can have the unenviable task of trying to disappoint the least number of people! :-O Kevin On Aug 20, 2015, at 10:42 AM, Bryan Banister > wrote: Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 
3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From dhildeb at us.ibm.com Thu Aug 20 17:12:09 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 20 Aug 2015 09:12:09 -0700 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center From: Bryan Banister To: gpfsug main discussion list Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [ mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kallbac at iu.edu Thu Aug 20 20:00:21 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 20 Aug 2015 19:00:21 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 12:26:47 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 11:26:47 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. The docs are a little lacking in detail of how you create NSD disks on clients, I've tried using: %nsd: device=sdb2 nsd=cl0901u17_hawc_sdb2 servers=cl0901u17 pool=system.log failureGroup=90117 (and also with usage=metadataOnly as well), however mmcrsnd -F tells me "mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license designation" Which is correct as its a client system, though HAWC is supposed to be able to run on client systems. I know for LROC you have to set usage=localCache, is there a new value for using HAWC? I'm also a little unclear about failureGroups for this. The docs suggest setting the HAWC to be replicated for client systems, so I guess that means putting each client node into its own failure group? Thanks Simon From Robert.Oesterlin at nuance.com Wed Aug 26 12:46:59 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 11:46:59 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... 
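For comparison with the system.log stanza Simon shows above, here is a rough sketch of how an LROC device is normally defined on a client, since the thread keeps contrasting the two. The device and node names are borrowed from the examples in this thread and are illustrative only; the behaviour notes are my reading of the LROC docs, not something confirmed here:

  %nsd: device=/dev/sdb1
    nsd=cl0901u17_lroc_sdb1
    servers=cl0901u17
    usage=localCache

  # localCache NSDs take no storage pool; create them from the stanza file and,
  # as far as I know, restart GPFS on that node before mmdiag --lroc shows the device
  mmcrnsd -F lroc_stanza.txt

The open question for HAWC is whether an equivalent usage value exists for client-side system.log devices, since pool=system.log currently trips the server-license check in mmcrnsd.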
URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:23:36 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:23:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 12:46 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Not directly related to HWAC, but I found a bug in 4.1.1 that results in LROC NSDs not being properly formatted (they don?t work) - Reference APAR IV76242 . Still waiting for a fix. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 6:26 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] Using HAWC (write cache) Hi, I was wondering if anyone knows how to configure HAWC which was added in the 4.1.1 release (this is the hardened write cache) (http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectr um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) In particular I'm interested in running it on my client systems which have SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD for HAWC on our hypervisors as it buffers small IO writes, which sounds like what we want for running VMs which are doing small IO updates to the VM disk images stored on GPFS. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Aug 26 13:27:36 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 26 Aug 2015 12:27:36 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Wed Aug 26 13:50:44 2015 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 26 Aug 2015 12:50:44 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: , <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <201D6001C896B846A9CFC2E841986AC1454FFB0B@mailnycmb2a.winmail.deshaw.com> There is a more severe issue with LROC enabled in saveInodePtrs() which results in segfaults and loss of acknowledged writes, which has caused us to roll back all LROC for now. We are testing an efix (ref Defect 970773, IV76155) now which addresses this. But I would advise against running with LROC/HAWC in production without this fix. We experienced this on 4.1.0-6, but had the efix built against 4.1.1-1, so the exposure seems likely to be all 4.1 versions. Thx Paul Sent with Good (www.good.com) ________________________________ From: gpfsug-discuss-bounces at gpfsug.org on behalf of Oesterlin, Robert Sent: Wednesday, August 26, 2015 8:27:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Aug 26 13:57:56 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 26 Aug 2015 12:57:56 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) Message-ID: Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. 
> >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss From C.J.Walker at qmul.ac.uk Wed Aug 26 14:46:56 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Wed, 26 Aug 2015 14:46:56 +0100 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: <55DDC350.8010603@qmul.ac.uk> On 13/08/15 15:32, Simon Thompson (Research Computing - IT Services) wrote: > >> I've set up a couple of VM hosts to export some of its GPFS filesystem >> via NFS to machines on that VM host[1,2]. > > Provided all your sockets no the VM host are licensed. Yes, they are. > >> Is live migration of VMs likely to work? >> >> Live migration isn't a hard requirement, but if it will work, it could >> make our life easier. > > Live migration using a GPFS file-system on the hypervisor node should work > (subject to the usual caveats of live migration). > > Whether live migration and your VM instances would still be able to NFS > mount (assuming loopback address?) if they moved to a different > hypervisor, pass, you might get weird NFS locks. And if they are still > mounting from the original VM host, then you are not doing what the FAQ > says you can do. > Yes, that's the intent - VMs get access to GPFS from the hypervisor - that complies with the licence and, presumably, should get better performance. It sounds like our problem would be the NFS end of this if we try a live migrate. Chris From C.J.Walker at qmul.ac.uk Wed Aug 26 15:15:48 2015 From: C.J.Walker at qmul.ac.uk (Christopher J. Walker) Date: Wed, 26 Aug 2015 15:15:48 +0100 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: References: <55CCA84B.1080600@qmul.ac.uk> Message-ID: <55DDCA14.8010103@qmul.ac.uk> On 14/08/15 18:54, Dean Hildebrand wrote: > Thanks for the replies Simon... > > Chris, are you using -v to give the container access to the nfs subdir > (and hence to a gpfs subdir) (and hence achieve a level of > multi-tenancy)? -v option to what? > Even without containers, I wonder if this could allow > users to run their own VMs as root as well...and preventing them from > becoming root on gpfs... > > I'd love for you to share your experience (mgmt and perf) with this > architecture once you get it up and running. A quick and dirty test: From a VM: -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240 real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s user 0m0.005s 0m0.007s 0m0.006s 0m0.003s 0m0.002s 0m0.004s sys 0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s From the underlying hypervisor. 
real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s user 0m0.006s 0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s sys 0m5.447s 0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s So there's a factor of just over 2 slowdown. As it's still 500MB/s, I think it's good enough for now. The machine has a 10Gbit/s network connection and both hypervisor and VM are running SL6. > Some side benefits of this > architecture that we have been thinking about as well is that it allows > both the containers and VMs to be somewhat ephemeral, while the gpfs > continues to run in the hypervisor... Indeed. This is another advantage. If we were running Debian, it would be possible to export part of a filesystem to a VM. Which would presumably work. In redhat derived OSs (we are currently using Scientific Linux), I don't believe it is - hence exporting via NFS. > > To ensure VMotion works relatively smoothly, just ensure each VM is > given a hostname to mount that always routes back to the localhost nfs > server on each machine...and I think things should work relatively > smoothly. Note you'll need to maintain the same set of nfs exports > across the entire cluster as well, so taht when a VM moves to another > machine it immediately has an available export to mount. Yes, we are doing this. Simon alludes to potential problems at the NFS layer on live migration. Otherwise, yes indeed the setup should be fine. I'm not familiar enough with the details of NFS - but I have heard NFS described as "a stateless filesystem with state". It's the stateful bits I'm concerned about. Chris > > Dean Hildebrand > IBM Almaden Research Center > > > Inactive hide details for "Simon Thompson (Research Computing - IT > Services)" ---08/13/2015 07:33:16 AM--->I've set up a couple"Simon > Thompson (Research Computing - IT Services)" ---08/13/2015 07:33:16 > AM--->I've set up a couple of VM hosts to export some of its GPFS > filesystem >via NFS to machines on that > > From: "Simon Thompson (Research Computing - IT Services)" > > To: gpfsug main discussion list > Date: 08/13/2015 07:33 AM > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host > Sent by: gpfsug-discuss-bounces at gpfsug.org > > ------------------------------------------------------------------------ > > > > > >I've set up a couple of VM hosts to export some of its GPFS filesystem > >via NFS to machines on that VM host[1,2]. > > Provided all your sockets no the VM host are licensed. > > >Is live migration of VMs likely to work? > > > >Live migration isn't a hard requirement, but if it will work, it could > >make our life easier. > > Live migration using a GPFS file-system on the hypervisor node should work > (subject to the usual caveats of live migration). > > Whether live migration and your VM instances would still be able to NFS > mount (assuming loopback address?) if they moved to a different > hypervisor, pass, you might get weird NFS locks. And if they are still > mounting from the original VM host, then you are not doing what the FAQ > says you can do. 
> > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From tpathare at sidra.org Wed Aug 26 16:43:51 2015 From: tpathare at sidra.org (Tushar Pathare) Date: Wed, 26 Aug 2015 15:43:51 +0000 Subject: [gpfsug-discuss] Welcome to the "gpfsug-discuss" mailing list In-Reply-To: References: Message-ID: <06133E83-2DCB-4A1C-868A-CD4FDAC61A27@sidra.org> Hello Folks, This is Tushar Pathare from Sidra Medical & Research Centre.I am a HPC Administrator at Sidra. Before joining Sidra I worked with IBM for about 7 years with GPFS Test Team,Pune,India with partner lab being IBM Poughkeepsie,USA Sidra has total GPFS storage of about 1.5PB and growing.Compute power about 5000 cores acquired and growing. Sidra is into Next Generation Sequencing and medical research related to it. Its a pleasure being part of this group. Thank you. Tushar B Pathare High Performance Computing (HPC) Administrator General Parallel File System Scientific Computing Bioinformatics Division Research Sidra Medical and Research Centre PO Box 26999 | Doha, Qatar Burj Doha Tower,Floor 8 D +974 44042250 | M +974 74793547 tpathare at sidra.org | www.sidra.org On 8/26/15, 5:04 PM, "gpfsug-discuss-bounces at gpfsug.org on behalf of gpfsug-discuss-request at gpfsug.org" wrote: >Welcome to the gpfsug-discuss at gpfsug.org mailing list! Hello and >welcome. > > Please introduce yourself to the members with your first post. > > A quick hello with an overview of how you use GPFS, your company >name, market sector and any other interesting information would be >most welcomed. > >Please let us know which City and Country you live in. > >Many thanks. > >GPFS UG Chair > > >To post to this list, send your email to: > > > >General information about the mailing list is at: > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >If you ever want to unsubscribe or change your options (eg, switch to >or from digest mode, change your password, etc.), visit your >subscription page at: > > http://gpfsug.org/mailman/options/gpfsug-discuss/tpathare%40sidra.org > > >You can also make such adjustments via email by sending a message to: > > gpfsug-discuss-request at gpfsug.org > >with the word `help' in the subject or body (don't include the >quotes), and you will get back a message with instructions. > >You must know your password to change your options (including changing >the password, itself) or to unsubscribe. It is: > > p3nguins > >Normally, Mailman will remind you of your gpfsug.org mailing list >passwords once every month, although you can disable this if you >prefer. This reminder will also include instructions on how to >unsubscribe or change your account options. There is also a button on >your options page that will email your current password to you. Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. 
Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center. From dhildeb at us.ibm.com Thu Aug 27 01:22:52 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 26 Aug 2015 17:22:52 -0700 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Thu Aug 27 08:42:34 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 27 Aug 2015 07:42:34 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Dean, Thanks. 
I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},${backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques]"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" > wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. 
The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From ckrafft at de.ibm.com Thu Aug 27 10:36:27 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Thu, 27 Aug 2015 11:36:27 +0200 Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) Message-ID: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> Dear GPFS folks, I know - it may not be an optimal setup for GPFS ... but is someone willing to share technical best practices when using GPFS with SVC (and XiV). >From the past I remember some recommendations concerning the nr of vDisks in SVC and certainly block size (XiV=1M) could be an issue. Thank you very much for sharing any insights with me. Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06057114.gif Type: image/gif Size: 1851 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Thu Aug 27 12:58:12 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 27 Aug 2015 11:58:12 +0000 Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) In-Reply-To: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> References: <201508270936.t7R9asQI012288@d06av08.portsmouth.uk.ibm.com> Message-ID: IBM in general doesn?t have a comprehensive set of best practices around Spectrum Scale (trying to get used to that!) and SVC or storage system like XIV (or HP 3PAR). From my IBM days (a few years back) I used both with GPFS successfully. I do recall some discussion regarding a larger block size, but haven?t seen any recent updates. (Scott Fadden, are you listening?) Larger block sizes are problematic for file systems with lots of small files. (like ours) - Since SVC is striping data across multiple storage LUNs, and GPFS is striping as well, what?s the possible impact? My thought would be to use image mode vdisks, but that sort of defeats the purpose/utility of SVC. - IBM specifically points out not to use compression on the SVC/V9000 with GPFS metadata, so if you use these features be careful. 
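To make the block-size discussion concrete, a hedged sketch of the kind of file system creation being weighed up here, lining the GPFS block size up with the 1M stripe mentioned for XIV. The device, stanza file and options are illustrative and not a recommendation from IBM or from this thread; as noted above, workloads with lots of small files may want a smaller block size:

  # NSDs carved from SVC vdisks, described in svc_nsd.txt
  mmcrnsd -F svc_nsd.txt
  # 1M GPFS block size to match the back-end stripe; scatter layout spreads blocks evenly across the vdisks
  mmcrfs gpfssvc -F svc_nsd.txt -B 1M -j scatter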
Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of Christoph Krafft Reply-To: gpfsug main discussion list Date: Thursday, August 27, 2015 at 4:36 AM To: "gpfsug-discuss at gpfsug.org" Subject: [gpfsug-discuss] Best Practices using GPFS with SVC (and XiV) Dear GPFS folks, I know - it may not be an optimal setup for GPFS ... but is someone willing to share technical best practices when using GPFS with SVC (and XiV). From the past I remember some recommendations concerning the nr of vDisks in SVC and certainly block size (XiV=1M) could be an issue. Thank you very much for sharing any insights with me. Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group ________________________________ Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH [cid:2__=8FBBF43DDFA7F6638f9e8a93df938690918c8FB@] Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany ________________________________ IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: ecblank.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06057114.gif Type: image/gif Size: 1851 bytes Desc: 06057114.gif URL: From S.J.Thompson at bham.ac.uk Thu Aug 27 15:17:19 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 27 Aug 2015 14:17:19 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> References: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: Oh yeah, I see what you mean, I've just looking on another cluster with LROC drives and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle. Simon From: , Robert > Reply-To: gpfsug main discussion list > Date: Wednesday, 26 August 2015 13:27 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Yep. Mine do too, initially. It seems after a number of days, they get marked as removed. In any case IBM confirmed it. So? tread lightly. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Wednesday, August 26, 2015 at 7:23 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hmm mine seem to be working which I created this morning (on a client node): mmdiag --lroc === mmdiag: lroc === LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running Cache inodes 1 dirs 1 data 1 Config: maxFile 0 stubFile 0 Max capacity: 190732 MB, currently in use: 4582 MB Statistics from: Tue Aug 25 14:54:52 2015 Total objects stored 4927 (4605 MB) recalled 81 (55 MB) objects failed to store 467 failed to recall 1 failed to inval 0 objects queried 0 (0 MB) not found 0 = 0.00 % objects invalidated 548 (490 MB) This was running 4.1.1-1. 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Aug 27 15:30:14 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 27 Aug 2015 14:30:14 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: <423C8D07-85BD-411B-88C7-3D37D0DD8FB5@nuance.com> Message-ID: <3B636593-906F-4AEC-A3DF-1A24376B4841@nuance.com> What do they say on that side of the pond? ?Bob?s your uncle!? :-) Yea, same for me. Pretty big oops if you ask me. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" Reply-To: gpfsug main discussion list Date: Thursday, August 27, 2015 at 9:17 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Oh yeah, I see what you mean, I've just looking on another cluster with LROC drives and they have all disappeared. They are still listed in mmlsnsd, but mmdiag --lroc shows the drive as "NULL"/Idle. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhildeb at us.ibm.com Thu Aug 27 20:24:50 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 27 Aug 2015 12:24:50 -0700 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Hi Simon, This appears to be a mistake, as using clients for the System.log pool should not require a server license (should be similar to lroc).... thanks for opening the PMR... Dean Hildebrand IBM Almaden Research Center From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 08/27/2015 12:42 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},$ {backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand Reply-To: gpfsug main discussion list Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. 
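Putting Dean's answer together with Simon's stanza, a hedged sketch of what client-side HAWC setup is expected to look like once the licensing check is sorted out. Node, device and pool names are taken from or modelled on the thread, the file system name gpfsclimb is a placeholder, and the --write-cache-threshold option is quoted from memory of the 4.1.1 HAWC documentation linked earlier, so verify it there before relying on it:

  %nsd: device=/dev/sdb2
    nsd=cl0901u17_hawc_sdb2
    servers=cl0901u17
    pool=system.log
    usage=metadataOnly
    failureGroup=90117

  # create the NSDs and add them to the file system
  mmcrnsd -F hawc_stanza.txt
  mmadddisk gpfsclimb -F hawc_stanza.txt
  # assumption: HAWC is switched on by setting a non-zero write cache threshold,
  # so that writes at or below this size are hardened in the system.log pool first
  mmchfs gpfsclimb --write-cache-threshold 64K

If the system.log pool is replicated across clients as the docs suggest, then giving each client's SSD its own failure group, as in Simon's stanza, seems to be the right reading.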
Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [attachment "graycol.gif" deleted by Dean Hildebrand/Almaden/IBM] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From dhildeb at us.ibm.com Thu Aug 27 21:36:26 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 27 Aug 2015 13:36:26 -0700 Subject: [gpfsug-discuss] Reexporting GPFS via NFS on VM host In-Reply-To: <55DDCA14.8010103@qmul.ac.uk> References: <55CCA84B.1080600@qmul.ac.uk> <55DDCA14.8010103@qmul.ac.uk> Message-ID: Hi Christopher, > > > > Chris, are you using -v to give the container access to the nfs subdir > > (and hence to a gpfs subdir) (and hence achieve a level of > > multi-tenancy)? 
> > -v option to what? I was referring to how you were using docker/containers to expose the NFS storage to the container...there are several different ways to do it and one way is to simply expose a directory to the container via the -v option https://docs.docker.com/userguide/dockervolumes/ > > > Even without containers, I wonder if this could allow > > users to run their own VMs as root as well...and preventing them from > > becoming root on gpfs... > > > > > > I'd love for you to share your experience (mgmt and perf) with this > > architecture once you get it up and running. > > A quick and dirty test: > > From a VM: > -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240 > real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s > user 0m0.005s 0m0.007s 0m0.006s 0m0.003s 0m0.002s 0m0.004s > sys 0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s > > From the underlying hypervisor. > > real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s > user 0m0.006s 0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s > sys 0m5.447s 0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s > > So there's a factor of just over 2 slowdown. > > As it's still 500MB/s, I think it's good enough for now. Interesting test... I assume you have VLANs setup so that the data doesn't leave the VM, go to the network switch, and then back to the nfs server in the hypervisor again? Also, there might be a few NFS tuning options you could try, like increasing the number of nfsd threads, etc...but there are extra data copies occuring so the perf will never match... > > The machine has a 10Gbit/s network connection and both hypervisor and VM > are running SL6. > > > Some side benefits of this > > architecture that we have been thinking about as well is that it allows > > both the containers and VMs to be somewhat ephemeral, while the gpfs > > continues to run in the hypervisor... > > Indeed. This is another advantage. > > If we were running Debian, it would be possible to export part of a > filesystem to a VM. Which would presumably work. I'm not aware of this...is this through VirtFS or something else? In redhat derived OSs > (we are currently using Scientific Linux), I don't believe it is - hence > exporting via NFS. > > > > > To ensure VMotion works relatively smoothly, just ensure each VM is > > given a hostname to mount that always routes back to the localhost nfs > > server on each machine...and I think things should work relatively > > smoothly. Note you'll need to maintain the same set of nfs exports > > across the entire cluster as well, so taht when a VM moves to another > > machine it immediately has an available export to mount. > > Yes, we are doing this. > > Simon alludes to potential problems at the NFS layer on live migration. > Otherwise, yes indeed the setup should be fine. I'm not familiar enough > with the details of NFS - but I have heard NFS described as "a stateless > filesystem with state". It's the stateful bits I'm concerned about. Are you using v3 or v4? It doesn't really matter though, as in either case, gpfs would handle the state failover parts... Ideally the vM would umount the local nfs server, do VMotion, and then mount the new local nfs server, but given there might be open files...it makes sense that this may not be possible... 
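For anyone trying to reproduce this setup, a rough sketch of the per-hypervisor pieces being described in this thread: identical exports on every hypervisor, plus a hostname inside the VM that always resolves to whichever hypervisor the VM is currently running on. Paths, addresses and export options are illustrative, not Chris's actual configuration:

  # /etc/exports, kept identical on every hypervisor so a migrated VM finds the same export
  /gpfs/vmdata  192.168.100.0/24(rw,no_root_squash,sync)

  # inside each VM, "nfs-local" resolves to an address the local hypervisor answers on
  # /etc/fstab entry in the VM:
  nfs-local:/gpfs/vmdata  /data  nfs  vers=3,hard,intr  0 0

Whether the NFS client state survives a live migration to a different (but identically configured) server is exactly the open question above.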
Dean > > Chris > > > > > Dean Hildebrand > > IBM Almaden Research Center > > > > > > Inactive hide details for "Simon Thompson (Research Computing - IT > > Services)" ---08/13/2015 07:33:16 AM--->I've set up a couple"Simon > > Thompson (Research Computing - IT Services)" ---08/13/2015 07:33:16 > > AM--->I've set up a couple of VM hosts to export some of its GPFS > > filesystem >via NFS to machines on that > > > > From: "Simon Thompson (Research Computing - IT Services)" > > > > To: gpfsug main discussion list > > Date: 08/13/2015 07:33 AM > > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host > > Sent by: gpfsug-discuss-bounces at gpfsug.org > > > > ------------------------------------------------------------------------ > > > > > > > > > > >I've set up a couple of VM hosts to export some of its GPFS filesystem > > >via NFS to machines on that VM host[1,2]. > > > > Provided all your sockets no the VM host are licensed. > > > > >Is live migration of VMs likely to work? > > > > > >Live migration isn't a hard requirement, but if it will work, it could > > >make our life easier. > > > > Live migration using a GPFS file-system on the hypervisor node should work > > (subject to the usual caveats of live migration). > > > > Whether live migration and your VM instances would still be able to NFS > > mount (assuming loopback address?) if they moved to a different > > hypervisor, pass, you might get weird NFS locks. And if they are still > > mounting from the original VM host, then you are not doing what the FAQ > > says you can do. > > > > Simon > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aquan at o2.pl Fri Aug 28 16:12:23 2015 From: aquan at o2.pl (=?UTF-8?Q?aquan?=) Date: Fri, 28 Aug 2015 17:12:23 +0200 Subject: [gpfsug-discuss] =?utf-8?q?Unix_mode_bits_and_mmapplypolicy?= Message-ID: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Hello, This is my first time here. I'm a computer science student from Poland and I use GPFS during my internship at DESY. GPFS is a completely new experience to me, I don't know much about file systems and especially those used on clusters. I would like to ask about the unix mode bits and mmapplypolicy. What I noticed is that when I do the following: 1. Recursively call chmod on some directory (i.e. chmod -R 0777 some_directory) 2. Call mmapplypolicy to list mode (permissions), the listed modes of files don't correspond exactly to the modes that I set with chmod. However, if I wait a bit between step 1 and 2, the listed modes are correct. It seems that the mode bits are updated somewhat asynchronically and if I run mmapplypolicy too soon, they will contain old values. I would like to ask if it is possible to make sure that before calling mmputacl, the mode bits of that directory will be up to date on the list generated by a policy? - Omer Sakarya -------------- next part -------------- An HTML attachment was scrubbed... 
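As a guess at the kind of scan Omer is running, a minimal sketch of a list policy that reports mode bits. The rule and file names are made up for illustration; MODE, SHOW and VARCHAR are from the policy SQL documentation, so check the exact spelling there:

  /* listmode.pol */
  RULE EXTERNAL LIST 'modes' EXEC ''
  RULE 'showmodes' LIST 'modes' SHOW(VARCHAR(MODE))

  # write the candidate list under the /tmp/modes prefix without executing anything
  mmapplypolicy /gpfs/some_directory -P listmode.pol -f /tmp/modes -I defer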
URL: From makaplan at us.ibm.com Fri Aug 28 17:55:21 2015 From: makaplan at us.ibm.com (makaplan at us.ibm.com) Date: Fri, 28 Aug 2015 16:55:21 +0000 Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy In-Reply-To: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> References: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: An HTML attachment was scrubbed... URL: From kallbac at iu.edu Sat Aug 29 09:23:45 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Sat, 29 Aug 2015 04:23:45 -0400 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> Message-ID: <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A wrote: > It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. > > Best, > Kristy > > On Aug 20, 2015, at 12:12 PM, Dean Hildebrand wrote: > >> Hi Bryan, >> >> Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) >> >> Dean Hildebrand >> IBM Almaden Research Center >> >> >> Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi >> >> From: Bryan Banister >> To: gpfsug main discussion list >> Date: 08/20/2015 08:42 AM >> Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location >> Sent by: gpfsug-discuss-bounces at gpfsug.org >> >> >> >> Hi Kristy, >> >> Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! >> >> I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule >> >> I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. 
>> >> Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: >> 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) >> 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? >> 2) Will IBM presenters be available on the Saturday before or after? >> 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? >> 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? >> 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? >> >> As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. >> >> I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! >> >> Cheers, >> -Bryan >> >> -----Original Message----- >> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org >> Sent: Thursday, August 20, 2015 8:24 AM >> To: gpfsug-discuss at gpfsug.org >> Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location >> >> Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. >> >> Many thanks to Janet for her efforts in organizing the venue and speakers. >> >> Date: Wednesday, October 7th >> Place: IBM building at 590 Madison Avenue, New York City >> Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well >> :-) >> >> Agenda >> >> IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. >> IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team >> >> We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. >> >> We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. >> >> As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. 
>> 
>> Best, 
>> Kristy 
>> GPFS UG - USA Principal 
>> _______________________________________________ 
>> gpfsug-discuss mailing list 
>> gpfsug-discuss at gpfsug.org 
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
>> 
>> _______________________________________________ 
>> gpfsug-discuss mailing list 
>> gpfsug-discuss at gpfsug.org 
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
>> 
>> 
>> _______________________________________________ 
>> gpfsug-discuss mailing list 
>> gpfsug-discuss at gpfsug.org 
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
> 
> _______________________________________________ 
> gpfsug-discuss mailing list 
> gpfsug-discuss at gpfsug.org 
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss 

-------------- next part -------------- 
An HTML attachment was scrubbed... 
URL: 
-------------- next part -------------- 
A non-text attachment was scrubbed... 
Name: signature.asc 
Type: application/pgp-signature 
Size: 495 bytes 
Desc: Message signed with OpenPGP using GPGMail 
URL: 

From bbanister at jumptrading.com Sat Aug 29 22:17:44 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Sat, 29 Aug 2015 21:17:44 +0000 Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy In-Reply-To: References: <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> 

Before I try these mmfsctl commands, what are the implications of suspending writes? I assume the entire file system will be quiesced? What if NSD clients are non-responsive to this operation? Does a deadlock occur or is there a risk of a deadlock? Thanks in advance! -Bryan 

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of makaplan at us.ibm.com Sent: Friday, August 28, 2015 11:55 AM To: gpfsug-discuss at gpfsug.org Cc: gpfsug-discuss at gpfsug.org Subject: Re: [gpfsug-discuss] Unix mode bits and mmapplypolicy 

This is due to a design trade-off in mmapplypolicy. Mmapplypolicy bypasses locks and caches - so it doesn't "see" inode & metadata changes until they are flushed to disk. I believe this is hinted at in our publications. You can force a flush with `mmfsctl fsname suspend-write; mmfsctl fsname resume` 

----- Original message ----- From: aquan > Sent by: gpfsug-discuss-bounces at gpfsug.org To: gpfsug-discuss at gpfsug.org Cc: Subject: [gpfsug-discuss] Unix mode bits and mmapplypolicy Date: Fri, Aug 28, 2015 11:12 AM 

Hello, This is my first time here. I'm a computer science student from Poland and I use GPFS during my internship at DESY. GPFS is a completely new experience to me; I don't know much about file systems, especially those used on clusters.
I would like to ask about the Unix mode bits and mmapplypolicy. What I noticed is that when I do the following: 
1. Recursively call chmod on some directory (e.g. chmod -R 0777 some_directory) 
2. Call mmapplypolicy to list mode (permissions), 
the listed modes of files don't correspond exactly to the modes that I set with chmod. However, if I wait a bit between step 1 and step 2, the listed modes are correct. It seems that the mode bits are updated somewhat asynchronously, and if I run mmapplypolicy too soon, the list will contain old values. I would like to ask if it is possible to make sure that, before calling mmputacl, the mode bits of that directory will be up to date in the list generated by a policy. - Omer Sakarya 

_______________________________________________ 
gpfsug-discuss mailing list 
gpfsug-discuss at gpfsug.org 
http://gpfsug.org/mailman/listinfo/gpfsug-discuss 

-------------- next part -------------- 
An HTML attachment was scrubbed... 
URL: 

From makaplan at us.ibm.com Sun Aug 30 01:16:02 2015 From: makaplan at us.ibm.com (makaplan at us.ibm.com) Date: Sun, 30 Aug 2015 00:16:02 +0000 Subject: [gpfsug-discuss] mmfsctl fs suspend-write Unix mode bits and mmapplypolicy In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com>, <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> Message-ID: <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> 

An HTML attachment was scrubbed... 
URL: 

From aquan at o2.pl Mon Aug 31 16:49:06 2015 From: aquan at o2.pl (aquan) Date: Mon, 31 Aug 2015 17:49:06 +0200 Subject: [gpfsug-discuss] mmfsctl fs suspend-write Unix mode bits and mmapplypolicy In-Reply-To: <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB05BF32C4@CHI-EXCHANGEW1.w2k.jumptrading.com> <17c34fa6.6a9410d7.55e07a57.3a3d1@o2.pl> <201508300016.t7U0Gxi9001977@d01av04.pok.ibm.com> Message-ID: <1834e8cf.3c47fde.55e47772.d9226@o2.pl> 

Thank you for responding to my post. Is there any other way to make sure that the mode bits are up to date when applying a policy? What would happen if a user changed the mode bits while the policy that executes mmputacl is running? Which change would win in the end, the mmputacl mode bits or the chmod mode bits? 

On 30 August 2015 at 02:16, makaplan at us.ibm.com wrote: 

I don't know exactly how suspend-write works. But I am NOT suggesting that it be used lightly. It's there for special situations. Obviously any process trying to change anything in the filesystem is going to be blocked until mmfsctl fs resume.
That should not cause a GPFS deadlock, but systems that depend on GPFS responding may be unhappy... 
-------------- next part -------------- 
An HTML attachment was scrubbed... 
URL:
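
Putting the pieces of this thread together, a minimal end-to-end sketch might look like the following (plain bash driving the standard GPFS commands). The device name "fs1", the target directory and the /tmp paths are made-up placeholders, and the LIST rule is only one way to have a policy scan record each file's mode bits; as Marc cautions above, suspend-write blocks every writer until the resume, so treat that step with care.

#!/bin/bash
# Sketch only: flush metadata, then scan with a policy that lists mode bits.
# "fs1", the paths and the output prefix are assumed names, not real ones.

FS=fs1                              # GPFS device name (placeholder)
TARGET=/gpfs/fs1/some_directory     # directory whose permissions were just changed (placeholder)

# 1. The metadata change we want the subsequent policy scan to see.
chmod -R 0777 "$TARGET"

# 2. Force dirty inodes out to disk, per Marc's suggestion.  Every writer on
#    the file system is blocked between suspend-write and resume.
mmfsctl "$FS" suspend-write
mmfsctl "$FS" resume

# 3. A small policy: the empty EXEC plus "-I defer" below makes mmapplypolicy
#    write the candidate list to a file instead of running an external program.
cat > /tmp/listmode.pol <<'EOF'
RULE EXTERNAL LIST 'modes' EXEC ''
RULE 'showmode' LIST 'modes' SHOW(MODE)
EOF

# 4. Scan the directory; the path-plus-mode list lands under the /tmp/modes prefix.
mmapplypolicy "$TARGET" -P /tmp/listmode.pol -I defer -f /tmp/modes

On a quiet file system the flush step could probably be replaced by simply waiting, as Omer observed, at the cost of possibly reading stale mode bits; the suspend-write/resume pair is the only way mentioned in this thread to force the issue.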