From jan.finnerman at load.se  Fri Apr  1 12:04:38 2016
From: jan.finnerman at load.se (Jan Finnerman Load)
Date: Fri, 1 Apr 2016 11:04:38 +0000
Subject: [gpfsug-discuss] Failure Group
Message-ID: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se>

Hi,

I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw Device Mapping. They just ran into an issue with adding some NSD disks. They claim that their current file system's NSD disks are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1 to 4000. So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message.

Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt
[Description: cid:image001.png at 01D18B5D.FFCEFE30]

His gpfsdisk.txt file looks like this.
[Description: cid:image002.png at 01D18B5D.FFCEFE30]

A listing of current disks shows all as belonging to failure group 4001.
[Description: cid:image003.png at 01D18B5D.FFCEFE30]

So, why can't he choose failure group 4001 when the existing disks are members of that group? If he creates a disk in another failure group, what are the pros and cons of that? I guess issues with replication not working as expected...

Brgds
///Jan

Jan Finnerman
Senior Technical consultant
Kista Science Tower
164 51 Kista
Mobil: +46 (0)70 631 66 26
Kontor: +46 (0)8 633 66 00/26
jan.finnerman at load.se

From Robert.Oesterlin at nuance.com  Fri Apr  1 16:08:02 2016
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 1 Apr 2016 15:08:02 +0000
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
Message-ID:

There are a number of good guides and Redbooks out from IBM that talk about the implementation of encryption in a Spectrum Scale (GPFS) cluster. What I'm looking for are other white papers, guidelines, and reference material on the sizing considerations. For instance, what's the performance overhead on an NSD server? If I have a well running cluster today, and I start using encryption, will my NSD servers need to be changed?
(more of them, more CPU, etc.) Any reference material or practical experience welcome.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid

From S.J.Thompson at bham.ac.uk  Fri Apr  1 16:10:00 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Fri, 1 Apr 2016 15:10:00 +0000
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
Message-ID:

I thought the enc/decrypt was done client side? So nothing on the nsd server?

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com]
Sent: 01 April 2016 16:08
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?

There are a number of good guides and Redbooks out from IBM that talk about the implementation of encryption in a Spectrum Scale (GPFS) cluster. What I'm looking for are other white papers, guidelines, and reference material on the sizing considerations. For instance, what's the performance overhead on an NSD server? If I have a well running cluster today, and I start using encryption, will my NSD servers need to be changed? (more of them, more CPU, etc.) Any reference material or practical experience welcome.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid

From Robert.Oesterlin at nuance.com  Fri Apr  1 16:17:20 2016
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 1 Apr 2016 15:17:20 +0000
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
Message-ID:

Hrm - I thought it was done at the server, meaning data in the client (pagepool) was unencrypted?

Well, Simon, one of us is wrong here :)

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid

From oehmes at gmail.com  Fri Apr  1 16:19:58 2016
From: oehmes at gmail.com (Sven Oehme)
Date: Fri, 1 Apr 2016 08:19:58 -0700
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
Message-ID:

its done on the client

On Fri, Apr 1, 2016 at 8:17 AM, Oesterlin, Robert <Robert.Oesterlin at nuance.com> wrote:
> Hrm - I thought it was done at the server, meaning data in the client
> (pagepool) was unencrypted?
>
> Well, Simon, one of us is wrong here :)
>
> Bob Oesterlin
> Sr Storage Engineer, Nuance HPC Grid
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From S.J.Thompson at bham.ac.uk  Fri Apr  1 16:26:31 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Fri, 1 Apr 2016 15:26:31 +0000
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
Message-ID:

Hmm. I thought part of the point was that different nodes (clients?) could have different encryption keys. And I also understood that it was encrypted to the client (i.e. potentially on the wire). Though the docs talk about at rest and decrypted on the way, so a little unclear. But I could be completely wrong on this.
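If it helps, my understanding is that the key assignment is done with normal policy rules, so different filesets (and so different groups of clients) can end up under different master keys. A rough sketch of the shape of the rules, where the key id '1', the RKM name 'RKM_1', the fileset 'projects' and the device name 'gpfs01' are all placeholders rather than anything from a real setup:

    # illustrative only: the key and RKM ids must already exist in the
    # key server back end (e.g. ISKLM) before this policy will apply
    cat > /tmp/encrypt.pol <<'EOF'
    RULE 'encProjects' ENCRYPTION 'E1' IS
         ALGO 'DEFAULTNISTSP800131A'
         KEYS('1:RKM_1')
    RULE 'setEncProjects' SET ENCRYPTION 'E1'
         WHERE FILESET_NAME LIKE 'projects'
    EOF
    # mmchpolicy replaces the whole active policy, so in practice these
    # rules would be folded into the existing placement rules first
    mmchpolicy gpfs01 /tmp/encrypt.pol

The file keys are wrapped with the master keys named in KEYS(), so the data is encrypted on disk and on the wire to the NSD servers, and only nodes that can fetch the master key from the RKM can decrypt it.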
Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com]
Sent: 01 April 2016 16:17
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties?

Hrm - I thought it was done at the server, meaning data in the client (pagepool) was unencrypted?

Well, Simon, one of us is wrong here :)

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid

From Robert.Oesterlin at nuance.com  Fri Apr  1 16:28:07 2016
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 1 Apr 2016 15:28:07 +0000
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
Message-ID: <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com>

Thanks for clearing that up!

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
507-269-0413

From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Sven Oehme
Reply-To: gpfsug main discussion list
Date: Friday, April 1, 2016 at 10:19 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties?

its done on the client

From S.J.Thompson at bham.ac.uk  Fri Apr  1 16:34:42 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Fri, 1 Apr 2016 15:34:42 +0000
Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties?
In-Reply-To: <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com>
Message-ID:

The docs (https://www.ibm.com/support/knowledgecenter/#!/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs200.doc/bl1adv_encryption.htm) do say at rest. It also says it protects against an untrusted node in a multi-cluster setup. I thought if you were root on such a box, whilst you can't read the file, you could delete it? Can we clear that up?

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com]
Sent: 01 April 2016 16:28
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties?

Thanks for clearing that up!

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
507-269-0413

From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Sven Oehme
Reply-To: gpfsug main discussion list
Date: Friday, April 1, 2016 at 10:19 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties?

its done on the client

From Robert.Oesterlin at nuance.com  Fri Apr  1 16:35:28 2016
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 1 Apr 2016 15:35:28 +0000
Subject: [gpfsug-discuss] Encryption - client performance penalties?
Message-ID:

Hit send too fast - so the question is now: what's the penalty on the client side?

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid
507-269-0413

From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Robert Oesterlin
Reply-To: gpfsug main discussion list
Date: Friday, April 1, 2016 at 10:28 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties?

Thanks for clearing that up!
Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mark.Bush at siriuscom.com Fri Apr 1 16:48:17 2016 From: Mark.Bush at siriuscom.com (Mark.Bush at siriuscom.com) Date: Fri, 1 Apr 2016 15:48:17 +0000 Subject: [gpfsug-discuss] ESS cabling guide Message-ID: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Is there such a thing as this? And if we want to use protocol nodes along with ESS could they use the same HMC as the ESS? Mark R. Bush | Solutions Architect Mobile: 210.237.8415 | mark.bush at siriuscom.com Sirius Computer Solutions | www.siriuscom.com 10100 Reunion Place, Suite 500, San Antonio, TX 78216 This message (including any attachments) is intended only for the use of the individual or entity to which it is addressed and may contain information that is non-public, proprietary, privileged, confidential, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited. This message may be viewed by parties at Sirius Computer Solutions other than those named in the message header. This message does not contain an official representation of Sirius Computer Solutions. If you have received this communication in error, notify Sirius Computer Solutions immediately and (i) destroy this message if a facsimile or (ii) delete this message immediately if this is an electronic communication. Thank you. Sirius Computer Solutions -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Apr 1 16:48:51 2016 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 1 Apr 2016 07:48:51 -0800 Subject: [gpfsug-discuss] Encryption - client performance penalties? In-Reply-To: References: Message-ID: <201604011549.u31Fn1u8016410@d01av03.pok.ibm.com> > From: "Oesterlin, Robert" > > Hit send too fast ? so the question is now ? what?s the penalty on > the client side? > Data is encrypted/decrypted on the path to/from the storage device -- it is in cleartext in the buffer pool. If you can read-ahead and write-behind you may not see the overhead of encryption. Random reads and synchronous writes will see it. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsallen at alcf.anl.gov Fri Apr 1 17:51:16 2016 From: bsallen at alcf.anl.gov (Allen, Benjamin S.) Date: Fri, 1 Apr 2016 16:51:16 +0000 Subject: [gpfsug-discuss] ESS cabling guide In-Reply-To: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> References: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Message-ID: Mark, There are SAS and networking diagrams in the ESS Install Procedure PDF that ships with the Spectrum Scale RAID download from FixCentral. You can use the same HMC as the ESS with any other Power hardware. There is a maximum of 48 hosts per HMC however. Depending on firmware levels, you may need to upgrade the HMC first for newer hardware. Ben > On Apr 1, 2016, at 10:48 AM, Mark.Bush at siriuscom.com wrote: > > Is there such a thing as this? And if we want to use protocol nodes along with ESS could they use the same HMC as the ESS? > > > Mark R. 
Bush | Solutions Architect
> Mobile: 210.237.8415 | mark.bush at siriuscom.com
> Sirius Computer Solutions | www.siriuscom.com
> 10100 Reunion Place, Suite 500, San Antonio, TX 78216
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From janfrode at tanso.net  Fri Apr  1 20:04:58 2016
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Fri, 1 Apr 2016 21:04:58 +0200
Subject: [gpfsug-discuss] Failure Group
In-Reply-To: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se>
Message-ID:

Hi :-)

I seem to remember failure group 4001 was common at some point, but can't see why.. Maybe it was just the default when no failure group was specified? Have you tried what happens if you use an empty failure group "::", does it default to "-1" on v3.4 -- or maybe "4001"? You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same.

Pros and cons of using another failure group.. Depends a bit on whether they're using any replication within the filesystem. If all other NSDs are in failure group 4001 -- they can't be doing any replication, so it doesn't matter much. The only side effect I know of is that new block allocations will first go round robin over the failure groups, then round robin within the failure group, so unless you have a similar number of disks in the two failure groups the disk load might become a bit uneven.

-jf

On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load <jan.finnerman at load.se> wrote:
> Hi,
>
> I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw Device Mapping. They just ran into an issue with adding some NSD disks. They claim that their current file system's NSD disks are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1 to 4000. So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message.
>
> Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt
> [image: Description: cid:image001.png at 01D18B5D.FFCEFE30]
>
> His gpfsdisk.txt file looks like this.
> [image: Description: cid:image002.png at 01D18B5D.FFCEFE30]
>
> A listing of current disks shows all as belonging to failure group 4001.
> [image: Description: cid:image003.png at 01D18B5D.FFCEFE30]
>
> So, why can't he choose failure group 4001 when the existing disks are members of that group? If he creates a disk in another failure group, what are the pros and cons of that?
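For reference, the failure group is just one field in the file fed to mmcrnsd, and mmchdisk can change it on existing disks afterwards, so trying e.g. 201 for the new disks is cheap. A rough sketch of the moving parts, where the disk, server and filesystem names are made up and the exact descriptor layout should be checked against the mmcrnsd man page for the 3.4 release in use:

    # gpfsdisk.txt - one descriptor per disk, roughly:
    # DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
    3:nsdserver1:nsdserver2:dataAndMetadata:201:nsd_new_01:system

    mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt            # create the NSDs
    mmadddisk gpfs01 -F D:\slask\gpfs\gpfsdisk.txt   # add them to the filesystem

    # or move the existing disks to a supported failure group instead
    # (empty fields in the descriptor mean "leave unchanged"):
    mmchdisk gpfs01 change -d "nsd_old_01:::dataAndMetadata:201::"

If the disks end up split across 201 and 4001 without replication in use, the main thing to watch is the uneven round-robin allocation mentioned above.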
I guess issues with replication not working as expected?. > > > Brgds > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > [image: CertTiv_sm] > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 446525C9-567E-4B06-ACA0-34865B35B109.png Type: image/png Size: 6144 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1].png Type: image/png Size: 6664 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA.png Type: image/png Size: 8584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png Type: image/png Size: 3320 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7A01C40C-085E-430C-BA95-D4238AFE5602.png Type: image/png Size: 1648 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png Type: image/png Size: 5565 bytes Desc: not available URL: From jan.finnerman at load.se Fri Apr 1 20:16:11 2016 From: jan.finnerman at load.se (Jan Finnerman Load) Date: Fri, 1 Apr 2016 19:16:11 +0000 Subject: [gpfsug-discuss] Failure Group In-Reply-To: References: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se>, Message-ID: <5E3DB2EE-D644-475A-AABA-FE49BFB84D91@load.se> Ok, I checked the replication status with mmlsfs the output is: -r=1, -m=1, -R=2,-M=2, which means they don't use replication, although they could activate it. I told them that they could add the new disks to the file system with a different failure group e.g. 201 It shouldn't matter that much if they coexist with the 4001 disks, since they don't replicate. I'll follow up on Monday. MVH Jan Finnerman Konsult Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 1 apr. 2016 kl. 21:05 skrev Jan-Frode Myklebust >: Hi :-) I seem to remember failure group 4001 was common at some point, but can't see why.. Maybe it was just the default when no failure group was specified ? Have you tried what happens if you use an empty failure group "::", does it default to "-1" on v3.4 -- or maybe "4001"? You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same. Pro's and cons of using another failure group.. Depends a bit on if they're using any replication within the filesystem. If all other NSDs are in failure group 4001 -- they can't be doing any replication, so it doesn't matter much. Only side effect I know of is that new block allocations will first go round robin over the failure groups, then round robin within the failure group, so unless you have similar amount of disks in the two failure groups the disk load might become a bit uneven. -jf On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load > wrote: Hi, I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw Device Mapping. 
They just ran in to an issue with adding some nsd disks. They claim that their current file system's nsddisks are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1>-->4000. So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message. Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt His gpfsdisk.txt file looks like this. <7A01C40C-085E-430C-BA95-D4238AFE5602.png> A listing of current disks show all as belonging to Failure group 4001 <446525C9-567E-4B06-ACA0-34865B35B109.png> So, Why can't he choose failure group 4001 when the existing disks are member of that group ? If he creates a disk in an other failure group, what's the pros and cons with that ? I guess issues with replication not working as expected.... Brgds ///Jan Jan Finnerman Senior Technical consultant Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 446525C9-567E-4B06-ACA0-34865B35B109.png Type: image/png Size: 6144 bytes Desc: 446525C9-567E-4B06-ACA0-34865B35B109.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1].png Type: image/png Size: 6664 bytes Desc: CertPowerSystems_sm[1].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA.png Type: image/png Size: 8584 bytes Desc: E895055E-B11B-47C3-BA29-E12D29D394FA.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png Type: image/png Size: 3320 bytes Desc: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7A01C40C-085E-430C-BA95-D4238AFE5602.png Type: image/png Size: 1648 bytes Desc: 7A01C40C-085E-430C-BA95-D4238AFE5602.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png Type: image/png Size: 5565 bytes Desc: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png URL: From janfrode at tanso.net Sat Apr 2 20:27:09 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Sat, 02 Apr 2016 19:27:09 +0000 Subject: [gpfsug-discuss] ESS cabling guide In-Reply-To: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> References: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Message-ID: Share hmc is no problem, also I think it should be fairly easy to use the xcat-setup on the EMS to deploy and manage the protocol nodes. -jf fre. 1. apr. 2016 kl. 17.48 skrev Mark.Bush at siriuscom.com < Mark.Bush at siriuscom.com>: > Is there such a thing as this? And if we want to use protocol nodes along > with ESS could they use the same HMC as the ESS? > > > Mark R. 
Bush | Solutions Architect
> Mobile: 210.237.8415 | mark.bush at siriuscom.com
> Sirius Computer Solutions | www.siriuscom.com
> 10100 Reunion Place, Suite 500, San Antonio, TX 78216
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From usa-principal at gpfsug.org  Mon Apr  4 21:52:37 2016
From: usa-principal at gpfsug.org (GPFS UG USA Principal)
Date: Mon, 4 Apr 2016 16:52:37 -0400
Subject: [gpfsug-discuss] GPFS/Spectrum Scale Upcoming US Events - Save the Dates
Message-ID: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org>

Hello all,

We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest.

1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC
https://www.spxxl.org/?q=New-York-City-2016
Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month.

Tentative Agenda:
- 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1
- Enhancements for CORAL from IBM
- Panel discussion with customers, topic TBD
- AFM and integration with Spectrum Protect
- Best practices for GPFS or Spectrum Scale tuning
- At least one site update

Location:
New York Academy of Medicine
1216 Fifth Avenue
New York, NY 10029

2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches!

Location: Argonne National Lab; more details and final agenda will come later.

Tentative Agenda:
9:00a-12:30p
9-9:30a - Opening Remarks
9:30-10a Deep Dive - Update on ESS
10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?)
11-11:30 Break
11:30a-Noon - Deep Dive - Protect & Scale integration
Noon-12:30p HDFS/Hadoop

12:30 - 1:30p Lunch

1:30p-5:00p
1:30 - 2:00p IBM AFM Update
2:00-2:30p ANL: AFM as a burst buffer
2:30-3:00p ANL: GHI (GPFS HPSS Integration)
3:00-3:30p Break
3:30p - 4:00p LANL or other site presentation
4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences
4:30p -5:00p Closing comments and Open Forum for Questions
5:00 - ? Beer hunting?

We hope you can attend one or both of these events.
Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Tue Apr 5 10:50:35 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Tue, 5 Apr 2016 09:50:35 +0000 Subject: [gpfsug-discuss] Excluding AFM Caches from mmbackup Message-ID: Hi All, Is there any intelligence yet for mmbackup to ignore AFM cache filesets? I guess a way to do this would be to dynamically re-write TSM include / exclude rules based on the extended attributes of the fileset; for example: 1. Scan the all the available filesets in the filesystem, determining which ones have the MISC_ATTRIBUTE=%P% set, 2. Lookup the junction points for the list of filesets returned in (1), 3. Write out EXCLUDE statements for TSM for each directory in (2), 4. Proceed with mmbackup using the new EXCLUDE rules. Presumably one could accomplish this by using the -P flag for mmbackup and writing your own rule to do this? But, maybe IBM could do this for me and put another flag on the mmbackup command :) Although... a blanket flag for ignoring AFM caches altogether might not be good if you want to backup changed files in a local-update cache. Anybody want to do this work for me? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From chair at spectrumscale.org Mon Apr 11 10:37:38 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 11 Apr 2016 10:37:38 +0100 Subject: [gpfsug-discuss] UK May Meeting Message-ID: Hi All, We are down to our last few places for the May user group meeting, if you are planning to come along, please do register: The draft agenda and registration for the day is at: http://www.eventbrite.com/e/spectrum-scale-gpfs-uk-user-group-spring-2016-t ickets-21724951916 If you have registered and aren't able to attend now, please do let us know so that we can free the slot for other members of the group. We also have 1 slot left on the agenda for a user talk, so if you have an interesting deployment or plans and are able to speak, please let me know! Thanks Simon From damir.krstic at gmail.com Mon Apr 11 14:15:30 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 11 Apr 2016 13:15:30 +0000 Subject: [gpfsug-discuss] backup and disaster recovery solutions Message-ID: We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pinto at scinet.utoronto.ca Mon Apr 11 15:34:54 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 10:34:54 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: Message-ID: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> Do you want backups or periodic frozen snapshots of the file system? Backups can entail some level of version control, so that you or end-users can get files back on certain points in time, in case of accidental deletions. Besides 1.5PB is a lot of material, so you may not want to take full snapshots that often. In that case, a combination of daily incremental backups using TSM with GPFS's mmbackup can be a good option. TSM also does a very good job at controlling how material is distributed across multiple tapes, and that is something that requires a lot of micro-management if you want a home grown solution of rsync+LTFS. On the other hand, you could use gpfs built-in tools such a mmapplypolicy to identify candidates for incremental backup, and send them to LTFS. Just more micro management, and you may have to come up with your own tool to let end-users restore their stuff, or you'll have to act on their behalf. Jaime Quoting Damir Krstic : > We have implemented 1.5PB ESS solution recently in our HPC environment. > Today we are kicking of backup and disaster recovery discussions so I was > wondering what everyone else is using for their backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life cycle > feature - so if the file is not touched for number of days, it's moved to a > tape (something like LTFS). > > Thanks in advance. > > DAmir > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jonathan at buzzard.me.uk Mon Apr 11 16:02:45 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 16:02:45 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> Message-ID: <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. 
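For what it is worth, once a TSM client and a server stanza in dsm.sys are in place, the mmbackup run Jaime is describing is close to a one-liner; a rough sketch, where the filesystem path, node class and server stanza name are placeholders rather than anything from a real site:

    # incremental backup of /gpfs/fs0 to the Spectrum Protect server defined
    # in the dsm.sys stanza TSM1, fanning the scan out over the NSD servers
    mmbackup /gpfs/fs0 -t incremental -N nsdnodes --tsm-servers TSM1

    # the first run (or after losing the shadow database) would be a full one:
    # mmbackup /gpfs/fs0 -t full -N nsdnodes --tsm-servers TSM1

mmbackup drives the same parallel policy scan as mmapplypolicy underneath, which is what makes it workable at this sort of scale.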
Is there any other viable option other than TSM for backing up 1.5PB of data? All other backup software does not handle this at all well. > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > I was not aware of a way of letting end users restore their stuff from *backup* for any of the major backup software while respecting the file system level security of the original file system. If you let the end user have access to the backup they can restore any file to any location which is generally not a good idea. I do have a concept of creating a read only Fuse mounted file system from a TSM point in time synthetic backup, and then using the shadow copy feature of Samba to enable restores using the "Previous Versions" feature of windows file manager. I got as far as getting a directory tree you could browse through but then had an enforced change of jobs and don't have access to a TSM server any more to continue development. Note if anyone from IBM is listening that would be a super cool feature. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From makaplan at us.ibm.com Mon Apr 11 16:11:24 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 11 Apr 2016 11:11:24 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: Message-ID: <201604111511.u3BFBVbg015832@d03av02.boulder.ibm.com> Since you write " so if the file is not touched for number of days, it's moved to a tape" - that is what we call the HSM feature. This is additional function beyond backup. IBM has two implementations. (1) TSM/HSM now called IBM Spectrum Protect. http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management (2) HPSS http://www.hpss-collaboration.org/ The GPFS (Spectrum Scale File System) policy feature supports both, so that mmapplypolicy and GPFS policy rules can be used to perform accelerated metadata scans to identify which files should be migrated. Also, GPFS supports on-demand recall (on application reads) of data from long term storage (tape) to GPFS storage (disk or SSD). See also DMAPI. From: Damir Krstic To: gpfsug main discussion list Date: 04/11/2016 09:16 AM Subject: [gpfsug-discuss] backup and disaster recovery solutions Sent by: gpfsug-discuss-bounces at spectrumscale.org We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From pinto at scinet.utoronto.ca Mon Apr 11 16:18:47 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 11:18:47 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> Message-ID: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> I heard as recently as last Friday from IBM support/vendors/developers of GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) offers a GUI interface that is user centric, and will allow for unprivileged users to restore their own material via a newer WebGUI (one that also works with Firefox, Chrome and on linux, not only IE on Windows). Users may authenticate via AD or LDAP, and traverse only what they would be allowed to via linux permissions and ACLs. Jaime Quoting Jonathan Buzzard : > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: >> Do you want backups or periodic frozen snapshots of the file system? >> >> Backups can entail some level of version control, so that you or >> end-users can get files back on certain points in time, in case of >> accidental deletions. Besides 1.5PB is a lot of material, so you may >> not want to take full snapshots that often. In that case, a >> combination of daily incremental backups using TSM with GPFS's >> mmbackup can be a good option. TSM also does a very good job at >> controlling how material is distributed across multiple tapes, and >> that is something that requires a lot of micro-management if you want >> a home grown solution of rsync+LTFS. > > Is there any other viable option other than TSM for backing up 1.5PB of > data? All other backup software does not handle this at all well. > >> On the other hand, you could use gpfs built-in tools such a >> mmapplypolicy to identify candidates for incremental backup, and send >> them to LTFS. Just more micro management, and you may have to come up >> with your own tool to let end-users restore their stuff, or you'll >> have to act on their behalf. >> > > I was not aware of a way of letting end users restore their stuff from > *backup* for any of the major backup software while respecting the file > system level security of the original file system. If you let the end > user have access to the backup they can restore any file to any location > which is generally not a good idea. > > I do have a concept of creating a read only Fuse mounted file system > from a TSM point in time synthetic backup, and then using the shadow > copy feature of Samba to enable restores using the "Previous Versions" > feature of windows file manager. > > I got as far as getting a directory tree you could browse through but > then had an enforced change of jobs and don't have access to a TSM > server any more to continue development. > > Note if anyone from IBM is listening that would be a super cool feature. > > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. 
> > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jtucker at pixitmedia.com Mon Apr 11 16:23:06 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Mon, 11 Apr 2016 16:23:06 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: Hi Having just commissioned three TSM setups and one with HSM, I can say that's not available from the standard APAR updates at present - however it would be rather nice... The current release is 7.1.5 http://www-01.ibm.com/support/docview.wss?uid=swg24041864 Jez On Mon, Apr 11, 2016 at 4:18 PM, Jaime Pinto wrote: > I heard as recently as last Friday from IBM support/vendors/developers of > GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) offers a > GUI interface that is user centric, and will allow for unprivileged users > to restore their own material via a newer WebGUI (one that also works with > Firefox, Chrome and on linux, not only IE on Windows). Users may > authenticate via AD or LDAP, and traverse only what they would be allowed > to via linux permissions and ACLs. > > Jaime > > > Quoting Jonathan Buzzard : > > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: >> >>> Do you want backups or periodic frozen snapshots of the file system? >>> >>> Backups can entail some level of version control, so that you or >>> end-users can get files back on certain points in time, in case of >>> accidental deletions. Besides 1.5PB is a lot of material, so you may >>> not want to take full snapshots that often. In that case, a >>> combination of daily incremental backups using TSM with GPFS's >>> mmbackup can be a good option. TSM also does a very good job at >>> controlling how material is distributed across multiple tapes, and >>> that is something that requires a lot of micro-management if you want >>> a home grown solution of rsync+LTFS. >>> >> >> Is there any other viable option other than TSM for backing up 1.5PB of >> data? All other backup software does not handle this at all well. >> >> On the other hand, you could use gpfs built-in tools such a >>> mmapplypolicy to identify candidates for incremental backup, and send >>> them to LTFS. Just more micro management, and you may have to come up >>> with your own tool to let end-users restore their stuff, or you'll >>> have to act on their behalf. >>> >>> >> I was not aware of a way of letting end users restore their stuff from >> *backup* for any of the major backup software while respecting the file >> system level security of the original file system. If you let the end >> user have access to the backup they can restore any file to any location >> which is generally not a good idea. 
>> >> I do have a concept of creating a read only Fuse mounted file system >> from a TSM point in time synthetic backup, and then using the shadow >> copy feature of Samba to enable restores using the "Previous Versions" >> feature of windows file manager. >> >> I got as far as getting a directory tree you could browse through but >> then had an enforced change of jobs and don't have access to a TSM >> server any more to continue development. >> >> Note if anyone from IBM is listening that would be a super cool feature. >> >> >> JAB. >> >> -- >> Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk >> Fife, United Kingdom. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> > > --- > Jaime Pinto > SciNet HPC Consortium - Compute/Calcul Canada > www.scinet.utoronto.ca - www.computecanada.org > University of Toronto > 256 McCaul Street, Room 235 > Toronto, ON, M5T1W5 > P: 416-978-2755 > C: 416-505-1477 > > ---------------------------------------------------------------- > This message was sent using IMP at SciNet Consortium, University of > Toronto. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominic.mueller at de.ibm.com Mon Apr 11 16:26:45 2016 From: dominic.mueller at de.ibm.com (Dominic Mueller-Wicke01) Date: Mon, 11 Apr 2016 17:26:45 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 51, Issue 9 In-Reply-To: References: Message-ID: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> Spectrum Protect backup (under the hood of mmbackup) and Spectrum Protect for Space Management (HSM) can be combined on the same data. There are some valuable integration topics between the products that can reduce the overall network traffic if using backup and HSM on the same files. With the combination of the products you have the ability to free file system space from cold data and migrate them out to tape and to have several versions of frequently used files in backup in the same file system. Greetings, Dominic. 
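As a very rough sketch of the policy side of that combination, where the exec script path, pool names, threshold and device name are placeholders to adapt to the local Spectrum Protect for Space Management installation rather than a tested configuration:

    cat > /tmp/hsm-migrate.pol <<'EOF'
    /* external pool served by the HSM client; the EXEC script path is a
       placeholder - use the interface script shipped with the HSM client
       or the sample under the GPFS samples/ilm directory */
    RULE EXTERNAL POOL 'hsm'
         EXEC '/var/mmfs/etc/mmpolicyExec-hsm' OPTS '-v'

    /* push files not accessed for 30+ days out to the HSM pool */
    RULE 'coldToTape' MIGRATE FROM POOL 'system'
         WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
         TO POOL 'hsm'
         WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
    EOF

    # dry run first, then let it migrate
    mmapplypolicy gpfs01 -P /tmp/hsm-migrate.pol -I test
    mmapplypolicy gpfs01 -P /tmp/hsm-migrate.pol

With mmbackup pointed at the same file system, the integration mentioned above is what avoids recalling migrated files just to back them up.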
______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com Vorsitzende des Aufsichtsrats: Martina Koederitz; Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen; Registergericht: Amtsgericht Stuttgart, HRB 243294 From: gpfsug-discuss-request at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Date: 11.04.2016 17:11 Subject: gpfsug-discuss Digest, Vol 51, Issue 9 Sent by: gpfsug-discuss-bounces at spectrumscale.org Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. backup and disaster recovery solutions (Damir Krstic) 2. Re: backup and disaster recovery solutions (Jaime Pinto) 3. Re: backup and disaster recovery solutions (Jonathan Buzzard) 4. Re: backup and disaster recovery solutions (Marc A Kaplan) ----- Message from Damir Krstic on Mon, 11 Apr 2016 13:15:30 +0000 ----- To: gpfsug main discussion list Subject: [gpfsug-discuss] backup and disaster recovery solutions We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir ----- Message from Jaime Pinto on Mon, 11 Apr 2016 10:34:54 -0400 ----- To: gpfsug main discussion list , Damir Krstic Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions Do you want backups or periodic frozen snapshots of the file system? Backups can entail some level of version control, so that you or end-users can get files back on certain points in time, in case of accidental deletions. Besides 1.5PB is a lot of material, so you may not want to take full snapshots that often. In that case, a combination of daily incremental backups using TSM with GPFS's mmbackup can be a good option. TSM also does a very good job at controlling how material is distributed across multiple tapes, and that is something that requires a lot of micro-management if you want a home grown solution of rsync+LTFS. On the other hand, you could use gpfs built-in tools such a mmapplypolicy to identify candidates for incremental backup, and send them to LTFS. Just more micro management, and you may have to come up with your own tool to let end-users restore their stuff, or you'll have to act on their behalf. Jaime Quoting Damir Krstic : > We have implemented 1.5PB ESS solution recently in our HPC environment. > Today we are kicking of backup and disaster recovery discussions so I was > wondering what everyone else is using for their backup? 
> > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life cycle > feature - so if the file is not touched for number of days, it's moved to a > tape (something like LTFS). > > Thanks in advance. > > DAmir > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. ----- Message from Jonathan Buzzard on Mon, 11 Apr 2016 16:02:45 +0100 ----- To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. Is there any other viable option other than TSM for backing up 1.5PB of data? All other backup software does not handle this at all well. > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > I was not aware of a way of letting end users restore their stuff from *backup* for any of the major backup software while respecting the file system level security of the original file system. If you let the end user have access to the backup they can restore any file to any location which is generally not a good idea. I do have a concept of creating a read only Fuse mounted file system from a TSM point in time synthetic backup, and then using the shadow copy feature of Samba to enable restores using the "Previous Versions" feature of windows file manager. I got as far as getting a directory tree you could browse through but then had an enforced change of jobs and don't have access to a TSM server any more to continue development. Note if anyone from IBM is listening that would be a super cool feature. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. ----- Message from "Marc A Kaplan" on Mon, 11 Apr 2016 11:11:24 -0400 ----- To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions Since you write "so if the file is not touched for number of days, it's moved to a tape" - that is what we call the HSM feature. This is additional function beyond backup. IBM has two implementations. 
(1) TSM/HSM now called IBM Spectrum Protect. http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management (2) HPSS http://www.hpss-collaboration.org/ The GPFS (Spectrum Scale File System) policy feature supports both, so that mmapplypolicy and GPFS policy rules can be used to perform accelerated metadata scans to identify which files should be migrated. Also, GPFS supports on-demand recall (on application reads) of data from long term storage (tape) to GPFS storage (disk or SSD). See also DMAPI. Marc A Kaplan From: Damir Krstic To: gpfsug main discussion list Date: 04/11/2016 09:16 AM Subject: [gpfsug-discuss] backup and disaster recovery solutions Sent by: gpfsug-discuss-bounces at spectrumscale.org We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0E436792.gif Type: image/gif Size: 21994 bytes Desc: not available URL: From jez.tucker at gpfsug.org Mon Apr 11 16:31:52 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Mon, 11 Apr 2016 16:31:52 +0100 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 51, Issue 9 In-Reply-To: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> References: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> Message-ID: <570BC368.9090307@gpfsug.org> Dominic, Speculatively, when is TSM converting from DMAPI to Light Weight Events? Is there an up-to-date slide share we can put on the UG website regarding the 7.1.11 / public roadmap? Jez On 11/04/16 16:26, Dominic Mueller-Wicke01 wrote: > > Spectrum Protect backup (under the hood of mmbackup) and Spectrum > Protect for Space Management (HSM) can be combined on the same data. > There are some valuable integration topics between the products that > can reduce the overall network traffic if using backup and HSM on the > same files. With the combination of the products you have the ability > to free file system space from cold data and migrate them out to tape > and to have several versions of frequently used files in backup in the > same file system. > > Greetings, Dominic. 
> > ______________________________________________________________________________________________________________ > Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical > Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com > > Vorsitzende des Aufsichtsrats: Martina Koederitz; Gesch?ftsf?hrung: > Dirk Wittkopp > Sitz der Gesellschaft: B?blingen; Registergericht: Amtsgericht > Stuttgart, HRB 243294 > > Inactive hide details for gpfsug-discuss-request---11.04.2016 > 17:11:55---Send gpfsug-discuss mailing list submissions to > gpfsugpfsug-discuss-request---11.04.2016 17:11:55---Send > gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > From: gpfsug-discuss-request at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Date: 11.04.2016 17:11 > Subject: gpfsug-discuss Digest, Vol 51, Issue 9 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > ------------------------------------------------------------------------ > > > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > Today's Topics: > > 1. backup and disaster recovery solutions (Damir Krstic) > 2. Re: backup and disaster recovery solutions (Jaime Pinto) > 3. Re: backup and disaster recovery solutions (Jonathan Buzzard) > 4. Re: backup and disaster recovery solutions (Marc A Kaplan) > > ----- Message from Damir Krstic on Mon, 11 > Apr 2016 13:15:30 +0000 ----- > *To:* > gpfsug main discussion list > *Subject:* > [gpfsug-discuss] backup and disaster recovery solutions > > We have implemented 1.5PB ESS solution recently in our HPC > environment. Today we are kicking of backup and disaster recovery > discussions so I was wondering what everyone else is using for their > backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life > cycle feature - so if the file is not touched for number of days, it's > moved to a tape (something like LTFS). > > Thanks in advance. > > DAmir > ----- Message from Jaime Pinto on Mon, 11 > Apr 2016 10:34:54 -0400 ----- > *To:* > gpfsug main discussion list , Damir > Krstic > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. 
> > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > > Jaime > > > > > Quoting Damir Krstic : > > > We have implemented 1.5PB ESS solution recently in our HPC environment. > > Today we are kicking of backup and disaster recovery discussions so > I was > > wondering what everyone else is using for their backup? > > > > In our old storage environment we simply rsync-ed home and software > > directories and projects were not backed up. > > > > With ESS we are looking for more of a GPFS based backup solution - > > something to tape possibly and also something that will have life cycle > > feature - so if the file is not touched for number of days, it's > moved to a > > tape (something like LTFS). > > > > Thanks in advance. > > > > DAmir > > > > > > > > > ************************************ > TELL US ABOUT YOUR SUCCESS STORIES > http://www.scinethpc.ca/testimonials > ************************************ > --- > Jaime Pinto > SciNet HPC Consortium - Compute/Calcul Canada > www.scinet.utoronto.ca - www.computecanada.org > University of Toronto > 256 McCaul Street, Room 235 > Toronto, ON, M5T1W5 > P: 416-978-2755 > C: 416-505-1477 > > ---------------------------------------------------------------- > This message was sent using IMP at SciNet Consortium, University of > Toronto. > > > > > ----- Message from Jonathan Buzzard on Mon, > 11 Apr 2016 16:02:45 +0100 ----- > *To:* > gpfsug-discuss at spectrumscale.org > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > > Do you want backups or periodic frozen snapshots of the file system? > > > > Backups can entail some level of version control, so that you or > > end-users can get files back on certain points in time, in case of > > accidental deletions. Besides 1.5PB is a lot of material, so you may > > not want to take full snapshots that often. In that case, a > > combination of daily incremental backups using TSM with GPFS's > > mmbackup can be a good option. TSM also does a very good job at > > controlling how material is distributed across multiple tapes, and > > that is something that requires a lot of micro-management if you want > > a home grown solution of rsync+LTFS. > > Is there any other viable option other than TSM for backing up 1.5PB of > data? All other backup software does not handle this at all well. > > > On the other hand, you could use gpfs built-in tools such a > > mmapplypolicy to identify candidates for incremental backup, and send > > them to LTFS. Just more micro management, and you may have to come up > > with your own tool to let end-users restore their stuff, or you'll > > have to act on their behalf. > > > > I was not aware of a way of letting end users restore their stuff from > *backup* for any of the major backup software while respecting the file > system level security of the original file system. If you let the end > user have access to the backup they can restore any file to any location > which is generally not a good idea. > > I do have a concept of creating a read only Fuse mounted file system > from a TSM point in time synthetic backup, and then using the shadow > copy feature of Samba to enable restores using the "Previous Versions" > feature of windows file manager. 
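For the Samba half of that idea, the usual mechanism behind "Previous Versions" is the shadow_copy2 VFS module; a hypothetical share fragment is sketched below. The share path and snapshot directory layout are assumptions, and the FUSE view of the TSM synthetic backups (exposed as @GMT-named directories under the snapdir) is the piece that would still have to be written:

    [homes]
        path = /gpfs/home/%U
        vfs objects = shadow_copy2
        shadow:snapdir = .backupview
        shadow:format = @GMT-%Y.%m.%d-%H.%M.%S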
> > I got as far as getting a directory tree you could browse through but > then had an enforced change of jobs and don't have access to a TSM > server any more to continue development. > > Note if anyone from IBM is listening that would be a super cool feature. > > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. > > > > > ----- Message from "Marc A Kaplan" on Mon, 11 > Apr 2016 11:11:24 -0400 ----- > *To:* > gpfsug main discussion list > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > Since you write "so if the file is not touched for number of days, > it's moved to a tape" - > that is what we call the HSM feature. This is additional function > beyond backup. IBM has two implementations. > > (1) TSM/HSM now called IBM Spectrum Protect. > _http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management_ > > (2) HPSS _http://www.hpss-collaboration.org/_ > > The GPFS (Spectrum Scale File System) policy feature supports both, so > that mmapplypolicy and GPFS policy rules can be used to perform > accelerated metadata scans to identify which files should be migrated. > > Also, GPFS supports on-demand recall (on application reads) of data > from long term storage (tape) to GPFS storage (disk or SSD). See also > DMAPI. > > > > Marc A Kaplan > > > > From: Damir Krstic > To: gpfsug main discussion list > Date: 04/11/2016 09:16 AM > Subject: [gpfsug-discuss] backup and disaster recovery solutions > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------------------------------------------------ > > > > We have implemented 1.5PB ESS solution recently in our HPC > environment. Today we are kicking of backup and disaster recovery > discussions so I was wondering what everyone else is using for their > backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life > cycle feature - so if the file is not touched for number of days, it's > moved to a tape (something like LTFS). > > Thanks in advance. > > DAmir _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org_ > __http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From makaplan at us.ibm.com Mon Apr 11 16:50:03 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 11 Apr 2016 11:50:03 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca><1460386965.19299.108.camel@buzzard.phy.strath.ac.uk><20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> IBM HSM products have always supported unprivileged, user triggered recall of any file. I am not familiar with any particular GUI, but from the CLI, it's easy enough: dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # pulling the first few blocks will trigger a complete recall if the file happens to be on HSM We also had IBM HSM for mainframe MVS, years and years ago, which is now called DFHSM for z/OS. (I remember using this from TSO...) If the file has been migrated to a tape archive, accessing the file will trigger a tape mount which can take a while, depending on how fast your tape mounting (robot?), operates and what other requests may be queued ahead of yours....! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Mon Apr 11 17:01:19 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 17:01:19 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <1460390479.19299.125.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 11:50 -0400, Marc A Kaplan wrote: > IBM HSM products have always supported unprivileged, user triggered > recall of any file. I am not familiar with any particular GUI, but > from the CLI, it's easy enough: Sure, but HSM != Backup. Right now secure aka with the appropriate level of privilege recall of *BACKUPS* ain't supported to my knowledge. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jez.tucker at gpfsug.org Mon Apr 11 17:01:37 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Mon, 11 Apr 2016 17:01:37 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <570BCA61.4010900@gpfsug.org> Yes, but since the dsmrootd in 6.3.4+ removal be aware that several commands now require sudo: jtucker at tsm-demo-01:~$ dsmls /mmfs1/afile IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 7, Release 1, Level 4.4 Client date/time: 11/04/16 16:58:18 (c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved. ActS ResS ResB FSt FName ANS9505E dsmls: cannot initialize the DMAPI interface. 
Reason: Operation not permitted jtucker at tsm-demo-01:~$ sudo dsmls /mmfs1/afile [sudo] password for jtucker: IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 7, Release 1, Level 4.4 Client date/time: 11/04/16 16:58:25 (c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved. ActS ResS ResB FSt FName 8 8 0 p afile Though, yes, a straight cat of the file as an unpriv user works fine. Jez On 11/04/16 16:50, Marc A Kaplan wrote: > IBM HSM products have always supported unprivileged, user triggered > recall of any file. I am not familiar with any particular GUI, but > from the CLI, it's easy enough: > > dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # > pulling the first few blocks will trigger a complete recall if the > file happens to be on HSM > > We also had IBM HSM for mainframe MVS, years and years ago, which is > now called DFHSM for z/OS. (I remember using this from TSO...) > > If the file has been migrated to a tape archive, accessing the file > will trigger a tape mount which can take a while, depending on how > fast your tape mounting (robot?), operates and what other requests may > be queued ahead of yours....! > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 11 17:03:00 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 12:03:00 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca><1460386965.19299.108.camel@buzzard.phy.strath.ac.uk><20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <20160411120300.171861d6i1iu1ltg@support.scinet.utoronto.ca> Hi Mark Personally I'm aware of the HSM features. However I was specifically referring to TSM Backup restore. I was told the new GUI for unprivileged users looks identical to what root would see, but unprivileged users would only be able to see material for which they have read permissions, and restore only to paths they have write permissions. The GUI is supposed to be a difference platform then the java/WebSphere like we have seen in the past to manage TSM. I'm looking forward to it as well. Jaime Quoting Marc A Kaplan : > IBM HSM products have always supported unprivileged, user triggered recall > of any file. I am not familiar with any particular GUI, but from the CLI, > it's easy enough: > > dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # > pulling the first few blocks will trigger a complete recall if the file > happens to be on HSM > > We also had IBM HSM for mainframe MVS, years and years ago, which is now > called DFHSM for z/OS. (I remember using this from TSO...) > > If the file has been migrated to a tape archive, accessing the file will > trigger a tape mount which can take a while, depending on how fast your > tape mounting (robot?), operates and what other requests may be queued > ahead of yours....! 
> > > > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jonathan at buzzard.me.uk Mon Apr 11 17:03:04 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 17:03:04 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: <1460390584.19299.127.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 11:18 -0400, Jaime Pinto wrote: > I heard as recently as last Friday from IBM support/vendors/developers > of GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) > offers a GUI interface that is user centric, and will allow for > unprivileged users to restore their own material via a newer WebGUI > (one that also works with Firefox, Chrome and on linux, not only IE on > Windows). Users may authenticate via AD or LDAP, and traverse only > what they would be allowed to via linux permissions and ACLs. > Hum, if they are they are not exactly advertising the feature or my Google foo is in extremely short supply today. Do you have a pointer to this on the web anywhere? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From mweil at genome.wustl.edu Mon Apr 11 17:05:17 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Mon, 11 Apr 2016 11:05:17 -0500 Subject: [gpfsug-discuss] GPFS 4.2 SMB with IPA Message-ID: <570BCB3D.1020602@genome.wustl.edu> Hello all, Is there any good documentation out there to integrate IPA with CES? Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From janfrode at tanso.net Mon Apr 11 17:43:21 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 11 Apr 2016 16:43:21 +0000 Subject: [gpfsug-discuss] GPFS 4.2 SMB with IPA In-Reply-To: <570BCB3D.1020602@genome.wustl.edu> References: <570BCB3D.1020602@genome.wustl.edu> Message-ID: As IPA is just an LDAP directory + kerberos, I believe you can follow example 7 in the mmuserauth manual. Another way would be to install your CES nodes into your domain outside of GPFS, and use the userdefined mmuserauth config. That's how I would have preferred to do it in an IPA managed linux environment. 
But, I believe there are still some problems with it overwriting /etc/krb5.keytab and /etc/nsswitch.conf, and stopping "sssd" unnecessarily on mmshutdown. So you might want to make the keytab and nsswitch immutable (chattr +i), and have some logic in f.ex. /var/mmfs/etc/mmfsup that restarts or somehow makes sure sssd is running. Oh.. and you'll need a shared NFS service principal in the krb5.keytab on all nodes to be able to use failover addresses.. and same for samba (which I think hides the ticket in /var/lib/samba/private/netlogon_creds_cli.tdb). -jf man. 11. apr. 2016 kl. 18.05 skrev Matt Weil : > Hello all, > > Is there any good documentation out there to integrate IPA with CES? > > Thanks > > Matt > > ____ > This email message is a private communication. The information > transmitted, including attachments, is intended only for the person or > entity to which it is addressed and may contain confidential, privileged, > and/or proprietary material. Any review, duplication, retransmission, > distribution, or other use of, or taking of any action in reliance upon, > this information by persons or entities other than the intended recipient > is unauthorized by the sender and is prohibited. If you have received this > message in error, please contact the sender immediately by return email and > delete the original message from all computer systems. Thank you. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.roland.pabel at gmail.com Tue Apr 12 09:03:34 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Tue, 12 Apr 2016 10:03:34 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes Message-ID: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> Hi everyone, we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is fairly new, we are still in the testing phase. A few days ago, we had some problems in the cluster which seemed to have started with deadlocks on a small number of nodes. To be better prepared for this scenario, I would like to install a callback for Event deadlockDetected. But this is a local event and the callback is executed on the client nodes, from which I cannot even send an email. Is it possible using mm-commands to instead delegate the callback to the servers (Nodeclass nsdNodes)? I guess it would be possible to use a callback of the form "ssh nsd0 /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 being available. The mm-command style "-N nsdNodes" would be more reliable in my opinion, because it would be run on all servers. On the servers, I can then check to actually only execute the script on the cluster manager. Thanks Roland -- Dr. Roland Pabel Regionales Rechenzentrum der Universität zu Köln (RRZK) Weyertal 121, Raum 3.07 D-50931 Köln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Tue Apr 12 12:54:39 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 12 Apr 2016 11:54:39 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> Message-ID: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Some general thoughts on "deadlocks" and automated deadlock detection. I personally don't like the term "deadlock"
as it implies a condition that won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC waiter? over a certain threshold. RPCs that wait on certain events can and do occur and they can take some time to complete. This is not necessarily a condition that is a problem, but you should be looking into them. GPFS does have automated deadlock detection and collection, but in the early releases it was ? well.. it?s not very ?robust?. With later releases (4.2) it?s MUCH better. I personally don?t rely on it because in larger clusters it can be too aggressive and depending on what?s really going on it can make things worse. This statement is my opinion and it doesn?t mean it?s not a good thing to have. :-) On the point of what commands to execute and what to collect ? be careful about long running callback scripts and executing commands on other nodes. Depending on what the issues is, you could end up causing a deadlock or making it worse. Some basic data collection, local to the node with the long RPC waiter is a good thing. Test them well before deploying them. And make sure that you don?t conflict with the automated collections. (which you might consider turning off) For my larger clusters, I dump the cluster waiters on a regular basis (once a minute: mmlsnode ?N waiters ?L), count the types and dump them into a database for graphing via Grafana. This doesn?t help me with true deadlock alerting, but it does give me insight into overall cluster behavior. If I see large numbers of long waiters I will (usually) go and investigate them on a cases by case basis. If you have large numbers of long RPC waiters on an ongoing basis, it's an indication of a larger problem that should be investigated. A few here and there is not a cause for real alarm in my experience. Last ? if you have a chance to upgrade to 4.1.1 or 4.2, I would encourage you to do so as the deadlock detection has improved quite a bit. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid robert.oesterlin at nuance.com From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Tuesday, April 12, 2016 at 3:03 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Executing Callbacks on other Nodes Hi everyone, we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is fairly new, we are still in the testing phase. A few days ago, we had some problems in the cluster which seemed to have started with deadlocks on a small number of nodes. To be better prepared for this scenario, I would like to install a callback for Event deadlockDetected. But this is a local event and the callback is executed on the client nodes, from which I cannot even send an email. Is it possible using mm-commands to instead delegate the callback to the servers (Nodeclass nsdNodes)? I guess it would be possible to use a callback of the form "ssh nsd0 /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 being available. The mm-command style "-N nsdNodes" would more reliable in my opinion, because it would be run on all servers. On the servers, I can then check to actually only execute the script on the cluster manager. Thanks Roland -- Dr. 
Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=CwIFAw&c=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY&r=LPDewt1Z4o9eKc86MXmhqX-45Cz1yz1ylYELF9olLKU&m=c7jzNm-H6SdZMztP1xkwgySivoe4FlOcI2pS2SCJ8K8&s=AfohxS7tz0ky5C8ImoufbQmQpdwpo4wEO7cSCzHPCD0&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.roland.pabel at gmail.com Tue Apr 12 14:25:33 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Tue, 12 Apr 2016 15:25:33 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> Hi Bob, thanks for your remarks. I already understood that deadlocks are more timeouts than "tangled up balls of code". I was not (yet) planning on changing the whole routine, I'd just like to get a notice when something unexpected happens in the cluster. So, first, I just want to write these notices into a file and email it once it reaches a certain size. >From what you are saying, it sounds like it is worth upgrading to 4.1.1.x . We are planning a maintenance next month, I'll try to get this into the todo- list. Upgrading beyond this is going require a longer preparation, unless the prerequisite of "RHEL 6.4 or later" as stated on the IBM FAQ is irrelevant. Our clients still run RHEL 6.3. Best regards, Roland > Some general thoughts on ?deadlocks? and automated deadlock detection. > > I personally don?t like the term ?deadlock? as it implies a condition that > won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC > waiter? over a certain threshold. RPCs that wait on certain events can and > do occur and they can take some time to complete. This is not necessarily a > condition that is a problem, but you should be looking into them. > GPFS does have automated deadlock detection and collection, but in the early > releases it was ? well.. it?s not very ?robust?. With later releases (4.2) > it?s MUCH better. I personally don?t rely on it because in larger clusters > it can be too aggressive and depending on what?s really going on it can > make things worse. This statement is my opinion and it doesn?t mean it?s > not a good thing to have. :-) > On the point of what commands to execute and what to collect ? be careful > about long running callback scripts and executing commands on other nodes. > Depending on what the issues is, you could end up causing a deadlock or > making it worse. Some basic data collection, local to the node with the > long RPC waiter is a good thing. Test them well before deploying them. And > make sure that you don?t conflict with the automated collections. (which > you might consider turning off) > For my larger clusters, I dump the cluster waiters on a regular basis (once > a minute: mmlsnode ?N waiters ?L), count the types and dump them into a > database for graphing via Grafana. This doesn?t help me with true deadlock > alerting, but it does give me insight into overall cluster behavior. 
If I > see large numbers of long waiters I will (usually) go and investigate them > on a cases by case basis. If you have large numbers of long RPC waiters on > an ongoing basis, it's an indication of a larger problem that should be > investigated. A few here and there is not a cause for real alarm in my > experience. > Last ? if you have a chance to upgrade to 4.1.1 or 4.2, I would encourage > you to do so as the deadlock detection has improved quite a bit. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > robert.oesterlin at nuance.com > > From: > ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > > > Date: Tuesday, April 12, 2016 at 3:03 AM > To: gpfsug main discussion list > > > Subject: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi everyone, > > we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is > fairly new, we are still in the testing phase. A few days ago, we had some > problems in the cluster which seemed to have started with deadlocks on a > small number of nodes. To be better prepared for this scenario, I would > like to install a callback for Event deadlockDetected. But this is a local > event and the callback is executed on the client nodes, from which I cannot > even send an email. > > Is it possible using mm-commands to instead delegate the callback to the > servers (Nodeclass nsdNodes)? > > I guess it would be possible to use a callback of the form "ssh nsd0 > /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 > being available. The mm-command style "-N nsdNodes" would more reliable in > my opinion, because it would be run on all servers. On the servers, I can > then check to actually only execute the script on the cluster manager. > Thanks > > Roland > -- > Dr. Roland Pabel > Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) > Weyertal 121, Raum 3.07 > D-50931 K?ln > > Tel.: +49 (221) 470-89589 > E-Mail: pabel at uni-koeln.de > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listi > nfo_gpfsug-2Ddiscuss&d=CwIFAw&c=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY& > r=LPDewt1Z4o9eKc86MXmhqX-45Cz1yz1ylYELF9olLKU&m=c7jzNm-H6SdZMztP1xkwgySivoe4 > FlOcI2pS2SCJ8K8&s=AfohxS7tz0ky5C8ImoufbQmQpdwpo4wEO7cSCzHPCD0&e= -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Tue Apr 12 15:09:10 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 12 Apr 2016 14:09:10 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> Message-ID: <59C81E1E-59CC-40C4-8A7E-73CC88F0741F@nuance.com> Hi Roland I ran into that issue as well ? if you are running 6.3 you need to update to get to the later levels. RH 6.3 is getting a bit dated, so an upgrade might be a good idea ? but I all too well how hard it is to push through those updates! 
Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Tuesday, April 12, 2016 at 8:25 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi Bob, thanks for your remarks. I already understood that deadlocks are more timeouts than "tangled up balls of code". I was not (yet) planning on changing the whole routine, I'd just like to get a notice when something unexpected happens in the cluster. So, first, I just want to write these notices into a file and email it once it reaches a certain size. From what you are saying, it sounds like it is worth upgrading to 4.1.1.x . We are planning a maintenance next month, I'll try to get this into the todo- list. Upgrading beyond this is going require a longer preparation, unless the prerequisite of "RHEL 6.4 or later" as stated on the IBM FAQ is irrelevant. Our clients still run RHEL 6.3. Best regards, Roland -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Apr 12 23:01:40 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 12 Apr 2016 18:01:40 -0400 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <201604122201.u3CM1o7d031628@d01av02.pok.ibm.com> My understanding is (someone will correct me if I'm wrong) ... GPFS does not have true deadlock detection. As you say it has time outs. The argument is: As a practical matter, it makes not much difference to a sysadmin or user -- if things are gummed up "too long" they start to smell like a deadlock, so we may as well intervene as though there were a true technical deadlock. A genuine true deadlock is a situation where things are gummed up, there is no progress, and one can prove that there will be no progress, no matter how long one waits. E.g. Classically, you have locked resource A and I have locked resource B and now I decide I need resource A and I am waiting indefinitely long for that. And you have decided you need resouce B and you are waiting indefinitely for that. We are then deadlocked. Deadlock can occur on a single node or over multiple nodes. Technically it may be possible to execute a deadlock detection protocol that would identify cyclic, deadlocking dependencies, but it was decided that, for GPFS, it would be more practical to detect "very long waiters"... From: "Oesterlin, Robert" Some general thoughts on ?deadlocks? and automated deadlock detection. I personally don?t like the term ?deadlock? as it implies a condition that won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC waiter? over a certain threshold. RPCs that wait on certain events can and do occur and they can take some time to complete. This is not necessarily a condition that is a problem, but you should be looking into them. GPFS does have automated deadlock detection and collection, but in the early releases it was ? well.. it?s not very ?robust?. With later releases (4.2) it?s MUCH better. I personally don?t rely on it because in larger clusters it can be too aggressive and depending on what?s really going on it can make things worse. This statement is my opinion and it doesn?t mean it?s not a good thing to have. :-) ... 
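Coming back to the original question in this thread, a minimal sketch of registering such a callback is below. The helper script path and its contents are invented for illustration; it only appends a notice to a local file, which a periodic job on the NSD servers or the cluster manager could then collect and mail:

    # Contents of the (hypothetical) /root/bin/deadlock-callback.sh on every node:
    #!/bin/bash
    echo "$(date '+%F %T') event=$1 node=$2" >> /var/mmfs/tmp/deadlock-events.log

    # Register it for the local deadlockDetected event, asynchronously so the daemon does
    # not wait on the callback; %eventName and %myNode are mmaddcallback parameter variables:
    mmaddcallback deadlockNotify --command /root/bin/deadlock-callback.sh \
        --event deadlockDetected --async --parms "%eventName %myNode"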
-------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 14 15:19:58 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 14 Apr 2016 15:19:58 +0100 Subject: [gpfsug-discuss] May user group, call for help! Message-ID: Hi All, For the UK May user group meeting, we are hoping to be able to film the sessions so that we can post as many as talks as possible (permission permitting!) online after the event. In order to do this, we require some kit to film the sessions with ... If you are attending the day and have a video camera that we might be able to borrow, please let me or Claire know! If we don't get support from the community then we won't be able to film and share the talks afterwards! So if you are coming along and have something you'd be happy for us to use for the two days, please do let us know! Thanks Simon (UK Group Chair) From Robert.Oesterlin at nuance.com Thu Apr 14 19:10:20 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 18:10:20 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore Message-ID: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> I?m getting these messages (repeating) in the mmfslog after I restored an NSD node ( relocated to a new physical system) with mmsddrestore - the server seems normal otherwise - what should I do? Thu Apr 14 13:44:48.800 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.1' failed (2) Thu Apr 14 13:44:48.801 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) Thu Apr 14 13:44:48.802 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.2' failed (2) Thu Apr 14 13:44:48.803 2016: [N] Load both paxos local files bad Thu Apr 14 13:44:48.804 2016: [N] Open /var/mmfs/ccr/ccr.paxos.1 failed (2) Thu Apr 14 13:44:48.805 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.1' failed (2) Thu Apr 14 13:44:48.806 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) Thu Apr 14 13:44:48.807 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.2' failed (2) Thu Apr 14 13:44:48.808 2016: [N] Load both paxos local files bad Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Thu Apr 14 19:22:41 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 14 Apr 2016 18:22:41 +0000 Subject: [gpfsug-discuss] GPFS 4.2 and 4.1 in multi-cluster environment Message-ID: <7635681D-31ED-461B-82A0-F17DA19DDFF4@vanderbilt.edu> Hi All, We have a multi-cluster environment consisting of: 1) a ?traditional? HPC cluster running on commodity hardware, and 2) a DDN based cluster which is mounted to the HPC cluster and also exports to researchers around campus using both CNFS and SAMBA / CTDB. Both of these cluster are currently running GPFS 4.1.0.8 efix 21. We are considering doing upgrades in May. I would like to take the HPC cluster to GPFS 4.2.0.x not just because that?s the current version, but to get some of the QoS features introduced in 4.2. However, it may not be possible to take the DDN cluster to GPFS 4.2. I?ve got another inquiry in to them about their plans, but the latest information I have is that they only support up thru GPFS 4.1.1.x. I know that it should be possible to run with the HPC cluster at GPFS 4.2.0.x and the DDN cluster at 4.1.1.x ? my question is - is anyone actually doing that? Any suggestions / warnings? 
I should mention that this question is motivated by the fact that a couple of years ago when both clusters were running GPFS 3.5.0.x, we got them out of sync on the PTF levels (I think the HPC cluster was at PTF 19 and the DDN cluster at PTF 11) and it caused problems. Because of that, we have tried to keep them in sync as much as possible. Thanks in advance, all? ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu Apr 14 20:33:17 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 14 Apr 2016 19:33:17 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> Message-ID: I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. -jf tor. 14. apr. 2016 kl. 20.10 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > I?m getting these messages (repeating) in the mmfslog after I restored an > NSD node ( relocated to a new physical system) with mmsddrestore - the > server seems normal otherwise - what should I do? > > Thu Apr 14 13:44:48.800 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.1' failed (2) > Thu Apr 14 13:44:48.801 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) > Thu Apr 14 13:44:48.802 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.2' failed (2) > Thu Apr 14 13:44:48.803 2016: [N] Load both paxos local files bad > Thu Apr 14 13:44:48.804 2016: [N] Open /var/mmfs/ccr/ccr.paxos.1 failed (2) > Thu Apr 14 13:44:48.805 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.1' failed (2) > Thu Apr 14 13:44:48.806 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) > Thu Apr 14 13:44:48.807 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.2' failed (2) > Thu Apr 14 13:44:48.808 2016: [N] Load both paxos local files bad > > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 14 20:39:02 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 19:39:02 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> Message-ID: <4668D451-7C58-456C-B160-54642C07C155@nuance.com> Yea ? turning of CCR means shutting down the entire cluster. Not an option. CCR is VERY POORLY documented. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Jan-Frode Myklebust > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:33 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. 
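Spelled out, the suggested sequence looks roughly like the sketch below (the config server names are placeholders). Note that disabling CCR requires the daemon to be down on every node, which is exactly the objection raised in the reply that follows:

    mmshutdown -a                                  # CCR can only be disabled with GPFS down cluster-wide
    mmchcluster --ccr-disable -p nsd01 -s nsd02    # revert to primary/backup configuration servers
    rm -f /var/mmfs/ccr/ccr.paxos.*                # only on the node with the damaged CCR state
    mmchcluster --ccr-enable                       # convert back to CCR
    mmstartup -a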
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 14 21:35:46 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 20:35:46 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: <4668D451-7C58-456C-B160-54642C07C155@nuance.com> References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> <4668D451-7C58-456C-B160-54642C07C155@nuance.com> Message-ID: <035C8381-5C9E-41A5-9DBC-55AEF25B14CC@nuance.com> Following up to my own problem?. It would appear mmsdrrestore doesn?t work (well) with quorum nodes in a CCR enabled cluster. So: change node to non-quorum mmsdrrestore change back to quorum Hey IBM ? how about we document this! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Robert Oesterlin > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:39 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore Yea ? turning of CCR means shutting down the entire cluster. Not an option. CCR is VERY POORLY documented. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Jan-Frode Myklebust > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:33 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chekh at stanford.edu Fri Apr 15 00:30:51 2016 From: chekh at stanford.edu (Alex Chekholko) Date: Thu, 14 Apr 2016 16:30:51 -0700 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <5710282B.6060603@stanford.edu> ++ On 04/12/2016 04:54 AM, Oesterlin, Robert wrote: > For my larger clusters, I dump the cluster waiters on a regular basis > (once a minute: mmlsnode ?N waiters ?L), count the types and dump them > into a database for graphing via Grafana. -- Alex Chekholko chekh at stanford.edu 347-401-4860 From dr.roland.pabel at gmail.com Fri Apr 15 16:50:21 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Fri, 15 Apr 2016 17:50:21 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <5710282B.6060603@stanford.edu> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> Message-ID: <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> Hi, In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So running it every 30 seconds is a bit close. I'll try running it once a minute and then incorporating this into our graphing. Maybe the command is so slow for me because a few nodes are down? Is there a parameter to mmlsnode to configure the timeout? Thanks, Roland > ++ > > On 04/12/2016 04:54 AM, Oesterlin, Robert wrote: > > For my larger clusters, I dump the cluster waiters on a regular basis > > (once a minute: mmlsnode ?N waiters ?L), count the types and dump them > > into a database for graphing via Grafana. -- Dr. 
Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Fri Apr 15 17:02:08 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 15 Apr 2016 16:02:08 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> Message-ID: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> This command is just using ssh to all the nodes and dumping the waiter information and collecting it. That means if the node is down, slow to respond, or there are a large number of nodes, it could take a while to return. In my 400-500 node clusters this command usually take less than 10 seconds. I do prefix the command with a timeout value in case a node is hung up and ssh never returns (which it sometimes does, and that?s not the fault of GPFS) Something like this: timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L This means I get incomplete information, but if you don?t you end up piling up a lot of hung up commands. I would check over your cluster carefully to see if there are other issues that might cause ssh to hang up ? which could impact other GPFS commands that distribute via ssh. Another approach would be to dump the waiters locally on each node, send node specific information to the database, and then sum it up using the graphing software. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 10:50 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi, In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So running it every 30 seconds is a bit close. I'll try running it once a minute and then incorporating this into our graphing. Maybe the command is so slow for me because a few nodes are down? Is there a parameter to mmlsnode to configure the timeout? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tortay at cc.in2p3.fr Fri Apr 15 17:06:41 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Fri, 15 Apr 2016 18:06:41 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Message-ID: <57111191.4050200@cc.in2p3.fr> Hello, I have a testbed cluster where I have setup AFM for an incremental NFS migration between 2 GPFS filesystems in the same cluster. This is with Spectrum Scale 4.1.1-5 on Linux (CentOS 7). The documentation states: "On a GPFS data source, AFM moves all user extended attributes and ACLs, and file sparseness is maintained." (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) If I'm not mistaken, I have a GPFS data source (since I'm doing a migration from GPFS to GPFS). 
While file sparseness is mostly maintained, user extended attributes and ACLs in the source/home filesystem do not appear to be migrated to the target/cache filesystem (same goes for basic tests with ACLs): % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 getfattr: Removing leading '/' from absolute path names # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 user.mfiles:sha2-256 % While on the target filesystem: % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 % Am I missing something ? Is there another meaning to "user extended attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | From oehmes at gmail.com Fri Apr 15 17:12:26 2016 From: oehmes at gmail.com (Sven Oehme) Date: Fri, 15 Apr 2016 12:12:26 -0400 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: If you can wait a few more month we will have stats for this in Zimon. Sven On Apr 15, 2016 12:02 PM, "Oesterlin, Robert" wrote: > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while to > return. In my 400-500 node clusters this command usually take less than 10 > seconds. I do prefix the command with a timeout value in case a node is > hung up and ssh never returns (which it sometimes does, and that?s not the > fault of GPFS) Something like this: > > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up > piling up a lot of hung up commands. I would check over your cluster > carefully to see if there are other issues that might cause ssh to hang up > ? which could impact other GPFS commands that distribute via ssh. > > Another approach would be to dump the waiters locally on each node, send > node specific information to the database, and then sum it up using the > graphing software. > > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: on behalf of Roland > Pabel > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So > running it every 30 seconds is a bit close. I'll try running it once a > minute > and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? > Is there a parameter to mmlsnode to configure the timeout? > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Fri Apr 15 17:48:14 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 15 Apr 2016 16:48:14 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: Excellent! I have Zimon fully deployed and this will make my life much easier. :-) Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 11:12 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes If you can wait a few more month we will have stats for this in Zimon. Sven -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Sat Apr 16 10:23:32 2016 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Sat, 16 Apr 2016 14:53:32 +0530 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <57111191.4050200@cc.in2p3.fr> References: <57111191.4050200@cc.in2p3.fr> Message-ID: <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> Hi, Can you check if AFM was enabled at home cluster using "mmafmconfig enable" command? What is the fileset mode are you using ? Regards, Venkat ------------------------------------------------------------------- Venkateswara R Puvvada/India/IBM at IBMIN vpuvvada at in.ibm.com From: Loic Tortay To: gpfsug-discuss at spectrumscale.org Date: 04/15/2016 09:35 PM Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, I have a testbed cluster where I have setup AFM for an incremental NFS migration between 2 GPFS filesystems in the same cluster. This is with Spectrum Scale 4.1.1-5 on Linux (CentOS 7). The documentation states: "On a GPFS data source, AFM moves all user extended attributes and ACLs, and file sparseness is maintained." (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) If I'm not mistaken, I have a GPFS data source (since I'm doing a migration from GPFS to GPFS). While file sparseness is mostly maintained, user extended attributes and ACLs in the source/home filesystem do not appear to be migrated to the target/cache filesystem (same goes for basic tests with ACLs): % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 getfattr: Removing leading '/' from absolute path names # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 user.mfiles:sha2-256 % While on the target filesystem: % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 % Am I missing something ? Is there another meaning to "user extended attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tortay at cc.in2p3.fr Sat Apr 16 10:40:12 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Sat, 16 Apr 2016 11:40:12 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> References: <57111191.4050200@cc.in2p3.fr> <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> Message-ID: <5712087C.9060608@cc.in2p3.fr> On 16/04/2016 11:23, Venkateswara R Puvvada wrote: > Hi, > > Can you check if AFM was enabled at home cluster using "mmafmconfig > enable" command? What is the fileset mode are you using ? > Hello, AFM was enabled for the 2 home filesets/NFS exports with "mmafmconfig enable /fs1/zone1" & "mmafmconfig enable /fs1/zone2". The fileset mode is read-only for both cache filesets. Loïc. -- | Loïc Tortay - IN2P3 Computing Centre | > Regards, > Venkat > ------------------------------------------------------------------- > Venkateswara R Puvvada/India/IBM at IBMIN > vpuvvada at in.ibm.com > > > > > From: Loic Tortay > To: gpfsug-discuss at spectrumscale.org > Date: 04/15/2016 09:35 PM > Subject: [gpfsug-discuss] Extended attributes and ACLs with > AFM-based "NFS migration" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello, > I have a testbed cluster where I have setup AFM for an incremental NFS > migration between 2 GPFS filesystems in the same cluster. This is with > Spectrum Scale 4.1.1-5 on Linux (CentOS 7). > > The documentation states: "On a GPFS data source, AFM moves all user > extended attributes and ACLs, and file sparseness is maintained." > (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) > > If I'm not mistaken, I have a GPFS data source (since I'm doing a > migration from GPFS to GPFS). > > While file sparseness is mostly maintained, user extended attributes and > ACLs in the source/home filesystem do not appear to be migrated to the > target/cache filesystem (same goes for basic tests with ACLs): > % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > getfattr: Removing leading '/' from absolute path names > # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > user.mfiles:sha2-256 > % > While on the target filesystem: > % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > % > > Am I missing something ? Is there another meaning to "user extended > attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2931 bytes Desc: S/MIME Cryptographic Signature URL: From viccornell at gmail.com Mon Apr 18 14:41:36 2016 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 18 Apr 2016 14:41:36 +0100 Subject: [gpfsug-discuss] AFM Question Message-ID: Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to "failover"
AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 18 14:54:14 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 18 Apr 2016 09:54:14 -0400 Subject: [gpfsug-discuss] GPFS on ZFS? Message-ID: <20160418095414.10636zytueeqmupy@support.scinet.utoronto.ca> Since we can not get GNR outside ESS/GSS appliances, is anybody using ZFS for software raid on commodity storage? Thanks Jaime --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From dr.roland.pabel at gmail.com Mon Apr 18 16:10:02 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Mon, 18 Apr 2016 17:10:02 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: <7692100.SyKvSf6dcU@soliton.rrz.uni-koeln.de> Hi Bob, I'll try the second approach, i.e, collecting "mmfsadm dump waiters" locally and then summing the values up, since it can be done without the overhead of ssh. You mentioned mmlsnode starts all these ssh commands and that made me look into the file itself. I then noticed most of the mm commands are actually scripts. This helps a lot with regards to my original question. mmdsh seems to do what I need. Thanks, Roland > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while to > return. In my 400-500 node clusters this command usually take less than 10 > seconds. I do prefix the command with a timeout value in case a node is > hung up and ssh never returns (which it sometimes does, and that?s not the > fault of GPFS) Something like this: > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up piling > up a lot of hung up commands. I would check over your cluster carefully to > see if there are other issues that might cause ssh to hang up ? which could > impact other GPFS commands that distribute via ssh. > Another approach would be to dump the waiters locally on each node, send > node specific information to the database, and then sum it up using the > graphing software. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: > ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > > > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > > > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So > running it every 30 seconds is a bit close. 
I'll try running it once a > minute and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? > Is there a parameter to mmlsnode to configure the timeout? > > -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From JRLang at uwyo.edu Mon Apr 18 17:28:25 2016 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Mon, 18 Apr 2016 16:28:25 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <7692100.SyKvSf6dcU@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> <7692100.SyKvSf6dcU@soliton.rrz.uni-koeln.de> Message-ID: Roland Here's a tool written by NCAR that provides waiter information on a per node bases using a light weight daemon on the monitored node. I have been using it for a while and it has helped me find and figure out long waiter nodes. It might do what you are looking for. https://sourceforge.net/projects/gpfsmonitorsuite/ jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Roland Pabel Sent: Monday, April 18, 2016 9:10 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi Bob, I'll try the second approach, i.e, collecting "mmfsadm dump waiters" locally and then summing the values up, since it can be done without the overhead of ssh. You mentioned mmlsnode starts all these ssh commands and that made me look into the file itself. I then noticed most of the mm commands are actually scripts. This helps a lot with regards to my original question. mmdsh seems to do what I need. Thanks, Roland > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while > to return. In my 400-500 node clusters this command usually take less > than 10 seconds. I do prefix the command with a timeout value in case > a node is hung up and ssh never returns (which it sometimes does, and > that?s not the fault of GPFS) Something like this: > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up > piling up a lot of hung up commands. I would check over your cluster > carefully to see if there are other issues that might cause ssh to > hang up ? which could impact other GPFS commands that distribute via ssh. > Another approach would be to dump the waiters locally on each node, > send node specific information to the database, and then sum it up > using the graphing software. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: > s at spe ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > org>> > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > org>> > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. > So running it every 30 seconds is a bit close. I'll try running it > once a minute and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? 
> Is there a parameter to mmlsnode to configure the timeout? > > -- Dr. Roland Pabel Regionales Rechenzentrum der Universität zu Köln (RRZK) Weyertal 121, Raum 3.07 D-50931 Köln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From shankbal at in.ibm.com Tue Apr 19 06:47:11 2016 From: shankbal at in.ibm.com (Shankar Balasubramanian) Date: Tue, 19 Apr 2016 11:17:11 +0530 Subject: [gpfsug-discuss] AFM Question In-Reply-To: References: Message-ID: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> SW mode does not support failover. IW does, so this will not work. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell To: gpfsug main discussion list Date: 04/18/2016 07:13 PM Subject: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to "failover" AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Tue Apr 19 07:01:07 2016 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Tue, 19 Apr 2016 11:31:07 +0530 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <5712087C.9060608@cc.in2p3.fr> References: <57111191.4050200@cc.in2p3.fr><201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> <5712087C.9060608@cc.in2p3.fr> Message-ID: <201604190602.u3J62bl314745928@d28relay02.in.ibm.com> Hi, AFM usually logs the following message at the gateway node if it cannot open the control file to read ACLs/EAs: AFM: Cannot find control file for file system fileset in the exported file system at home. ACLs and extended attributes will not be synchronized. Sparse files will have zeros written for holes. If the above message did not appear in the logs and AFM failed to bring over the ACLs, this may be a defect. Please open a PMR with supporting traces to debug this issue further. Thanks.
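One quick way to check whether that message was ever logged - a minimal sketch, assuming the default GPFS log location on the cache cluster's gateway node - is:

grep -i "Cannot find control file" /var/adm/ras/mmfs.log.latest

If the message does show up, the first thing to re-check is that "mmafmconfig enable" was run at home on the exported paths (as in the commands quoted below) before the cache filesets were created.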
Regards, Venkat ------------------------------------------------------------------- Venkateswara R Puvvada/India/IBM at IBMIN vpuvvada at in.ibm.com From: Loic Tortay To: gpfsug main discussion list Date: 04/16/2016 03:10 PM Subject: Re: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Sent by: gpfsug-discuss-bounces at spectrumscale.org On 16/04/2016 11:23, Venkateswara R Puvvada wrote: > Hi, > > Can you check if AFM was enabled at home cluster using "mmafmconfig > enable" command? What is the fileset mode are you using ? > Hello, AFM was enabled for the 2 home filesets/NFS exports with "mmafmconfig enable /fs1/zone1" & "mmafmconfig enable /fs1/zone2". The fileset mode is read-only for botch cache filesets. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | > Regards, > Venkat > ------------------------------------------------------------------- > Venkateswara R Puvvada/India/IBM at IBMIN > vpuvvada at in.ibm.com > > > > > From: Loic Tortay > To: gpfsug-discuss at spectrumscale.org > Date: 04/15/2016 09:35 PM > Subject: [gpfsug-discuss] Extended attributes and ACLs with > AFM-based "NFS migration" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello, > I have a testbed cluster where I have setup AFM for an incremental NFS > migration between 2 GPFS filesystems in the same cluster. This is with > Spectrum Scale 4.1.1-5 on Linux (CentOS 7). > > The documentation states: "On a GPFS data source, AFM moves all user > extended attributes and ACLs, and file sparseness is maintained." > (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) > > If I'm not mistaken, I have a GPFS data source (since I'm doing a > migration from GPFS to GPFS). > > While file sparseness is mostly maintained, user extended attributes and > ACLs in the source/home filesystem do not appear to be migrated to the > target/cache filesystem (same goes for basic tests with ACLs): > % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > getfattr: Removing leading '/' from absolute path names > # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > user.mfiles:sha2-256 > % > While on the target filesystem: > % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > % > > Am I missing something ? Is there another meaning to "user extended > attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? > [attachment "smime.p7s" deleted by Venkateswara R Puvvada/India/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Tue Apr 19 11:46:00 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Tue, 19 Apr 2016 10:46:00 +0000 Subject: [gpfsug-discuss] AFM Question In-Reply-To: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> Message-ID: Hi Shankar, Vic, Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. 
I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? Cheers, Luke. From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Shankar Balasubramanian Sent: 19 April 2016 06:47 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM Question SW mode does not support failover. IW does, so this will not work. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell > To: gpfsug main discussion list > Date: 04/18/2016 07:13 PM Subject: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Tue Apr 19 12:04:31 2016 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 19 Apr 2016 12:04:31 +0100 Subject: [gpfsug-discuss] AFM Question In-Reply-To: References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> Message-ID: Thanks Luke, The whole business of ?promoting? a cache from one type to another isn?t documented very well in the places that I am looking. I would be grateful to anyone with more info to share. I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading. Also Async DR means that I have to create snapshots (and worse delete them) on the ?working? side of a replication pair and this is something I?m not in a tearing hurry to do. Regards, Vic > On 19 Apr 2016, at 11:46, Luke Raimbach wrote: > > Hi Shankar, Vic, > > Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? > > Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. 
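For what it's worth, the only general mechanism I can find for changing a fileset's AFM mode is mmchfileset on an unlinked fileset - roughly the sketch below (the filesystem, fileset and junction names are invented, and I am genuinely unsure whether a local-update cache can be promoted to single-writer this way, so treat it as a guess rather than a recipe):

mmunlinkfileset gpfs1 cache01
mmchfileset gpfs1 cache01 -p afmMode=single-writer
mmlinkfileset gpfs1 cache01 -J /gpfs1/cache01
mmlsfileset gpfs1 cache01 --afm -L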
> > I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? > > Cheers, > Luke. > ? <> > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org ] On Behalf Of Shankar Balasubramanian > Sent: 19 April 2016 06:47 > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] AFM Question > > SW mode does not support failover. IW does, so this will not work. > > > Best Regards, > Shankar Balasubramanian > AFM & Async DR Development > IBM Systems > Bangalore - Embassy Golf Links > India > > > > > > From: Vic Cornell > > To: gpfsug main discussion list > > Date: 04/18/2016 07:13 PM > Subject: [gpfsug-discuss] AFM Question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi All, > Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? > > If it is not immediately obvious why this might be useful, see the following scenario: > > Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. > > The system hosting A fails and all data on fileset A is lost. > > Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. > > Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. > > So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? > > Cheers, > > Vic > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From shankbal at in.ibm.com Tue Apr 19 12:07:27 2016 From: shankbal at in.ibm.com (Shankar Balasubramanian) Date: Tue, 19 Apr 2016 16:37:27 +0530 Subject: [gpfsug-discuss] AFM Question In-Reply-To: References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> Message-ID: <201604191117.u3JBHYqi27525232@d28relay04.in.ibm.com> You can disable snapshots creation on DR by simply disabling RPO feature on DR. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell To: gpfsug main discussion list Date: 04/19/2016 04:34 PM Subject: Re: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Luke, The whole business of ?promoting? a cache from one type to another isn?t documented very well in the places that I am looking. I would be grateful to anyone with more info to share. I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading. 
Also Async DR means that I have to create snapshots (and worse delete them) on the ?working? side of a replication pair and this is something I?m not in a tearing hurry to do. Regards, Vic On 19 Apr 2016, at 11:46, Luke Raimbach wrote: Hi Shankar, Vic, Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? Cheers, Luke. From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Shankar Balasubramanian Sent: 19 April 2016 06:47 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM Question SW mode does not support failover. IW does, so this will not work. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell To: gpfsug main discussion list Date: 04/18/2016 07:13 PM Subject: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From viccornell at gmail.com Tue Apr 19 12:20:08 2016 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 19 Apr 2016 12:20:08 +0100 Subject: [gpfsug-discuss] AFM Question In-Reply-To: <201604191117.u3JBHYqi27525232@d28relay04.in.ibm.com> References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> <201604191117.u3JBHYqi27525232@d28relay04.in.ibm.com> Message-ID: <377D783D-27EE-4E40-9F23-047F73FAFDF4@gmail.com> Thanks Shankar - that was the bit I was looking for. Vic > On 19 Apr 2016, at 12:07, Shankar Balasubramanian wrote: > > You can disable snapshots creation on DR by simply disabling RPO feature on DR. > > > Best Regards, > Shankar Balasubramanian > AFM & Async DR Development > IBM Systems > Bangalore - Embassy Golf Links > India > > > > > > From: Vic Cornell > To: gpfsug main discussion list > Date: 04/19/2016 04:34 PM > Subject: Re: [gpfsug-discuss] AFM Question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Luke, > > The whole business of ?promoting? a cache from one type to another isn?t documented very well in the places that I am looking. I would be grateful to anyone with more info to share. > > I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading. > > Also Async DR means that I have to create snapshots (and worse delete them) on the ?working? side of a replication pair and this is something I?m not in a tearing hurry to do. > > > Regards, > > Vic > > On 19 Apr 2016, at 11:46, Luke Raimbach > wrote: > > Hi Shankar, Vic, > > Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? > > Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. > > I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? > > Cheers, > Luke. > <> > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org ] On Behalf Of Shankar Balasubramanian > Sent: 19 April 2016 06:47 > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] AFM Question > > SW mode does not support failover. IW does, so this will not work. > > > Best Regards, > Shankar Balasubramanian > AFM & Async DR Development > IBM Systems > Bangalore - Embassy Golf Links > India > > > > > > From: Vic Cornell > > To: gpfsug main discussion list > > Date: 04/18/2016 07:13 PM > Subject: [gpfsug-discuss] AFM Question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > Hi All, > Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? > > If it is not immediately obvious why this might be useful, see the following scenario: > > Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. > > The system hosting A fails and all data on fileset A is lost. > > Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. > > Admin uses mmafmctl to ?failover? 
AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. > > So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? > > Cheers, > > Vic > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tortay at cc.in2p3.fr Tue Apr 19 14:43:53 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Tue, 19 Apr 2016 15:43:53 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <201604190602.u3J62bl314745928@d28relay02.in.ibm.com> References: <57111191.4050200@cc.in2p3.fr> <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> <5712087C.9060608@cc.in2p3.fr> <201604190602.u3J62bl314745928@d28relay02.in.ibm.com> Message-ID: <57163619.6000500@cc.in2p3.fr> On 04/19/2016 08:01 AM, Venkateswara R Puvvada wrote: > Hi, > > AFM usually logs the following message at gateway node if it cannot open > control file to read ACLs/EAs. > > AFM: Cannot find control file for file system fileset > in the exported file system at home. > ACLs and extended attributes will not be synchronized. > Sparse files will have zeros written for holes. > > If the above message didn't not appear in logs and if AFM failed to bring > ACLs, this may be a defect. Please open PMR with supporting traces to > debug this issue further. Thanks. > Hello, There is no such message on any node in the test cluster. I have opened a PMR (50962,650,706), the "gpfs.snap" output is on ecurep.ibm.com in "/toibm/linux/gpfs.snap.50962.650.706.tar". BTW, it would probably be useful if "gpfs.snap" avoided doing a "find /var/mmfs ..." on AFM gateway nodes (or used appropriate find options), since the NFS mountpoints for AFM are in "/var/mmfs/afm" and their content is scanned too. This can be quite time consuming, for instance our test setup has several million files in the home filesystem. The "offending" 'find' is the one at line 3014 in the version of gpfs.snap included with Spectrum Scale 4.1.1-5. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | From SAnderson at convergeone.com Tue Apr 19 18:56:25 2016 From: SAnderson at convergeone.com (Shaun Anderson) Date: Tue, 19 Apr 2016 17:56:25 +0000 Subject: [gpfsug-discuss] Hello from Idaho Message-ID: <12ff9317b22e40ffb7d56e11bab19a58@NACR502.nacr.com> My name is Shaun Anderson and I work for an IBM Business Partner in Boise, ID, USA. Our main vertical is Health-Care but we do other work in other sectors as well. 
My experience with GPFS has been via the storage product line (Sonas, V7kU) and now with ESS/Spectrum Archive. I stumbled upon SpectrumScale.org today and am glad to have found it while I prepare to implement a cNFS/CTDB(SAMBA) cluster. Shaun Anderson Storage Architect M 214.263.7014 o 208.577.2112 [http://info.spanlink.com/hubfs/Email_images/C1-EmailSignature-logo_160px.png] NOTICE: This email message and any attachments hereto may contain confidential information. Any unauthorized review, use, disclosure, or distribution of such information is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy the original message and all copies of it. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2323 bytes Desc: image001.png URL: From bbanister at jumptrading.com Tue Apr 19 19:00:53 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 19 Apr 2016 18:00:53 +0000 Subject: [gpfsug-discuss] Hello from Idaho In-Reply-To: <12ff9317b22e40ffb7d56e11bab19a58@NACR502.nacr.com> References: <12ff9317b22e40ffb7d56e11bab19a58@NACR502.nacr.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB0609E1E6@CHI-EXCHANGEW1.w2k.jumptrading.com> Hello Shaun, welcome to the list. If you haven't already see the new Cluster Export Services (CES) facility in 4.1.1-X and 4.2.X-X releases of Spectrum Scale, which provides cross-protocol support of clustered NFS/SMB/etc, then I would highly suggest looking at that as a fully-supported solution over CTDB w/ SAMBA. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Shaun Anderson Sent: Tuesday, April 19, 2016 12:56 PM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Hello from Idaho My name is Shaun Anderson and I work for an IBM Business Partner in Boise, ID, USA. Our main vertical is Health-Care but we do other work in other sectors as well. My experience with GPFS has been via the storage product line (Sonas, V7kU) and now with ESS/Spectrum Archive. I stumbled upon SpectrumScale.org today and am glad to have found it while I prepare to implement a cNFS/CTDB(SAMBA) cluster. Shaun Anderson Storage Architect M 214.263.7014 o 208.577.2112 [http://info.spanlink.com/hubfs/Email_images/C1-EmailSignature-logo_160px.png] NOTICE: This email message and any attachments hereto may contain confidential information. Any unauthorized review, use, disclosure, or distribution of such information is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy the original message and all copies of it. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2323 bytes Desc: image001.png URL: From vpuvvada at in.ibm.com Wed Apr 20 12:04:42 2016 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 20 Apr 2016 16:34:42 +0530 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <57163619.6000500@cc.in2p3.fr> References: <57111191.4050200@cc.in2p3.fr><201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com><5712087C.9060608@cc.in2p3.fr><201604190602.u3J62bl314745928@d28relay02.in.ibm.com> <57163619.6000500@cc.in2p3.fr> Message-ID: <201604201114.u3KBEnww50331902@d28relay01.in.ibm.com> Hi, There is an issue with gpfs.snap which scans AFM internal mounts. This is issue got fixed in later releases. To workaround this problem, 1. cp /usr/lpp/mmfs/bin/gpfs.snap /usr/lpp/mmfs/bin/gpfs.snap.orig 2. Change this line : ccrSnapExcludeListRaw=$($find /var/mmfs \ \( -name "proxy-server*" -o -name "keystone*" -o -name "openrc*" \) \ 2>/dev/null) to this: ccrSnapExcludeListRaw=$($find /var/mmfs -xdev \ \( -name "proxy-server*" -o -name "keystone*" -o -name "openrc*" \) \ 2>/dev/null) Regards, Venkat ------------------------------------------------------------------- Venkateswara R Puvvada/India/IBM at IBMIN vpuvvada at in.ibm.com +91-80-41777734 From: Loic Tortay To: gpfsug main discussion list Date: 04/19/2016 07:13 PM Subject: Re: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Sent by: gpfsug-discuss-bounces at spectrumscale.org On 04/19/2016 08:01 AM, Venkateswara R Puvvada wrote: > Hi, > > AFM usually logs the following message at gateway node if it cannot open > control file to read ACLs/EAs. > > AFM: Cannot find control file for file system fileset > in the exported file system at home. > ACLs and extended attributes will not be synchronized. > Sparse files will have zeros written for holes. > > If the above message didn't not appear in logs and if AFM failed to bring > ACLs, this may be a defect. Please open PMR with supporting traces to > debug this issue further. Thanks. > Hello, There is no such message on any node in the test cluster. I have opened a PMR (50962,650,706), the "gpfs.snap" output is on ecurep.ibm.com in "/toibm/linux/gpfs.snap.50962.650.706.tar". BTW, it would probably be useful if "gpfs.snap" avoided doing a "find /var/mmfs ..." on AFM gateway nodes (or used appropriate find options), since the NFS mountpoints for AFM are in "/var/mmfs/afm" and their content is scanned too. This can be quite time consuming, for instance our test setup has several million files in the home filesystem. The "offending" 'find' is the one at line 3014 in the version of gpfs.snap included with Spectrum Scale 4.1.1-5. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 13:15:07 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 12:15:07 +0000 Subject: [gpfsug-discuss] mmbackup and filenames Message-ID: Hi, We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. >From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work? Thanks Simon From jonathan at buzzard.me.uk Wed Apr 20 13:28:18 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 13:28:18 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: Message-ID: <1461155298.1434.83.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, > on one we run CES/SMB and run a sync and share tool as well. This means we > sometimes end up with filenames containing characters like newline (e.g. > From OSX clients). Mmbackup fails on these filenames, any suggestions on > how we can get it to work? > OMG, it's like seven/eight years since I reported that as a bug in mmbackup and they *STILL* haven't fixed it!!! I bet it still breaks with back ticks and other wacko characters too. I seem to recall it failed with very long path lengths as well; specifically ones longer than MAX_PATH (google it MAX_PATH is not something you can rely on). Back then mmbackup would just fail completely and not back anything up. Is it still the same or is it just failing on the files with wacko characters? I concluded back then that mmbackup was not suitable for production use. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at us.ibm.com Wed Apr 20 13:38:21 2016 From: oehmes at us.ibm.com (Sven Oehme) Date: Wed, 20 Apr 2016 12:38:21 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: Message-ID: <201604201239.u3KCdrAb016643@d01av04.pok.ibm.com> Which version of gpfs are you running on this cluster ? Sent from IBM Verse Simon Thompson (Research Computing - IT Services) --- [gpfsug-discuss] mmbackup and filenames --- From:"Simon Thompson (Research Computing - IT Services)" To:gpfsug-discuss at spectrumscale.orgDate:Wed, Apr 20, 2016 5:15 AMSubject:[gpfsug-discuss] mmbackup and filenames Hi,We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems,on one we run CES/SMB and run a sync and share tool as well. This means wesometimes end up with filenames containing characters like newline (e.g.From OSX clients). Mmbackup fails on these filenames, any suggestions onhow we can get it to work?ThanksSimon_______________________________________________gpfsug-discuss mailing listgpfsug-discuss at spectrumscale.orghttp://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 13:42:16 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 12:42:16 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <201604201239.u3KCdrAb016643@d01av04.pok.ibm.com> References: , <201604201239.u3KCdrAb016643@d01av04.pok.ibm.com> Message-ID: This is a 4.2 cluster with 7.1.3 protect client. 
(Probably 4.2.0.0) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sven Oehme [oehmes at us.ibm.com] Sent: 20 April 2016 13:38 To: gpfsug main discussion list Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] mmbackup and filenames Which version of gpfs are you running on this cluster ? Sent from IBM Verse Simon Thompson (Research Computing - IT Services) --- [gpfsug-discuss] mmbackup and filenames --- From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug-discuss at spectrumscale.org Date: Wed, Apr 20, 2016 5:15 AM Subject: [gpfsug-discuss] mmbackup and filenames ________________________________ Hi, We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. >From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 20 15:42:29 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 20 Apr 2016 10:42:29 -0400 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: Message-ID: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. Each path must be specified on a single line. A line can contain only one path. Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 16:05:16 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 15:05:16 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <0F66BEED-E30F-410A-BE20-2F706A5BAC9B@vanderbilt.edu> All, I would like to see this issue get resolved as it has caused us problems as well. We recently had an issue that necessitated us restoring 9.6 million files (out of 260 million) in a filesystem. We were able to restore a little over 8 million of those files relatively easily, but more than a million have been problematic due to various special characters in the filenames. 
I think there needs to be a recognition that TSM is going to be asked to back up filesystems that are used by Windows and Mac clients via NFS, SAMBA/CTDB, CES, etc., and that the users of those clients cannot be expected to not choose filenames that Unix-savvy users would never in a million years choose. And since I had to write some scripts to generate md5sums of files we restored and therefore had to deal with things in filenames that had me asking ?what in the world were they thinking?!?", I fully recognize that this is not an easy nut to crack. My 2 cents worth? Kevin On Apr 20, 2016, at 9:42 AM, Marc A Kaplan > wrote: The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:15:10 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:15:10 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. 
Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 16:19:38 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 15:19:38 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:27:08 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:27:08 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... 
IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 16:28:47 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 15:28:47 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Well what a lame restriction... I don't understand why all IBM products don't have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... 
The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 20 16:35:04 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 20 Apr 2016 11:35:04 -0400 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <201604201535.u3KFZC28024194@d03av04.boulder.ibm.com> >From a computer science point of view, this is a simple matter of programming. Provide yet-another-option on filelist processing that supports encoding or escaping of special characters. Pick your poison! We and many others have worked through this issue and provided solutions in products apart from TSM. In Spectrum Scale Filesystem, we code filelists with escapes \n and \\. Or if you prefer, use the ESCAPE option. See the Advanced Admin Guide, on or near page 24 in the ILM chapter 2. IBM is a very large organization and sometimes, for some issues, customers have the best, most effective means of communicating requirements to particular product groups within IBM. 
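For a rough idea of what that looks like on the policy side, a sketch only (the exact rule grammar and the set of characters you ask to have encoded are in the Advanced Admin Guide, and the helper script name here is invented purely for illustration):

    RULE 'ext'  EXTERNAL LIST 'allfiles' EXEC '/usr/local/bin/handle-list' ESCAPE '%'
    RULE 'list' LIST 'allfiles'

With ESCAPE in effect the generated file lists use RFC3986-style percent encoding, so a newline in a path arrives as %0A rather than as a literal line break, and the list stays one record per line.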
-------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:41:00 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:41:00 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction? I don?t understand why all IBM products don?t have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. 
* Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline).
* By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON...

IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup !

________________________________
Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonathan at buzzard.me.uk Wed Apr 20 16:46:17 2016
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Wed, 20 Apr 2016 16:46:17 +0100
Subject: [gpfsug-discuss] mmbackup and filenames
In-Reply-To: 
References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com>
Message-ID: <1461167177.1434.89.camel@buzzard.phy.strath.ac.uk>

On Wed, 2016-04-20 at 15:15 +0000, Simon Thompson (Research Computing - IT Services) wrote:
[SNIP]
> Who should we approach at IBM as a user community to get this on the
> TSM fix list?
>

I personally raised this with IBM seven or eight years ago and was told that they were aware of the problem and it would be fixed. Clearly they either have not fixed it, or they fixed it and then let it break again, and thus have never heard of a unit test.

The basic problem back then was that mmbackup used various standard Unix text processing utilities and was doomed to break if you put "special" but perfectly valid characters in your file names (a quick shell demonstration of that failure mode follows below).

JAB.

-- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom.
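For anyone who wants to see that failure mode for themselves, a minimal illustration (nothing GPFS-specific, just a scratch directory and a POSIX shell):

    $ mkdir /tmp/newline-demo && cd /tmp/newline-demo
    $ touch "$(printf 'bad\nname')"                   # one file whose name contains a newline
    $ find . -type f | wc -l                          # newline-delimited output: the one file counts as two
    2
    $ find . -type f -print0 | tr -cd '\0' | wc -c    # NUL-delimited output: counted correctly
    1

Any pipeline that assumes "one line = one path" miscounts in exactly the same way, which is why NUL-terminated or escaped file lists are the usual fix.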
From r.horton at imperial.ac.uk Wed Apr 20 16:58:54 2016
From: r.horton at imperial.ac.uk (Robert Horton)
Date: Wed, 20 Apr 2016 16:58:54 +0100
Subject: [gpfsug-discuss] mmbackup and filenames
In-Reply-To: 
References: 
Message-ID: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk>

On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote:
> We use mmbackup with Spectrum Protect (TSM!) to backup our
> file-systems, on one we run CES/SMB and run a sync and share tool as well.
> This means we sometimes end up with filenames containing characters like
> newline (e.g. From OSX clients). Mmbackup fails on these filenames, any
> suggestions on how we can get it to work?

I've not had to do anything with TSM for a couple of years, but when I did, my workaround was a wrapper that called mmbackup, parsed the output, and for any files mmbackup couldn't handle because of non-ASCII characters called the TSM backup command directly on the whole directory. This does mean some stuff is getting backed up more than necessary, but if it's only a handful of files it's a reasonable workaround.

Rob
-- Robert Horton HPC Systems Support Analyst Imperial College London +44 (0) 20 7594 5759

From scottcumbie at dynamixgroup.com Wed Apr 20 17:23:08 2016
From: scottcumbie at dynamixgroup.com (Scott Cumbie)
Date: Wed, 20 Apr 2016 16:23:08 +0000
Subject: Re: [gpfsug-discuss] mmbackup and filenames
In-Reply-To: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk>
References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk>
Message-ID: <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com>

You should open a PMR. This is not a 'feature' request, this is a failure of the code to work as it should.

Scott Cumbie, Dynamix Group scottcumbie at dynamixgroup.com Office: (336) 765-9290 Cell: (336) 782-1590

On Apr 20, 2016, at 11:58 AM, Robert Horton wrote:

On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work?

I've not had to do anything with TSM for a couple of years, but when I did, my workaround was a wrapper that called mmbackup, parsed the output, and for any files mmbackup couldn't handle because of non-ASCII characters called the TSM backup command directly on the whole directory. This does mean some stuff is getting backed up more than necessary, but if it's only a handful of files it's a reasonable workaround.

Rob
-- Robert Horton HPC Systems Support Analyst Imperial College London +44 (0) 20 7594 5759
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jonathan at buzzard.me.uk Wed Apr 20 19:26:27 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 19:26:27 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> Message-ID: <5717C9D3.8050501@buzzard.me.uk> On 20/04/16 17:23, Scott Cumbie wrote: > You should open a PMR. This is not a ?feature? request, this is a > failure of the code to work as it should. > I did at least seven years ago. I shall see if I can find the reference in my old notebooks tomorrow. Unfortunately one has gone missing so I might not have the reference. I do however wonder if the newlines really are newlines and not some UTF multibyte character that looks like a newline when you parse it as ASCII/ISO-8859-1 or some other legacy encoding? In my experience you have to try really really hard to actually get a newline into a file name. Mostly because the GUI will interpret pressing the return/enter key to think you have finished typing the file name rather than inserting a newline into the file name. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From bbanister at jumptrading.com Wed Apr 20 19:28:54 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 18:28:54 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> I voted for this! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction... I don't understand why all IBM products don't have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! 
-B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 19:42:10 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 18:42:10 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <4F3BBBF1-34BF-4FE6-8FB4-D21430C4BFCE@vanderbilt.edu> Me too! And I have to say (and those of you in the U.S. will understand this best) that it was kind of nice to really *want* to cast a vote instead of saying, ?I sure wish ?none of the above? was an option?? ;-) Kevin On Apr 20, 2016, at 1:28 PM, Bryan Banister > wrote: I voted for this! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction? I don?t understand why all IBM products don?t have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... 
and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Wed Apr 20 19:56:42 2016 From: viccornell at gmail.com (viccornell at gmail.com) Date: Wed, 20 Apr 2016 19:56:42 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <584AAC36-28C1-4138-893E-DFC00760C8B0@gmail.com> Me too. Sent from my iPhone > On 20 Apr 2016, at 19:28, Bryan Banister wrote: > > I voted for this! > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:41 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > OK, I might have managed to create a public RFE for this: > > https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] > Sent: 20 April 2016 16:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Well what a lame restriction? 
I don?t understand why all IBM products don?t have public RFE options, > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:27 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] > Sent: 20 April 2016 16:19 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:15 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Hi Mark, > > I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... > > I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. > > Who should we approach at IBM as a user community to get this on the TSM fix list? > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] > Sent: 20 April 2016 15:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: > > http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html > > ... > The files (entries) listed in the filelist must adhere to the following rules: > Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. > Each path must be specified on a single line. A line can contain only one path. > Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). > By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... > IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 20:02:08 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 19:02:08 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 20 20:05:26 2016 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 20 Apr 2016 19:05:26 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: It?s there for sending data to support, primarily. But we do make use of it for report generation. -- Jonathan Fosburgh Principal Application Systems Analyst Storage Team IT Operations jfosburg at mdanderson.org (713) 745-9346 From: > on behalf of Bryan Banister > Reply-To: gpfsug main discussion list > Date: Wednesday, April 20, 2016 at 2:02 PM To: "gpfsug main discussion list (gpfsug-discuss at spectrumscale.org)" > Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dan.Foster at bristol.ac.uk Wed Apr 20 21:23:15 2016 From: Dan.Foster at bristol.ac.uk (Dan Foster) Date: Wed, 20 Apr 2016 21:23:15 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... 
game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: On 20 April 2016 at 20:02, Bryan Banister wrote: > Apparently, though not documented in man pages or any of the GPFS docs that > I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output columns > with your favorite bash/awk/python/magic. This is really useful, thanks for sharing! :) -- Dan Foster | Senior Storage Systems Administrator Advanced Computing Research Centre, University of Bristol From bevans at pixitmedia.com Wed Apr 20 21:38:42 2016 From: bevans at pixitmedia.com (Barry Evans) Date: Wed, 20 Apr 2016 21:38:42 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <5717E8D2.2080107@pixitmedia.com> If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS docs > that I?ve read (at least that I recall), there is a ?-Y? option to > many/most GPFS commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From duersch at us.ibm.com Wed Apr 20 21:43:11 2016 From: duersch at us.ibm.com (Steve Duersch) Date: Wed, 20 Apr 2016 16:43:11 -0400 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: References: Message-ID: We try our hardest to keep those columns static. Rarely are they changed. We are aware that folks are programming against them and we don't rearrange where things are. Steve Duersch Spectrum Scale (GPFS) FVTest IBM Poughkeepsie, New York >If you build a monitoring pipeline using -Y output, make sure you test >between revisions before upgrading. The columns do have a tendency to >change from time to time. > >Cheers, >Barry >On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS docs > that I?ve read (at least that I recall), there is a ?-Y? option to > many/most GPFS commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 21:46:04 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 20:46:04 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717E8D2.2080107@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Wed Apr 20 22:12:10 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Wed, 20 Apr 2016 22:12:10 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <5717F0AA.8050901@pixitmedia.com> Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so that you can > still programmatically determine fields of interest? this is the best! > > I recommend adding ?-Y? option documentation to all supporting GPFS > commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > If you build a monitoring pipeline using -Y output, make sure you test > between revisions before upgrading. 
The columns do have a tendency to > change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS > docs that I?ve read (at least that I recall), there is a ?-Y? > option to many/most GPFS commands that provides output in machine > readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, confidential or > privileged information. If you are not the intended recipient, you > are hereby notified that any review, dissemination or copying of > this email is strictly prohibited, and to please notify the sender > immediately and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or error-free. The > Company, therefore, does not make any guarantees as to the > completeness or accuracy of this email or any attachments. This > email is for informational purposes only and does not constitute a > recommendation, offer, request or solicitation of any kind to buy, > sell, subscribe, redeem or perform any type of transaction of a > financial product. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. 
If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Wed Apr 20 22:18:28 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Wed, 20 Apr 2016 22:18:28 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717F0AA.8050901@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> Message-ID: <5717F224.2010100@pixitmedia.com> So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since ... er > .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands supported > -Y, I might even FedEX beer. > > Jez > > > On 20/04/16 21:46, Bryan Banister wrote: >> >> What?s nice is that the ?-Y? output provides a HEADER so that you can >> still programmatically determine fields of interest? this is the best! >> >> I recommend adding ?-Y? option documentation to all supporting GPFS >> commands for others to be informed. >> >> -Bryan >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >> *Barry Evans >> *Sent:* Wednesday, April 20, 2016 3:39 PM >> *To:* gpfsug-discuss at spectrumscale.org >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> If you build a monitoring pipeline using -Y output, make sure you >> test between revisions before upgrading. The columns do have a >> tendency to change from time to time. >> >> Cheers, >> Barry >> >> On 20/04/2016 20:02, Bryan Banister wrote: >> >> Apparently, though not documented in man pages or any of the GPFS >> docs that I?ve read (at least that I recall), there is a ?-Y? >> option to many/most GPFS commands that provides output in machine >> readable fashion?. >> >> That?s right kids? no more parsing obscure, often changed output >> columns with your favorite bash/awk/python/magic. >> >> Why IBM would not document this is beyond me, >> >> -B >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, confidential or >> privileged information. 
If you are not the intended recipient, >> you are hereby notified that any review, dissemination or copying >> of this email is strictly prohibited, and to please notify the >> sender immediately and destroy this email and any attachments. >> Email transmission cannot be guaranteed to be secure or >> error-free. The Company, therefore, does not make any guarantees >> as to the completeness or accuracy of this email or any >> attachments. This email is for informational purposes only and >> does not constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, redeem or >> perform any type of transaction of a financial product. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> This email is confidential in that it is intended for the exclusive >> attention of the addressee(s) indicated. If you are not the intended >> recipient, this email should not be read or disclosed to any other >> person. Please notify the sender immediately and delete this email >> from your computer system. Any opinions expressed are not necessarily >> those of the company from which this email was sent and, whilst to >> the best of our knowledge no viruses or defects exist, no >> responsibility can be accepted for any loss or damage arising from >> its receipt or subsequent use of this email. >> >> >> ------------------------------------------------------------------------ >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, confidential or >> privileged information. If you are not the intended recipient, you >> are hereby notified that any review, dissemination or copying of this >> email is strictly prohibited, and to please notify the sender >> immediately and destroy this email and any attachments. Email >> transmission cannot be guaranteed to be secure or error-free. The >> Company, therefore, does not make any guarantees as to the >> completeness or accuracy of this email or any attachments. This email >> is for informational purposes only and does not constitute a >> recommendation, offer, request or solicitation of any kind to buy, >> sell, subscribe, redeem or perform any type of transaction of a >> financial product. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 22:24:01 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 21:24:01 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717F0AA.8050901@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> Message-ID: <3360F57F-BC94-4116-82F6-9E1CDFC2919F@vanderbilt.edu> All, Does the unit of measure for *all* fields default to the same as if you ran the command without "-Y"? For example: mmlsquota:user:HEADER:version:reserved:reserved:filesystemName:quotaType:id:name:blockUsage:blockQuota:blockLimit:blockInDoubt:blockGrace:filesUsage:filesQuota:filesLimit:filesInDoubt:filesGrace:remarks:fid:filesetname: blockUsage, blockLimit, and blockInDoubt are in KB, which makes sense, since that's the default. But what about blockGrace if a user is over quota? Will it also contain output in varying units of measure ("6 days" or "2 hours" or "expired") just like without the "-Y"? I think this points to Bryan being right "-Y" should be documented somewhere / somehow. Thanks... Kevin On Apr 20, 2016, at 4:12 PM, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments.
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at pixitmedia.com Wed Apr 20 22:58:27 2016 From: bevans at pixitmedia.com (Barry Evans) Date: Wed, 20 Apr 2016 22:58:27 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... 
game changer In-Reply-To: <5717F224.2010100@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> Message-ID: <5717FB83.6020805@pixitmedia.com> Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did the > original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: >> Indeed. >> >> jtucker at elmo:~$ mmlsfs all -Y >> mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: >> >> I must say I've not seen any headers increment above 0:1 since ... er >> .. 3.3(?), so they're pretty static. >> >> Now, if only mmlspool supported -Y ... or if _all_ commands supported >> -Y, I might even FedEX beer. >> >> Jez >> >> >> On 20/04/16 21:46, Bryan Banister wrote: >>> >>> What?s nice is that the ?-Y? output provides a HEADER so that you >>> can still programmatically determine fields of interest? this is the >>> best! >>> >>> I recommend adding ?-Y? option documentation to all supporting GPFS >>> commands for others to be informed. >>> >>> -Bryan >>> >>> *From:*gpfsug-discuss-bounces at spectrumscale.org >>> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >>> *Barry Evans >>> *Sent:* Wednesday, April 20, 2016 3:39 PM >>> *To:* gpfsug-discuss at spectrumscale.org >>> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >>> didn't... game changer >>> >>> If you build a monitoring pipeline using -Y output, make sure you >>> test between revisions before upgrading. The columns do have a >>> tendency to change from time to time. >>> >>> Cheers, >>> Barry >>> >>> On 20/04/2016 20:02, Bryan Banister wrote: >>> >>> Apparently, though not documented in man pages or any of the >>> GPFS docs that I?ve read (at least that I recall), there is a >>> ?-Y? option to many/most GPFS commands that provides output in >>> machine readable fashion?. >>> >>> That?s right kids? no more parsing obscure, often changed output >>> columns with your favorite bash/awk/python/magic. >>> >>> Why IBM would not document this is beyond me, >>> >>> -B >>> >>> ------------------------------------------------------------------------ >>> >>> >>> Note: This email is for the confidential use of the named >>> addressee(s) only and may contain proprietary, confidential or >>> privileged information. If you are not the intended recipient, >>> you are hereby notified that any review, dissemination or >>> copying of this email is strictly prohibited, and to please >>> notify the sender immediately and destroy this email and any >>> attachments. Email transmission cannot be guaranteed to be >>> secure or error-free. The Company, therefore, does not make any >>> guarantees as to the completeness or accuracy of this email or >>> any attachments. This email is for informational purposes only >>> and does not constitute a recommendation, offer, request or >>> solicitation of any kind to buy, sell, subscribe, redeem or >>> perform any type of transaction of a financial product. 
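
To make the HEADER-keyed approach quoted above concrete, here is a minimal Python sketch. It is only an illustration, not an IBM-supplied API: the mmlsfs HEADER record is the one shown earlier in the thread, the data rows are invented examples in the same shape, and the helper name parse_y is made up for this sketch. Looking fields up by the names carried in the HEADER, rather than by fixed column positions, is what keeps a pipeline working when columns are added or reordered between releases.

# Sketch only: parse GPFS "-Y" (colon-delimited) output by keying every data
# record off its HEADER record, so fields are addressed by name, not position.
# The HEADER line is the mmlsfs one quoted in this thread; the data rows are
# invented examples in the same shape, purely for illustration.
SAMPLE = """\
mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks:
mmlsfs::0:1:::gpfs0:minFragmentSize:8192:
mmlsfs::0:1:::gpfs0:defaultMetadataReplicas:2:
"""

def parse_y(text):
    """Yield one dict per data record, keyed by the HEADER field names."""
    headers = {}                      # (command, sub-table) -> list of field names
    for line in text.splitlines():
        fields = line.split(':')
        if len(fields) < 4:
            continue
        key = (fields[0], fields[1])  # some commands emit several sub-tables
        if fields[2] == 'HEADER':
            headers[key] = fields[3:]
        elif key in headers:
            yield dict(zip(headers[key], fields[3:]))

for rec in parse_y(SAMPLE):
    print('%s %s %s' % (rec['deviceName'], rec['fieldName'], rec['data']))
# gpfs0 minFragmentSize 8192
# gpfs0 defaultMetadataReplicas 2

The same function can be fed the output of any command that honours -Y; note that some field values may be percent-encoded, and this sketch makes no attempt to decode them.
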
>>> >>> >>> >>> _______________________________________________ >>> >>> gpfsug-discuss mailing list >>> >>> gpfsug-discuss at spectrumscale.org >>> >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> This email is confidential in that it is intended for the exclusive >>> attention of the addressee(s) indicated. If you are not the intended >>> recipient, this email should not be read or disclosed to any other >>> person. Please notify the sender immediately and delete this email >>> from your computer system. Any opinions expressed are not >>> necessarily those of the company from which this email was sent and, >>> whilst to the best of our knowledge no viruses or defects exist, no >>> responsibility can be accepted for any loss or damage arising from >>> its receipt or subsequent use of this email. >>> >>> >>> ------------------------------------------------------------------------ >>> >>> Note: This email is for the confidential use of the named >>> addressee(s) only and may contain proprietary, confidential or >>> privileged information. If you are not the intended recipient, you >>> are hereby notified that any review, dissemination or copying of >>> this email is strictly prohibited, and to please notify the sender >>> immediately and destroy this email and any attachments. Email >>> transmission cannot be guaranteed to be secure or error-free. The >>> Company, therefore, does not make any guarantees as to the >>> completeness or accuracy of this email or any attachments. This >>> email is for informational purposes only and does not constitute a >>> recommendation, offer, request or solicitation of any kind to buy, >>> sell, subscribe, redeem or perform any type of transaction of a >>> financial product. >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com > > -- > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. 
Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 23:02:50 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 22:02:50 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717FB83.6020805@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A3684@CHI-EXCHANGEW1.w2k.jumptrading.com> That's a separate topic from having GPFS CLI commands output machine readable format, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 4:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. 
Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Sanchez at deshaw.com Wed Apr 20 23:06:18 2016 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 20 Apr 2016 22:06:18 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717FB83.6020805@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> Message-ID: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn't have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either -Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. 
Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 23:08:39 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 22:08:39 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Sounds like a candidate for the GPFS UG Git Hub!! 
https://github.com/gpfsug/gpfsug-tools -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Sanchez, Paul Sent: Wednesday, April 20, 2016 5:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn't have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either -Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. 
Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Thu Apr 21 01:05:39 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Thu, 21 Apr 2016 01:05:39 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <57181953.9090506@pixitmedia.com> I'd suggest you attend the UK UG in May then ... ref Agenda: http://www.gpfsug.org/may-2016-uk-user-group/ On 20/04/16 23:08, Bryan Banister wrote: > > Sounds like a candidate for the GPFS UG Git Hub!! > > https://github.com/gpfsug/gpfsug-tools > > -B > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of > *Sanchez, Paul > *Sent:* Wednesday, April 20, 2016 5:06 PM > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > +1 to a real python API. > > We have written our own, albeit incomplete, library to expose most of > what we need. We would be happy to share some general ideas on what > should be included, but a real IBM implementation wouldn?t have to do > what we did. (Think lots of subprocess.Popen + subprocess.communicate > and shredding the output of mm commands. And yes, we wrote a parser > which could shred the output of either ?Y or tabular format.) > > Thx > > Paul > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 5:58 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... 
game changer > > Someone should just make a python API that just abstracts all of this > > On 20/04/2016 22:18, Jez Tucker wrote: > > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did > the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: > > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since > ... er .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands > supported -Y, I might even FedEX beer. > > Jez > > On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so > that you can still programmatically determine fields of > interest? this is the best! > > I recommend adding ?-Y? option documentation to all > supporting GPFS commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On > Behalf Of *Barry Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? > I sure didn't... game changer > > If you build a monitoring pipeline using -Y output, make > sure you test between revisions before upgrading. The > columns do have a tendency to change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any > of the GPFS docs that I?ve read (at least that I > recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable > fashion?. > > That?s right kids? no more parsing obscure, often > changed output columns with your favorite > bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the > named addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not > the intended recipient, you are hereby notified that > any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender > immediately and destroy this email and any > attachments. Email transmission cannot be guaranteed > to be secure or error-free. The Company, therefore, > does not make any guarantees as to the completeness or > accuracy of this email or any attachments. This email > is for informational purposes only and does not > constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, > redeem or perform any type of transaction of a > financial product. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you > are not the intended recipient, this email should not be > read or disclosed to any other person. 
Please notify the > sender immediately and delete this email from your > computer system. Any opinions expressed are not > necessarily those of the company from which this email was > sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for > any loss or damage arising from its receipt or subsequent > use of this email. > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not the > intended recipient, you are hereby notified that any > review, dissemination or copying of this email is strictly > prohibited, and to please notify the sender immediately > and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or > error-free. The Company, therefore, does not make any > guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational > purposes only and does not constitute a recommendation, > offer, request or solicitation of any kind to buy, sell, > subscribe, redeem or perform any type of transaction of a > financial product. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you are not > the intended recipient, this email should not be read or disclosed > to any other person. Please notify the sender immediately and > delete this email from your computer system. Any opinions > expressed are not necessarily those of the company from which this > email was sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for any loss > or damage arising from its receipt or subsequent use of this email. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Barry Evans > Technical Director & Co-Founder > Pixit Media > > http://www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. 
If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jez.tucker at gpfsug.org Thu Apr 21 01:10:07 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Thu, 21 Apr 2016 01:10:07 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <57181A5F.4070909@gpfsug.org> Btw. If anyone wants to add anything to the UG github, just send a pull request. Jez On 20/04/16 23:08, Bryan Banister wrote: > > Sounds like a candidate for the GPFS UG Git Hub!! > > https://github.com/gpfsug/gpfsug-tools > > -B > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of > *Sanchez, Paul > *Sent:* Wednesday, April 20, 2016 5:06 PM > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > +1 to a real python API. > > We have written our own, albeit incomplete, library to expose most of > what we need. We would be happy to share some general ideas on what > should be included, but a real IBM implementation wouldn?t have to do > what we did. (Think lots of subprocess.Popen + subprocess.communicate > and shredding the output of mm commands. And yes, we wrote a parser > which could shred the output of either ?Y or tabular format.) 
> > Thx > > Paul > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 5:58 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > Someone should just make a python API that just abstracts all of this > > On 20/04/2016 22:18, Jez Tucker wrote: > > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did > the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: > > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since > ... er .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands > supported -Y, I might even FedEX beer. > > Jez > > On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so > that you can still programmatically determine fields of > interest? this is the best! > > I recommend adding ?-Y? option documentation to all > supporting GPFS commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On > Behalf Of *Barry Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? > I sure didn't... game changer > > If you build a monitoring pipeline using -Y output, make > sure you test between revisions before upgrading. The > columns do have a tendency to change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any > of the GPFS docs that I?ve read (at least that I > recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable > fashion?. > > That?s right kids? no more parsing obscure, often > changed output columns with your favorite > bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the > named addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not > the intended recipient, you are hereby notified that > any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender > immediately and destroy this email and any > attachments. Email transmission cannot be guaranteed > to be secure or error-free. The Company, therefore, > does not make any guarantees as to the completeness or > accuracy of this email or any attachments. This email > is for informational purposes only and does not > constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, > redeem or perform any type of transaction of a > financial product. 
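For anyone who wants to script against -Y output before a real API appears, a minimal sketch of the shredding approach described above could look like the code below. This is illustrative only (not one of the in-house libraries mentioned in this thread); it assumes the colon-delimited HEADER/data layout shown in the mmlsfs example and does not attempt to decode any escaped characters in the values.

#!/usr/bin/env python
# Minimal sketch: turn "-Y" output from an mm command into a list of dicts,
# using the HEADER record(s) to name the fields. Rows are keyed on the first
# two colon-separated fields so commands that emit several record types keep
# their sections separate. Values embedding ':' may be escaped by the mm
# commands; decoding them is deliberately left out of this sketch.
import subprocess

def shred_y(cmd):
    out = subprocess.check_output(cmd, universal_newlines=True)
    headers = {}   # (command, section) -> list of field names
    rows = []
    for line in out.splitlines():
        cols = line.strip().split(':')
        if len(cols) < 3:
            continue
        key = (cols[0], cols[1])
        if cols[2] == 'HEADER':
            headers[key] = cols
        elif key in headers:
            rows.append(dict(zip(headers[key], cols)))
    return rows

if __name__ == '__main__':
    for row in shred_y(['mmlsfs', 'all', '-Y']):
        print('%s %s %s' % (row.get('deviceName'),
                            row.get('fieldName'),
                            row.get('data')))

Because rows are keyed by the field names in the HEADER record rather than by column position, a pipeline built this way copes a little better with the column changes between releases that Barry warns about, as long as the fields you care about keep their names.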
> > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you > are not the intended recipient, this email should not be > read or disclosed to any other person. Please notify the > sender immediately and delete this email from your > computer system. Any opinions expressed are not > necessarily those of the company from which this email was > sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for > any loss or damage arising from its receipt or subsequent > use of this email. > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not the > intended recipient, you are hereby notified that any > review, dissemination or copying of this email is strictly > prohibited, and to please notify the sender immediately > and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or > error-free. The Company, therefore, does not make any > guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational > purposes only and does not constitute a recommendation, > offer, request or solicitation of any kind to buy, sell, > subscribe, redeem or perform any type of transaction of a > financial product. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you are not > the intended recipient, this email should not be read or disclosed > to any other person. Please notify the sender immediately and > delete this email from your computer system. Any opinions > expressed are not necessarily those of the company from which this > email was sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for any loss > or damage arising from its receipt or subsequent use of this email. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Barry Evans > Technical Director & Co-Founder > Pixit Media > > http://www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. 
Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stijn.deweirdt at ugent.be Thu Apr 21 07:49:03 2016 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 21 Apr 2016 08:49:03 +0200 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <57181A5F.4070909@gpfsug.org> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> <57181A5F.4070909@gpfsug.org> Message-ID: <571877DF.6070600@ugent.be> we have a parser, but not an actual API, in case someone is interested. https://github.com/hpcugent/vsc-filesystems/blob/master/lib/vsc/filesystem/gpfs.py anyway, from my experience, the best man page for the mm* commands is reading the bash scripts themself, they often contain other useful but undocumented options ;) stijn On 04/21/2016 02:10 AM, Jez Tucker wrote: > Btw. If anyone wants to add anything to the UG github, just send a pull > request. > > Jez > > On 20/04/16 23:08, Bryan Banister wrote: >> >> Sounds like a candidate for the GPFS UG Git Hub!! >> >> https://github.com/gpfsug/gpfsug-tools >> >> -B >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >> *Sanchez, Paul >> *Sent:* Wednesday, April 20, 2016 5:06 PM >> *To:* gpfsug main discussion list >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> +1 to a real python API. >> >> We have written our own, albeit incomplete, library to expose most of >> what we need. We would be happy to share some general ideas on what >> should be included, but a real IBM implementation wouldn?t have to do >> what we did. (Think lots of subprocess.Popen + subprocess.communicate >> and shredding the output of mm commands. 
And yes, we wrote a parser >> which could shred the output of either ?Y or tabular format.) >> >> Thx >> >> Paul >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry >> Evans >> *Sent:* Wednesday, April 20, 2016 5:58 PM >> *To:* gpfsug-discuss at spectrumscale.org >> >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> Someone should just make a python API that just abstracts all of this >> >> On 20/04/2016 22:18, Jez Tucker wrote: >> >> So mmlspool does in 4.1.1.3... perhaps my memory fails me. >> I'm pretty certain Yuri told me that mmlspool was completely >> unsupported and didn't have -Y a couple of years ago when we did >> the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. >> >> Perhaps in light of the mmbackup thread; "Will fix RFEs for >> cookies?". Name your price ;-) >> >> Jez >> >> On 20/04/16 22:12, Jez Tucker wrote: >> >> Indeed. >> >> jtucker at elmo:~$ mmlsfs all -Y >> >> mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: >> >> >> I must say I've not seen any headers increment above 0:1 since >> ... er .. 3.3(?), so they're pretty static. >> >> Now, if only mmlspool supported -Y ... or if _all_ commands >> supported -Y, I might even FedEX beer. >> >> Jez >> >> On 20/04/16 21:46, Bryan Banister wrote: >> >> What?s nice is that the ?-Y? output provides a HEADER so >> that you can still programmatically determine fields of >> interest? this is the best! >> >> I recommend adding ?-Y? option documentation to all >> supporting GPFS commands for others to be informed. >> >> -Bryan >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On >> Behalf Of *Barry Evans >> *Sent:* Wednesday, April 20, 2016 3:39 PM >> *To:* gpfsug-discuss at spectrumscale.org >> >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? >> I sure didn't... game changer >> >> If you build a monitoring pipeline using -Y output, make >> sure you test between revisions before upgrading. The >> columns do have a tendency to change from time to time. >> >> Cheers, >> Barry >> >> On 20/04/2016 20:02, Bryan Banister wrote: >> >> Apparently, though not documented in man pages or any >> of the GPFS docs that I?ve read (at least that I >> recall), there is a ?-Y? option to many/most GPFS >> commands that provides output in machine readable >> fashion?. >> >> That?s right kids? no more parsing obscure, often >> changed output columns with your favorite >> bash/awk/python/magic. >> >> Why IBM would not document this is beyond me, >> >> -B >> >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the >> named addressee(s) only and may contain proprietary, >> confidential or privileged information. If you are not >> the intended recipient, you are hereby notified that >> any review, dissemination or copying of this email is >> strictly prohibited, and to please notify the sender >> immediately and destroy this email and any >> attachments. Email transmission cannot be guaranteed >> to be secure or error-free. The Company, therefore, >> does not make any guarantees as to the completeness or >> accuracy of this email or any attachments. 
Please notify the sender immediately and delete this email >> from your computer system. Any opinions expressed are not necessarily >> those of the company from which this email was sent and, whilst to the >> best of our knowledge no viruses or defects exist, no responsibility >> can be accepted for any loss or damage arising from its receipt or >> subsequent use of this email. >> >> >> ------------------------------------------------------------------------ >> >> Note: This email is for the confidential use of the named addressee(s) >> only and may contain proprietary, confidential or privileged >> information. If you are not the intended recipient, you are hereby >> notified that any review, dissemination or copying of this email is >> strictly prohibited, and to please notify the sender immediately and >> destroy this email and any attachments. Email transmission cannot be >> guaranteed to be secure or error-free. The Company, therefore, does >> not make any guarantees as to the completeness or accuracy of this >> email or any attachments. This email is for informational purposes >> only and does not constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, redeem or perform >> any type of transaction of a financial product. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From mweil at genome.wustl.edu Thu Apr 21 16:31:03 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Thu, 21 Apr 2016 10:31:03 -0500 Subject: [gpfsug-discuss] PMR 78846,122,000 Message-ID: <5718F237.4040705@genome.wustl.edu> Apr 21 07:41:53 linuscs88 mmfs: Shutting down abnormally due to error in /project/sprelfks1/build/rfks1s007a/src/avs/fs/mmfs/ts/tm/tree.C line 1025 retCode 12, reasonCode 56 any ideas? ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From jonathan at buzzard.me.uk Thu Apr 21 16:51:01 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 21 Apr 2016 16:51:01 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <5717C9D3.8050501@buzzard.me.uk> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> <5717C9D3.8050501@buzzard.me.uk> Message-ID: <1461253861.1434.110.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 19:26 +0100, Jonathan Buzzard wrote: > On 20/04/16 17:23, Scott Cumbie wrote: > > You should open a PMR. This is not a ?feature? request, this is a > > failure of the code to work as it should. > > > > I did at least seven years ago. I shall see if I can find the reference > in my old notebooks tomorrow. 
Unfortunately one has gone missing so I > might not have the reference. > PMR 30456 is what I have written in my notebook, with a date of 11th June 2009, all under a title of "mmbackup is busted". Though I guess IBM might claim that not backing up the file is a fix because back then mmbackup would crash out completely and not backup anything at all. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From russell.steffen1 at navy.mil Thu Apr 21 22:25:30 2016 From: russell.steffen1 at navy.mil (Steffen, Russell CIV FNMOC, N63) Date: Thu, 21 Apr 2016 21:25:30 +0000 Subject: [gpfsug-discuss] [Non-DoD Source] Re: Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com>, <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> Message-ID: <366F49EE121F9F488D7EA78AA37C01620DF75583@NAWEMUGUXM01V.nadsuswe.nads.navy.mil> Last year I wrote a python package to plot the I/O volume our clusters were generating. In order to do that I ended up reverse-engineering the mmsdrfs file format so that I could determine which NSDs were in which filesystems and served by which NSD servers - basic cluster topology. Everything I was able to figure out is in this python module: https://bitbucket.org/rrs42/iographer/src/6d410073fc39b448a4742da7bb1a9ecf258d611c/iographer/GPFS.py?at=master&fileviewer=file-view-default And if anyone is interested in the package the repository is hosted here: https://bitbucket.org/rrs42/iographer -- Russell Steffen HPC Systems Analyst/Systems Administrator, N63 Fleet Numerical Meteorology and Oceanograph Center russell.steffen1 at navy.mil, Phone 831-656-4218 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sanchez, Paul [Paul.Sanchez at deshaw.com] Sent: Wednesday, April 20, 2016 3:06 PM To: gpfsug main discussion list Subject: [Non-DoD Source] Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn?t have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either ?Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". 
Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What?s nice is that the ?-Y? output provides a HEADER so that you can still programmatically determine fields of interest? this is the best! I recommend adding ?-Y? option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. From chair at spectrumscale.org Fri Apr 22 08:38:55 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Fri, 22 Apr 2016 08:38:55 +0100 Subject: [gpfsug-discuss] ISC June Meeting Message-ID: Hi All, IBM are hoping to put together a short agenda for a meeting at ISC in June this year. They have asked if there are any US based people likely to be attending who would be interested in giving a talk at the ISC, Germany meeting. If you are US based and planning to attend, please let me know and I'll put you in touch with the right people. Its likely to be on the Monday at the start of ISC, further details when its all sorted! Thanks Simon From Kevin.Buterbaugh at Vanderbilt.Edu Fri Apr 22 16:43:00 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 22 Apr 2016 15:43:00 +0000 Subject: [gpfsug-discuss] make InstallImages errors Message-ID: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root at testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1 make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' /usr/lpp/mmfs/src root at testnsd3# However, they don?t seem to actually impact anything ? i.e. GPFS starts up just fine on the box and the upgrade is apparently successful: /root root at testnsd3# mmgetstate Node number Node name GPFS state ------------------------------------------ 3 testnsd3 active /root root at testnsd3# mmdiag --version === mmdiag: version === Current GPFS build: "4.2.0.2 ". Built on Mar 7 2016 at 10:28:55 Running 5 minutes 5 secs /root root at testnsd3# So just to satisfy my own curiosity, has anyone else seen this and can anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Apr 22 20:52:35 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 22 Apr 2016 19:52:35 +0000 Subject: [gpfsug-discuss] make InstallImages errors In-Reply-To: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> References: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Message-ID: Did you do a kernel upgrade as well? I've seen similar when you get dangling symlinks in the weak updates kernel module directory. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 22 April 2016 16:43 To: gpfsug main discussion list Subject: [gpfsug-discuss] make InstallImages errors Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root at testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1 make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' /usr/lpp/mmfs/src root at testnsd3# However, they don?t seem to actually impact anything ? i.e. GPFS starts up just fine on the box and the upgrade is apparently successful: /root root at testnsd3# mmgetstate Node number Node name GPFS state ------------------------------------------ 3 testnsd3 active /root root at testnsd3# mmdiag --version === mmdiag: version === Current GPFS build: "4.2.0.2 ". Built on Mar 7 2016 at 10:28:55 Running 5 minutes 5 secs /root root at testnsd3# So just to satisfy my own curiosity, has anyone else seen this and can anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? Kevin ? 
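One quick way to check the dangling-symlink theory is to look for broken links under the weak-updates module directories before re-running make InstallImages. A rough sketch follows; the /lib/modules/<kernel>/weak-updates layout is an assumption based on RHEL/CentOS packaging, so adjust the path if your tree is laid out differently.

#!/usr/bin/env python
# Sketch: report dangling symlinks under /lib/modules/*/weak-updates,
# one possible cause of the depmod fstatat() errors quoted above.
import os

top = '/lib/modules'   # assumed RHEL/CentOS-style module tree
for root, dirs, files in os.walk(top):
    if os.sep + 'weak-updates' not in root:
        continue
    for name in dirs + files:
        path = os.path.join(root, name)
        if os.path.islink(path) and not os.path.exists(path):
            print('dangling: %s -> %s' % (path, os.readlink(path)))

Removing any stale links it reports and re-running depmod (or simply repeating the GPFS module build and install step) would be the obvious next thing to try, though as always test it on a node you can afford to lose first.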
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From ewahl at osc.edu Fri Apr 22 21:12:20 2016 From: ewahl at osc.edu (Edward Wahl) Date: Fri, 22 Apr 2016 16:12:20 -0400 Subject: [gpfsug-discuss] make InstallImages errors In-Reply-To: References: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Message-ID: <20160422161220.135f209a@osc.edu> On Fri, 22 Apr 2016 19:52:35 +0000 "Simon Thompson (Research Computing - IT Services)" wrote: > > Did you do a kernel upgrade as well? > > I've seen similar when you get dangling symlinks in the weak updates kernel > module directory. > Simon I've had exactly the same experience here. From 4.x going back to early 3.4 with this error. Ed > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org > [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Buterbaugh, Kevin L > [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 22 April 2016 16:43 To: gpfsug main > discussion list Subject: [gpfsug-discuss] make InstallImages errors > > Hi All, > > We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) > to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the > following errors: > > /usr/lpp/mmfs/src > root at testnsd3# make InstallImages > (cd gpl-linux; /usr/bin/make InstallImages; \ > exit $?) || exit 1 > make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' > Pre-kbuild step 1... > depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory > depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory > depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory > make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' > /usr/lpp/mmfs/src > root at testnsd3# > > However, they don?t seem to actually impact anything ? i.e. GPFS starts up > just fine on the box and the upgrade is apparently successful: > > /root > root at testnsd3# mmgetstate > > Node number Node name GPFS state > ------------------------------------------ > 3 testnsd3 active > /root > root at testnsd3# mmdiag --version > > === mmdiag: version === > Current GPFS build: "4.2.0.2 ". > Built on Mar 7 2016 at 10:28:55 > Running 5 minutes 5 secs > /root > root at testnsd3# > > So just to satisfy my own curiosity, has anyone else seen this and can > anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? > > Kevin > > ? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and Education > Kevin.Buterbaugh at vanderbilt.edu - > (615)875-9633 > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Ed Wahl Ohio Supercomputer Center 614-292-9302 From jan.finnerman at load.se Mon Apr 25 21:27:13 2016 From: jan.finnerman at load.se (Jan Finnerman Load) Date: Mon, 25 Apr 2016 20:27:13 +0000 Subject: [gpfsug-discuss] Dell Multipath Message-ID: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Hi, I realize this might not be strictly GPFS related but I?m getting a little desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and struggle on a question of disk multipathing for the intended NSD disks with their direct attached SAS disk systems. If I do a multipath ?ll, after a few seconds I just get the prompt back. 
I expected to see the usual big amount of path info, but nothing there. If I do a multipathd ?k and then a show config, I see all the Dell disk luns with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. devices. I can also add them in PowerKVM:s Kimchi web interface and even deploy a GPFS installation on it. The big question is, though, how do I get multipathing to work ? Do I need any special driver or setting in the multipath.conf file ? I found some of that but more generic e.g. for RedHat 6, but now we are in PowerKVM country. The platform consists of: 4x IBM S812L servers SAS controller PowerKVM 3.1 Red Hat 7.1 2x Dell MD3460 SAS disk systems No switches Jan ///Jan [cid:E11C3C62-0896-4FE2-9DCF-FFA5CF812B75] Jan Finnerman Senior Technical consultant [CertTiv_sm] [cid:621A25E3-E641-4D21-B2C3-0C93AB8B73B6] Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png Type: image/png Size: 5565 bytes Desc: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png Type: image/png Size: 8584 bytes Desc: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1][5].png Type: image/png Size: 6664 bytes Desc: CertPowerSystems_sm[1][5].png URL: From jenocram at gmail.com Mon Apr 25 21:37:18 2016 From: jenocram at gmail.com (Jeno Cram) Date: Mon, 25 Apr 2016 16:37:18 -0400 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: Is multipathd running? Also make sure you don't have them blacklisted in your multipath.conf. On Apr 25, 2016 4:27 PM, "Jan Finnerman Load" wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a little > desperate here? > I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and > struggle on a question of disk multipathing for the intended NSD disks with > their direct attached SAS disk systems. > If I do a *multipath ?ll*, after a few seconds I just get the prompt > back. I expected to see the usual big amount of path info, but nothing > there. > > If I do a *multipathd ?k* and then a show config, I see all the Dell disk > luns with reasonably right parameters. I can see them as /dev/sdf, > /dev/sdg, etc. devices. > I can also add them in PowerKVM:s Kimchi web interface and even deploy a > GPFS installation on it. The big question is, though, how do I get > multipathing to work ? > Do I need any special driver or setting in the multipath.conf file ? > I found some of that but more generic e.g. for RedHat 6, but now we are in > PowerKVM country. 
> > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 *SAS* disk systems > No switches > > Jan > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > [image: CertTiv_sm] > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png Type: image/png Size: 5565 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1][5].png Type: image/png Size: 6664 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png Type: image/png Size: 8584 bytes Desc: not available URL: From ewahl at osc.edu Mon Apr 25 21:48:07 2016 From: ewahl at osc.edu (Edward Wahl) Date: Mon, 25 Apr 2016 16:48:07 -0400 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: <20160425164807.52f40d7a@osc.edu> Sounds like too wide of a blacklist. Have you specifically added the MD devices to the blacklist_exceptions? What does the overall blacklist and blacklist_exceptions look like? A quick 'lsscsi' should give you the vendor/product to stick into the blacklist_exception. Wildcards work with quotes there, as well if you have multiple similar but not exact enclosures. eg: "IBM 1818 FAStT" can become: device { vendor "IBM" product "1818*" } or Dell MD*, etc. If you have issues with things working in the interactive mode or debug mode (which usually turns out to be a timing problem) run a "multipath -v3" and check the output. It will normally tell you exactly why each disk device is being skipped. Things like "device node name blacklisted" or whitelisted. Ed Wahl OSC On Mon, 25 Apr 2016 20:27:13 +0000 Jan Finnerman Load wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a little > desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a > customer and struggle on a question of disk multipathing for the intended NSD > disks with their direct attached SAS disk systems. If I do a multipath ?ll, > after a few seconds I just get the prompt back. I expected to see the usual > big amount of path info, but nothing there. > > If I do a multipathd ?k and then a show config, I see all the Dell disk luns > with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. > devices. I can also add them in PowerKVM:s Kimchi web interface and even > deploy a GPFS installation on it. The big question is, though, how do I get > multipathing to work ? Do I need any special driver or setting in the > multipath.conf file ? I found some of that but more generic e.g. for RedHat > 6, but now we are in PowerKVM country. 
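If lsscsi happens not to be installed on the PowerKVM hosts, the same vendor/product strings can be pulled straight out of sysfs. A small sketch follows; the sysfs attribute paths are standard Linux block-device locations, but treat the strings it prints as the thing to verify before pasting them into a blacklist_exceptions stanza.

#!/usr/bin/env python
# Sketch: print vendor/model for each sd* disk so the strings can be copied
# into a multipath.conf blacklist_exceptions device { } stanza.
import glob
import os

for dev in sorted(glob.glob('/sys/block/sd*')):
    try:
        with open(os.path.join(dev, 'device', 'vendor')) as f:
            vendor = f.read().strip()
        with open(os.path.join(dev, 'device', 'model')) as f:
            model = f.read().strip()
    except IOError:
        continue
    print('%-6s vendor "%s"  product "%s"' % (os.path.basename(dev), vendor, model))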
> > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 SAS disk systems > No switches > > Jan > ///Jan > > [cid:E11C3C62-0896-4FE2-9DCF-FFA5CF812B75] > Jan Finnerman > Senior Technical consultant > > [CertTiv_sm] > > [cid:621A25E3-E641-4D21-B2C3-0C93AB8B73B6] > Kista Science Tower > 164 51 Kista > Mobil: +46 (0)70 631 66 26 > Kontor: +46 (0)8 633 66 00/26 > jan.finnerman at load.se -- Ed Wahl Ohio Supercomputer Center 614-292-9302 From mweil at genome.wustl.edu Mon Apr 25 21:50:02 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Mon, 25 Apr 2016 15:50:02 -0500 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: <571E82FA.2000008@genome.wustl.edu> enable mpathconf --enable --with_multipathd y show config multipathd show config On 4/25/16 3:27 PM, Jan Finnerman Load wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a > little desperate here? > I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer > and struggle on a question of disk multipathing for the intended NSD > disks with their direct attached SAS disk systems. > If I do a /*multipath ?ll*/, after a few seconds I just get the > prompt back. I expected to see the usual big amount of path info, but > nothing there. > > If I do a /*multipathd ?k*/ and then a show config, I see all the Dell > disk luns with reasonably right parameters. I can see them as > /dev/sdf, /dev/sdg, etc. devices. > I can also add them in PowerKVM:s Kimchi web interface and even deploy > a GPFS installation on it. The big question is, though, how do I get > multipathing to work ? > Do I need any special driver or setting in the multipath.conf file ? > I found some of that but more generic e.g. for RedHat 6, but now we > are in PowerKVM country. > > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 *SAS* disk systems > No switches > > Jan > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > CertTiv_sm > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 8584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/png Size: 5565 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 6664 bytes Desc: not available URL: From stefan.dietrich at desy.de Tue Apr 26 22:01:52 2016 From: stefan.dietrich at desy.de (Dietrich, Stefan) Date: Tue, 26 Apr 2016 23:01:52 +0200 (CEST) Subject: [gpfsug-discuss] CES behind DNS RR and 16 group limitation? Message-ID: <183207187.6100390.1461704512921.JavaMail.zimbra@desy.de> Hello, we will soon start to deploy CES in our clusters, however two questions popped up. - According to the "CES NFS Support" in the "Implementing Cluster Export Services" documentation, DNS round-robin might lead to corrupted data with NFSv3: If a DNS Round Robin (RR) entry name is used to mount an NFSv3 export, data corruption and data unavailability might occur. The lock manager on the GPFS file system is not clustered-system-aware. The documentation does not state anything about NFSv4, so this restriction does not apply? Has somebody already experience with NFS and SMB mounts/exports behind a DNS RR entry? - For NFSv3 there is the known 16 supplementary group limitation. The CES option MANAGE_GIDS lifts this limitation and group lookup is performed on the protocl node itself. However, the NFS version is not mentioned in the docs. Would this work for NFSv4 with secType=sys as well or is this limited to NFSv3? With NFSv4 and secType=krb the 16 group limit does not apply, but I can think of some use-cases where the ticket handling might be problematic. Regards, Stefan -- ------------------------------------------------------------------------ Stefan Dietrich Deutsches Elektronen-Synchrotron (IT-Systems) Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 phone: +49-40-8998-4696 22607 Hamburg e-mail: stefan.dietrich at desy.de Germany ------------------------------------------------------------------------ From S.J.Thompson at bham.ac.uk Tue Apr 26 22:09:18 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 26 Apr 2016 21:09:18 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon From jonathan at buzzard.me.uk Tue Apr 26 22:27:24 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 26 Apr 2016 22:27:24 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <571FDD3C.3080801@buzzard.me.uk> On 26/04/16 22:09, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We've had some reports from some of our users that out CES SMB > exports are slow to access. > > It appears that this is only when the client is a Linux system and > using SMB to access the file-system. In fact if we dual boot the same > box, we can get sensible speeds out of it (I.e. Not network problems > to the client system). > > They also report that access to real Windows based file-servers works > at sensible speeds. 
Maybe the Win file servers support SMB1, but has > anyone else seen this, or have any suggestions? > In the past I have seen huge difference between opening up a terminal and doing a mount -t cifs ... and mapping the drive in Gnome. The later is a fraction of the performance of the first. I suspect that KDE is similar but I have not used KDE in anger now for 17 years. I would say we need to know what version of Linux you are having issues with and what method of attaching to the server you are using. In general best performance comes from a proper mount. If you have not tried that yet do so first. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at gmail.com Tue Apr 26 23:48:23 2016 From: oehmes at gmail.com (Sven Oehme) Date: Tue, 26 Apr 2016 15:48:23 -0700 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We've had some reports from some of our users that out CES SMB exports are > slow to access. > > It appears that this is only when the client is a Linux system and using > SMB to access the file-system. In fact if we dual boot the same box, we can > get sensible speeds out of it (I.e. Not network problems to the client > system). > > They also report that access to real Windows based file-servers works at > sensible speeds. Maybe the Win file servers support SMB1, but has anyone > else seen this, or have any suggestions? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Wed Apr 27 01:21:09 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Wed, 27 Apr 2016 03:21:09 +0300 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Hi Please run this command: # mmsmb export list export path guest ok smb encrypt cifs /gpfs1/cifs no disabled mixed /gpfs1/mixed no disabled cifs-text /gpfs/gpfs2/cifs-text/ no auto nfs-text /gpfs/gpfs2/nfs-text/ no auto Try to disable "smb encrypt" value, and try again. Example: #mmsmb export change --option "smb encrypt=disabled" cifs-text Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Sven Oehme To: gpfsug main discussion list Date: 04/27/2016 01:48 AM Subject: Re: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) wrote: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. 
Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From A.K.Ghumra at bham.ac.uk Wed Apr 27 09:11:35 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Wed, 27 Apr 2016 08:11:35 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: From secretary at gpfsug.org Wed Apr 27 10:46:18 2016 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Wed, 27 Apr 2016 10:46:18 +0100 Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events Message-ID: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 [1] Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. 
Tentative Agenda: * 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 * Enhancements for CORAL from IBM * Panel discussion with customers, topic TBD * AFM and integration with Spectrum Protect * Best practices for GPFS or Spectrum Scale Tuning. * At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ---- 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ---- We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal Links: ------ [1] https://www.spxxl.org/?q=New-York-City-2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.K.Ghumra at bham.ac.uk Wed Apr 27 12:35:55 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Wed, 27 Apr 2016 11:35:55 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Apologies, I meant Mbps not Gbps Regards, Aslam Research Computing Team DDI: +44 (121) 414 5877 | Skype: JanitorX | Twitter: @aslamghumra | a.k.ghumra at bham.ac.uk | intranet.birmingham.ac.uk/bear -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of gpfsug-discuss-request at spectrumscale.org Sent: 27 April 2016 12:00 To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 51, Issue 48 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. SMB access speed (Aslam Ghumra (IT Services, Facilities Management)) 2. US GPFS/Spectrum Scale Events (Secretary GPFS UG) ---------------------------------------------------------------------- Message: 1 Date: Wed, 27 Apr 2016 08:11:35 +0000 From: "Aslam Ghumra (IT Services, Facilities Management)" To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] SMB access speed Message-ID: Content-Type: text/plain; charset="iso-8859-1" As Simon has reported, the speed of access on Linux system are slow. 
We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Wed, 27 Apr 2016 10:46:18 +0100 From: Secretary GPFS UG To: gpfsug main discussion list Cc: "usa-principal-gpfsug.org" , usa-co-principal at gpfsug.org, Chair , Gorini Stefano Claudio Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events Message-ID: <21b651c4a310b67c139fccff707dce97 at webmail.gpfsug.org> Content-Type: text/plain; charset="us-ascii" Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 [1] Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: * 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 * Enhancements for CORAL from IBM * Panel discussion with customers, topic TBD * AFM and integration with Spectrum Protect * Best practices for GPFS or Spectrum Scale Tuning. * At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ---- 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 
11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ---- We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal Links: ------ [1] https://www.spxxl.org/?q=New-York-City-2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 51, Issue 48 ********************************************** From jonathan at buzzard.me.uk Wed Apr 27 12:40:37 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 12:40:37 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <1461757237.1434.178.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-27 at 08:11 +0000, Aslam Ghumra (IT Services, Facilities Management) wrote: > As Simon has reported, the speed of access on Linux system are slow. > > > We've just used the mount command as below > > > mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o > noperm //<> /media/mnt1 > Try dialing back on the SMB version would be my first port of call. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 27 14:10:32 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 27 Apr 2016 13:10:32 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Hi All, Question - why are you SAMBA mounting to Linux clients instead of CNFS mounting? We don?t use CES (yet) here, but our ?rules? are: 1) if you?re a Linux client, you CNFS mount. 2) if you?re a Windows client, you SAMBA mount. 3) if you?re a Mac client, you can do either. (C)NFS seems to be must more stable and less problematic than SAMBA, in our experience. Just trying to understand? Kevin On Apr 27, 2016, at 3:11 AM, Aslam Ghumra (IT Services, Facilities Management) > wrote: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. 
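For anyone wanting to try the fstab route mentioned above, a persistent entry for this kind of mount would look roughly like the sketch below. The server, share, mount point and credentials file name are placeholders, and the options simply mirror the manual mount command already quoted in the thread:

//server.example.ac.uk/share  /media/mnt1  cifs  vers=3.0,domain=ADF,credentials=/etc/smb-bear.cred,noperm,_netdev  0  0

where /etc/smb-bear.cred is a root-readable (mode 600) file containing two lines, username=USERNAME and password=PASSWORD. A credentials file keeps the password out of the process table, and _netdev stops the mount being attempted before the network is up at boot.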
Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 27 14:16:57 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 27 Apr 2016 13:16:57 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: We don't manage the Linux systems, wr have no control over identity or authentication on them, but we do for SMB access. Simon -----Original Message----- From: Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: Wednesday, April 27, 2016 02:11 PM GMT Standard Time To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB access speed Hi All, Question - why are you SAMBA mounting to Linux clients instead of CNFS mounting? We don?t use CES (yet) here, but our ?rules? are: 1) if you?re a Linux client, you CNFS mount. 2) if you?re a Windows client, you SAMBA mount. 3) if you?re a Mac client, you can do either. (C)NFS seems to be must more stable and less problematic than SAMBA, in our experience. Just trying to understand? Kevin On Apr 27, 2016, at 3:11 AM, Aslam Ghumra (IT Services, Facilities Management) > wrote: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan at buzzard.me.uk Wed Apr 27 19:57:33 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 19:57:33 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: <57210B9D.8080906@buzzard.me.uk> On 27/04/16 14:10, Buterbaugh, Kevin L wrote: > Hi All, > > Question - why are you SAMBA mounting to Linux clients instead of CNFS > mounting? We don?t use CES (yet) here, but our ?rules? are: > > 1) if you?re a Linux client, you CNFS mount. > 2) if you?re a Windows client, you SAMBA mount. > 3) if you?re a Mac client, you can do either. > > (C)NFS seems to be must more stable and less problematic than SAMBA, in > our experience. Just trying to understand? > My rule that trumps all those is that a given share is available via SMB *OR* NFS, but never both. Therein lies the path to great pain in the future. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From bpappas at dstonline.com Wed Apr 27 20:38:06 2016 From: bpappas at dstonline.com (Bill Pappas) Date: Wed, 27 Apr 2016 19:38:06 +0000 Subject: [gpfsug-discuss] GPFS discussions Message-ID: Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Apr 27 20:47:55 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 20:47:55 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: <5721176B.5020809@buzzard.me.uk> On 27/04/16 14:16, Simon Thompson (Research Computing - IT Services) wrote: > We don't manage the Linux systems, wr have no control over identity or > authentication on them, but we do for SMB access. > Does not the combination of Ganesha and NFSv4 with Kerberos fix that? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From S.J.Thompson at bham.ac.uk Wed Apr 27 20:52:46 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 27 Apr 2016 19:52:46 +0000 Subject: [gpfsug-discuss] GPFS discussions In-Reply-To: References: Message-ID: Hi Bill, As a user community, we organise events in the UK and USA, we post them on the mailing list and the group website - www.spectrumscale.org. There are a few types of events, meet the devs, which are typically a small group of customers, an integrator or two, and a few developers. We also do @conference events, for example at Super Computing (USA), Computing Insights UK, ibm are also trying to get a meeting running at ISC as well. We then have the larger annual events, for example in the UK we have a meeting in May. These are typically larger meetings with IBM speakers, customer talks and partner talks. Finally there are events organsied/advertised with other groups, for example SPXXL, where in the UK last year we ran with SPXXL's meeting. This is also happening in NYC in a few weeks. In the UK we have a much smaller geographic problem than the USA, we've also been going a lot longer - the USA side chapter only launched September last year, and Kristy and Bob are building the activity over there. I think if there was interest in a an informal (e.g.) 
state meeting that people wanted to coordinate with Kristy/Bob, then we could advertise to the list. Of course all of those involved in organising from the user side of things have real jobs as well and getting big meetings up and running takes quite a lot of work (agendas, speakers, venues, lunches, registration...) Simon (uk group chair) ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bill Pappas [bpappas at dstonline.com] Sent: 27 April 2016 20:38 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] GPFS discussions Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com From Greg.Lehmann at csiro.au Thu Apr 28 00:27:03 2016 From: Greg.Lehmann at csiro.au (Greg.Lehmann at csiro.au) Date: Wed, 27 Apr 2016 23:27:03 +0000 Subject: [gpfsug-discuss] GPFS discussions In-Reply-To: References: Message-ID: Hi Bill, In Australia, I've been lobbying IBM to do something locally, after the great UG meeting at SC15 in Austin. It is looking like they might tack something onto the annual tech symposium they have here - no time frame yet but August has been when it happened for the last couple of years. At that event we should be able to gauge interest on whether we can form a local UG. The advantage of the timing is that a lot of experts will be in the country for the Tech Symposium. They are also talking about another local HPC focused event in the same time frame. My guess is it may well be all bundled together. Here's hoping it comes off. It might give some of you an excuse to come to Australia! Seriously, I am jealous of the events I see happening in the UK. Cheers, Greg Lehmann Senior High Performance Data Specialist Data Services | Scientific Computing Platforms CSIRO Information Management and Technology Phone: +61 7 3327 4137 | Fax: +61 1 3327 4455 Greg.Lehmann at csiro.au | www.csiro.au Address: 1 Technology Court, Pullenvale, QLD 4069 PLEASE NOTE The information contained in this email may be confidential or privileged. Any unauthorised use or disclosure is prohibited. If you have received this email in error, please delete it immediately and notify the sender by return email. Thank you. To the extent permitted by law, CSIRO does not represent, warrant and/or guarantee that the integrity of this communication has been maintained or that the communication is free of errors, virus, interception or interference. Please consider the environment before printing this email. -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Thursday, 28 April 2016 5:53 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS discussions Hi Bill, As a user community, we organise events in the UK and USA, we post them on the mailing list and the group website - www.spectrumscale.org. There are a few types of events, meet the devs, which are typically a small group of customers, an integrator or two, and a few developers. We also do @conference events, for example at Super Computing (USA), Computing Insights UK, ibm are also trying to get a meeting running at ISC as well. We then have the larger annual events, for example in the UK we have a meeting in May. 
These are typically larger meetings with IBM speakers, customer talks and partner talks. Finally there are events organsied/advertised with other groups, for example SPXXL, where in the UK last year we ran with SPXXL's meeting. This is also happening in NYC in a few weeks. In the UK we have a much smaller geographic problem than the USA, we've also been going a lot longer - the USA side chapter only launched September last year, and Kristy and Bob are building the activity over there. I think if there was interest in a an informal (e.g.) state meeting that people wanted to coordinate with Kristy/Bob, then we could advertise to the list. Of course all of those involved in organising from the user side of things have real jobs as well and getting big meetings up and running takes quite a lot of work (agendas, speakers, venues, lunches, registration...) Simon (uk group chair) ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bill Pappas [bpappas at dstonline.com] Sent: 27 April 2016 20:38 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] GPFS discussions Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From usa-principal at gpfsug.org Thu Apr 28 15:19:51 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Thu, 28 Apr 2016 10:19:51 -0400 Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events In-Reply-To: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Message-ID: Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. -Kristy > On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG wrote: > > Dear All, > > Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. > > Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 > > This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 > > If you wish to register, please do so via the Eventbrite page. > > Kind regards, > > -- > Claire O'Toole > Spectrum Scale/GPFS User Group Secretary > +44 (0)7508 033896 > www.spectrumscaleug.org > > > --- > > Hello all, > > We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. > > 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. > > > Tentative Agenda: > ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 > ? Enhancements for CORAL from IBM > ? Panel discussion with customers, topic TBD > ? AFM and integration with Spectrum Protect > ? Best practices for GPFS or Spectrum Scale Tuning. > ? 
At least one site update > > Location: > New York Academy of Medicine > 1216 Fifth Avenue > New York, NY 10029 > > ?? > > 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! > > Location: Argonne National Lab more details and final agenda will come later. > > Tentative Agenda: > > > 9:00a-12:30p > 9-9:30a - Opening Remarks > 9:30-10a Deep Dive - Update on ESS > 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) > 11-11:30 Break > 11:30a-Noon - Deep Dive - Protect & Scale integration > Noon-12:30p HDFS/Hadoop > > 12:30 - 1:30p Lunch > > 1:30p-5:00p > 1:30 - 2:00p IBM AFM Update > 2:00-2:30p ANL: AFM as a burst buffer > 2:30-3:00p ANL: GHI (GPFS HPSS Integration) > 3:00-3:30p Break > 3:30p - 4:00p LANL: ? or other site preso > 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences > 4:30p -5:00p Closing comments and Open Forum for Questions > > 5:00 - ? > Beer hunting? > > > ?? > > > We hope you can attend one or both of these events. > > Best, > Kristy Kallback-Rose & Bob Oesterlin > GPFS Users Group - USA Chapter - Principal & Co-principal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mark.Roberts at awe.co.uk Thu Apr 28 15:40:18 2016 From: Mark.Roberts at awe.co.uk (Mark.Roberts at awe.co.uk) Date: Thu, 28 Apr 2016 14:40:18 +0000 Subject: [gpfsug-discuss] EXTERNAL: Re: US GPFS/Spectrum Scale Events In-Reply-To: References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Message-ID: <201604281438.u3SEckmo029951@msw1.awe.co.uk> Kirsty, Thank you for the heads up. I?m guessing that those people who have already registered for XXL prior to this option should proceed to the Eventbrite page and register the GPFS day ? Regards Mark Roberts AWE From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of GPFS UG USA Principal Sent: 28 April 2016 15:20 To: Secretary GPFS UG Cc: usa-co-principal at gpfsug.org; Chair ; gpfsug main discussion list ; Gorini Stefano Claudio Subject: EXTERNAL: Re: [gpfsug-discuss] US GPFS/Spectrum Scale Events Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. -Kristy On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG > wrote: Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. 
More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Thu Apr 28 15:47:18 2016 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 28 Apr 2016 14:47:18 +0000 Subject: [gpfsug-discuss] EXTERNAL: Re: US GPFS/Spectrum Scale Events In-Reply-To: <201604281438.u3SEckmo029951@msw1.awe.co.uk> References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> <201604281438.u3SEckmo029951@msw1.awe.co.uk> Message-ID: Stefano, Can you take this one? Thanks, Kristy On Apr 28, 2016, at 10:40 AM, Mark.Roberts at awe.co.uk wrote: Kirsty, Thank you for the heads up. I?m guessing that those people who have already registered for XXL prior to this option should proceed to the Eventbrite page and register the GPFS day ? Regards Mark Roberts AWE From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of GPFS UG USA Principal Sent: 28 April 2016 15:20 To: Secretary GPFS UG > Cc: usa-co-principal at gpfsug.org; Chair >; gpfsug main discussion list >; Gorini Stefano Claudio > Subject: EXTERNAL: Re: [gpfsug-discuss] US GPFS/Spectrum Scale Events Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. 
-Kristy On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG > wrote: Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Apr 28 22:04:58 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 28 Apr 2016 21:04:58 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> References: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Message-ID: Ok, we are going to try this out and see if this makes a difference. The Windows server which is "faster" from Linux is running Server 2008R2, so I guess isn't doing encrypted SMB. Will report back next week once we've run some tests. Simon -----Original Message----- From: Yaron Daniel [YARD at il.ibm.com] Sent: Wednesday, April 27, 2016 01:21 AM GMT Standard Time To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB access speed Hi Please run this command: # mmsmb export list export path guest ok smb encrypt cifs /gpfs1/cifs no disabled mixed /gpfs1/mixed no disabled cifs-text /gpfs/gpfs2/cifs-text/ no auto nfs-text /gpfs/gpfs2/nfs-text/ no auto Try to disable "smb encrypt" value, and try again. Example: #mmsmb export change --option "smb encrypt=disabled" cifs-text Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:_1_0D90DCD00D90D73C0001EFFAC2257FA2] Server, Storage and Data Services- Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Sven Oehme To: gpfsug main discussion list Date: 04/27/2016 01:48 AM Subject: Re: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) > wrote: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00001.gif Type: image/gif Size: 1851 bytes Desc: ATT00001.gif URL: From usa-principal at gpfsug.org Thu Apr 28 22:44:32 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Thu, 28 Apr 2016 17:44:32 -0400 Subject: [gpfsug-discuss] GPFS/Spectrum Scale Upcoming US Events - Save the Dates In-Reply-To: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> References: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> Message-ID: <9489DBA2-1F12-4B05-A968-5D4855FBEA40@gpfsug.org> All, the registration page for the second event listed below at Argonne National Lab on June 10th is now up. An updated agenda is also at this site. 
Please register here: https://www.regonline.com/Spectrumscalemeeting We look forward to seeing some of you at these upcoming events. Feel free to send suggestions for future events in your area. Cheers, -Kristy > On Apr 4, 2016, at 4:52 PM, GPFS UG USA Principal wrote: > > Hello all, > > We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. > > 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. > > Tentative Agenda: > ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 > ? Enhancements for CORAL from IBM > ? Panel discussion with customers, topic TBD > ? AFM and integration with Spectrum Protect > ? Best practices for GPFS or Spectrum Scale Tuning. > ? At least one site update > > Location: > New York Academy of Medicine > 1216 Fifth Avenue > New York, NY 10029 > > ?? > > 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! > > Location: Argonne National Lab more details and final agenda will come later. > > Tentative Agenda: > > 9:00a-12:30p > 9-9:30a - Opening Remarks > 9:30-10a Deep Dive - Update on ESS > 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) > 11-11:30 Break > 11:30a-Noon - Deep Dive - Protect & Scale integration > Noon-12:30p HDFS/Hadoop > > 12:30 - 1:30p Lunch > > 1:30p-5:00p > 1:30 - 2:00p IBM AFM Update > 2:00-2:30p ANL: AFM as a burst buffer > 2:30-3:00p ANL: GHI (GPFS HPSS Integration) > 3:00-3:30p Break > 3:30p - 4:00p LANL: ? or other site preso > 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences > 4:30p -5:00p Closing comments and Open Forum for Questions > > 5:00 - ? > Beer hunting? > > ?? > > We hope you can attend one or both of these events. > > Best, > Kristy Kallback-Rose & Bob Oesterlin > GPFS Users Group - USA Chapter - Principal & Co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Thu Apr 28 23:57:42 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 28 Apr 2016 23:57:42 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Message-ID: <57229566.7060009@buzzard.me.uk> On 28/04/16 22:04, Simon Thompson (Research Computing - IT Services) wrote: > Ok, we are going to try this out and see if this makes a difference. The > Windows server which is "faster" from Linux is running Server 2008R2, so > I guess isn't doing encrypted SMB. > A quick poke in the Linux source code suggests that the CIFS encryption is handled with standard kernel crypto routines, but and here is the big but, whether you get any hardware acceleration is going to depend heavily on the CPU in the machine. Don't have the right CPU and you won't get it being done in hardware and the performance would I expect take a dive. 
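A quick way to test that point on a given client is to check whether the CPU advertises the AES instructions and whether the accelerated kernel module is loaded; a minimal check, assuming an x86 box (module names can differ between kernels):

grep -m1 -w aes /proc/cpuinfo     # the flags line should list "aes" if AES-NI is available
lsmod | grep aesni                # aesni_intel loaded means the kernel crypto layer can use it

If either comes back empty, SMB3 encryption (like anything else built on the kernel crypto API) falls back to a software implementation and the throughput hit is much larger.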
I imagine it is like scp; making sure all your ducks are lined up and both server and client are doing hardware accelerated encryption is more complicated that it appears at first sight. Lots of lower end CPU's seem to be missing hardware accelerated encryption. Anyway boot into Windows 7 and you get don't get encryption, connect to 2008R2 and you don't get encryption and it all looks better. A quick Google suggests encryption didn't hit till Windows 8 and Server 2012. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From zgiles at gmail.com Fri Apr 29 05:22:03 2016 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 29 Apr 2016 00:22:03 -0400 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? Message-ID: Fellow GPFS Users, I have a silly question about file replicas... I've been playing around with copies=2 (or 3) and hoping that this would protect against data corruption on poor-quality RAID controllers.. but it seems that if I purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't take over, rather GPFS just returns corrupt data. This includes if just "dd" into the disk, or if I break the RAID controller somehow by yanking whole chassis and the controller responds poorly for a few seconds. Originally my thinking was that replicas were for mirroring and GPFS would somehow return whichever is the "good" copy of your data, but now I'm thinking it's just intended for better file placement.. such as having a near replica and a far replica so you dont have to cross buildings for access, etc. That, and / or, disk outages where the outage is not corruption, just simply outage either by failure or for disk-moves, SAN rewiring, etc. In those cases you wouldn't have to "move" all the data since you already have a second copy. I can see how that would makes sense.. Somehow I guess I always knew this.. but it seems many people say they will just turn on copies=2 and be "safe".. but it's not the case.. Which way is the intended? Has anyone else had experience with this realization? Thanks, -Zach -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Fri Apr 29 10:22:10 2016 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Fri, 29 Apr 2016 11:22:10 +0200 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? In-Reply-To: References: Message-ID: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> Zach, GPFS replication does not include automatically a comparison of the replica copies. It protects against one part (i.e. one FG, or two with 3-fold replication) of the storage being down. How should GPFS know what version is the good one if both replica copies are readable? There are tools in 4.x to compare the replicas, but do use them only from 4.2 onward (problems with prior versions). Still then you need to decide what is the "good" copy (there is a consistency check on MD replicas though, but correct/incorrect data blocks cannot be auto-detected for obvious reasons). E2E Check-summing (as in GNR) would of course help here. Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 
7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Frank Hammer, Thorsten Moehring Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: Zachary Giles To: gpfsug main discussion list Date: 04/29/2016 06:22 AM Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? Sent by: gpfsug-discuss-bounces at spectrumscale.org Fellow GPFS Users, I have a silly question about file replicas... I've been playing around with copies=2 (or 3) and hoping that this would protect against data corruption on poor-quality RAID controllers.. but it seems that if I purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't take over, rather GPFS just returns corrupt data. This includes if just "dd" into the disk, or if I break the RAID controller somehow by yanking whole chassis and the controller responds poorly for a few seconds. Originally my thinking was that replicas were for mirroring and GPFS would somehow return whichever is the "good" copy of your data, but now I'm thinking it's just intended for better file placement.. such as having a near replica and a far replica so you dont have to cross buildings for access, etc. That, and / or, disk outages where the outage is not corruption, just simply outage either by failure or for disk-moves, SAN rewiring, etc. In those cases you wouldn't have to "move" all the data since you already have a second copy. I can see how that would makes sense.. Somehow I guess I always knew this.. but it seems many people say they will just turn on copies=2 and be "safe".. but it's not the case.. Which way is the intended? Has anyone else had experience with this realization? Thanks, -Zach -- Zach Giles zgiles at gmail.com_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From zgiles at gmail.com Fri Apr 29 13:18:29 2016 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 29 Apr 2016 08:18:29 -0400 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? In-Reply-To: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> References: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> Message-ID: Hi Uwe, You're right.. how would it know which one is the good one? I had imagined it would at least compare some piece of metadata to the block's metadata on retrieval, maybe generation number, something... However, when I think about that, it doesnt make any sense. The block on-disk is purely the data, no metadata. Thus, there won't be any structural issues when retrieving a bad block. What is the tool in 4.2 that you are referring to for comparing replicas? I'd be interested in trying it out. I didn't happen to pass-by any mmrestripefs options for that.. maybe I missed something. E2E I guess is what I'm looking for, but not on GNR. I'm just trying to investigate failure cases possible on standard-RAID hardware. I'm sure we've all had a RAID controller or two that have failed in interesting ways... -Zach On Fri, Apr 29, 2016 at 5:22 AM, Uwe Falke wrote: > Zach, > GPFS replication does not include automatically a comparison of the > replica copies. > It protects against one part (i.e. 
one FG, or two with 3-fold replication) > of the storage being down. > How should GPFS know what version is the good one if both replica copies > are readable? > > There are tools in 4.x to compare the replicas, but do use them only from > 4.2 onward (problems with prior versions). Still then you need to decide > what is the "good" copy (there is a consistency check on MD replicas > though, but correct/incorrect data blocks cannot be auto-detected for > obvious reasons). E2E Check-summing (as in GNR) would of course help here. > > > Mit freundlichen Gr??en / Kind regards > > > Dr. Uwe Falke > > IT Specialist > High Performance Computing Services / Integrated Technology Services / > Data Center Services > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland > Rathausstr. 7 > 09111 Chemnitz > Phone: +49 371 6978 2165 > Mobile: +49 175 575 2877 > E-Mail: uwefalke at de.ibm.com > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: > Frank Hammer, Thorsten Moehring > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, > HRB 17122 > > > > > From: Zachary Giles > To: gpfsug main discussion list > Date: 04/29/2016 06:22 AM > Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Fellow GPFS Users, > > I have a silly question about file replicas... I've been playing around > with copies=2 (or 3) and hoping that this would protect against data > corruption on poor-quality RAID controllers.. but it seems that if I > purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't > take over, rather GPFS just returns corrupt data. This includes if just > "dd" into the disk, or if I break the RAID controller somehow by yanking > whole chassis and the controller responds poorly for a few seconds. > > Originally my thinking was that replicas were for mirroring and GPFS would > somehow return whichever is the "good" copy of your data, but now I'm > thinking it's just intended for better file placement.. such as having a > near replica and a far replica so you dont have to cross buildings for > access, etc. That, and / or, disk outages where the outage is not > corruption, just simply outage either by failure or for disk-moves, SAN > rewiring, etc. In those cases you wouldn't have to "move" all the data > since you already have a second copy. I can see how that would makes > sense.. > > Somehow I guess I always knew this.. but it seems many people say they > will just turn on copies=2 and be "safe".. but it's not the case.. > > Which way is the intended? > Has anyone else had experience with this realization? > > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
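On Zach's question about which tool does the replica comparison: as far as I know it is the -c option of mmrestripefs, which scans the file system, compares the replicas of data and metadata, and attempts to repair mismatches it finds. Treat the invocation below as a sketch only (gpfs1 is a placeholder device name) and check the mmrestripefs man page for your code level before running it on anything you care about:

mmrestripefs gpfs1 -c

Note that, as Uwe says, for a silently corrupted data block it still cannot tell which copy was the good one; without end-to-end checksums it can only detect that the replicas disagree and try to reconcile them.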
URL: From A.K.Ghumra at bham.ac.uk Fri Apr 29 17:07:17 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Fri, 29 Apr 2016 16:07:17 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Many thanks Yaron, after the change to disable encryption we were able to increase the speed via Ubuntu of copying files from the local desktop to our gpfs filestore with average speeds of 60Mbps. We also tried changing the mount from vers=3.0 to vers=2.1, which gave similar figures However, using the Ubuntu gui ( Unity ) the speed drops down to 7Mbps, however, we're not concerned as the user will use rsync / cp. The other issue is copying data from gpfs filestore to the local HDD, which resulted in 4Mbps. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: From L.A.Hurst at bham.ac.uk Fri Apr 29 17:22:48 2016 From: L.A.Hurst at bham.ac.uk (Laurence Alexander Hurst (IT Services)) Date: Fri, 29 Apr 2016 16:22:48 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: On 29/04/2016 17:07, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Aslam Ghumra (IT Services, Facilities Management)" wrote: >Many thanks Yaron, after the change to disable encryption we were able to >increase the speed via Ubuntu of copying files from the local desktop to >our gpfs filestore with average speeds of 60Mbps. > >We also tried changing the mount from vers=3.0 to vers=2.1, which gave >similar figures > >However, using the Ubuntu gui ( Unity ) the speed drops down to 7Mbps, >however, we?re not concerned as the user will use rsync / cp. > > >The other issue is copying data from gpfs filestore to the local HDD, >which resulted in 4Mbps. > >Aslam Ghumra >Research Data Management I wonder if Unity uses what used to be called the "gnome virtual filesystem" to connect. It may be using it's own implementation that's not such a well written samba/cifs (which ever they are using) client than the implementation used if you mount it "properly" with mount.smb/mount.cifs. Laurence -- Laurence Hurst Research Computing, IT Services, University of Birmingham w: http://www.birmingham.ac.uk/bear (http://servicedesk.bham.ac.uk/ for support) e: l.a.hurst at bham.ac.uk From jonathan at buzzard.me.uk Fri Apr 29 21:05:02 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 29 Apr 2016 21:05:02 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <5723BE6E.6000403@buzzard.me.uk> On 29/04/16 17:22, Laurence Alexander Hurst (IT Services) wrote: [SNIP] > I wonder if Unity uses what used to be called the "gnome virtual > filesystem" to connect. It may be using it's own implementation that's > not such a well written samba/cifs (which ever they are using) client than > the implementation used if you mount it "properly" with > mount.smb/mount.cifs. Probably, as I said previously these desktop VFS CIF's clients are significantly slower than the kernel client. It's worth remembering that a few years back the Linux kernel CIFS client was extensively optimized for speed, and was at on point at least giving better performance than the NFS client. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
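One way to separate the desktop VFS layer from the kernel client when chasing numbers like these is to reach the same export both ways and time an identical large copy. A rough sketch follows; server, share and paths are placeholders, the mount options mirror the ones quoted earlier in the thread, and conv=fsync is there so the timing includes the final flush:

# kernel client
sudo mount -t cifs -o vers=3.0,domain=ADF,username=USERNAME,noperm //server/share /mnt/ces
dd if=/dev/zero of=/mnt/ces/ddtest bs=1M count=1024 conv=fsync
dd if=/mnt/ces/ddtest of=/dev/null bs=1M

# GNOME VFS client (what Unity/Nautilus goes through)
gio mount smb://server/share        # older Ubuntu releases use gvfs-mount instead of gio
dd if=/dev/zero of=/run/user/$UID/gvfs/smb-share:server=server,share=share/ddtest bs=1M count=1024 conv=fsync

Dropping the page cache between the write and read passes (echo 3 > /proc/sys/vm/drop_caches as root) keeps the read figure honest. If the kernel mount is fast and the gvfs path is not, the bottleneck is the client-side VFS layer rather than the CES nodes.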
From p.childs at qmul.ac.uk Fri Apr 29 21:58:53 2016 From: p.childs at qmul.ac.uk (Peter Childs) Date: Fri, 29 Apr 2016 20:58:53 +0000 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <571E82FA.2000008@genome.wustl.edu> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se>, <571E82FA.2000008@genome.wustl.edu> Message-ID: >From my experience using a Dell md3460 with zfs (not gpfs). I've not tried it with gpfs but it looks very simular to our IBM dcs3700 we run gpfs on. To get multipath to work correctly, we had to install the storage manager software from the cd that can be downloaded from Dells website, which made a few modifications to multipath.conf. Broadly speaking the blacklist comments others have made are correct. You also need to enable and start multipathd (chkconfig multipathd on) Peter Childs ITS Research and Teaching Support Queen Mary, University of London ---- Matt Weil wrote ---- enable mpathconf --enable --with_multipathd y show config multipathd show config On 4/25/16 3:27 PM, Jan Finnerman Load wrote: Hi, I realize this might not be strictly GPFS related but I?m getting a little desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and struggle on a question of disk multipathing for the intended NSD disks with their direct attached SAS disk systems. If I do a multipath ?ll, after a few seconds I just get the prompt back. I expected to see the usual big amount of path info, but nothing there. If I do a multipathd ?k and then a show config, I see all the Dell disk luns with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. devices. I can also add them in PowerKVM:s Kimchi web interface and even deploy a GPFS installation on it. The big question is, though, how do I get multipathing to work ? Do I need any special driver or setting in the multipath.conf file ? I found some of that but more generic e.g. for RedHat 6, but now we are in PowerKVM country. The platform consists of: 4x IBM S812L servers SAS controller PowerKVM 3.1 Red Hat 7.1 2x Dell MD3460 SAS disk systems No switches Jan ///Jan [cid:part1.01010308.03000406 at genome.wustl.edu] Jan Finnerman Senior Technical consultant [CertTiv_sm] [cid:part3.01010404.04060703 at genome.wustl.edu] Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00001.png Type: image/png Size: 8584 bytes Desc: ATT00001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From YARD at il.ibm.com Sat Apr 30 06:17:28 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Sat, 30 Apr 2016 08:17:28 +0300 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <201604300517.u3U5HcbY022432@d06av12.portsmouth.uk.ibm.com> Hi It could be that GUI use in the "background" default command which use smb v1. Regard copy files from GPFS to Local HDD, it might be related to the local HDD settings. What is the speed transfer between the local HHD ? Cache Settings and so.. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: "Aslam Ghumra (IT Services, Facilities Management)" To: "gpfsug-discuss at spectrumscale.org" Date: 04/29/2016 07:07 PM Subject: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org Many thanks Yaron, after the change to disable encryption we were able to increase the speed via Ubuntu of copying files from the local desktop to our gpfs filestore with average speeds of 60Mbps. We also tried changing the mount from vers=3.0 to vers=2.1, which gave similar figures However, using the Ubuntu gui ( Unity ) the speed drops down to 7Mbps, however, we're not concerned as the user will use rsync / cp. The other issue is copying data from gpfs filestore to the local HDD, which resulted in 4Mbps. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL:
From Robert.Oesterlin at nuance.com Fri Apr 1 16:28:07 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 1 Apr 2016 15:28:07 +0000 Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties? In-Reply-To: References: Message-ID: <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com> Thanks for clearing that up! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client -------------- next part -------------- An HTML attachment was scrubbed...
URL: From S.J.Thompson at bham.ac.uk Fri Apr 1 16:34:42 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 1 Apr 2016 15:34:42 +0000 Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties? In-Reply-To: <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com> References: , <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com> Message-ID: The docs (https://www.ibm.com/support/knowledgecenter/#!/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs200.doc/bl1adv_encryption.htm) Do say at rest. It also says it protects against an untrusted node in multi cluster. I thought if you were root on such a box, whilst you cant read the file, you could delete it? Can we clear that up? Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com] Sent: 01 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? Thanks for clearing that up! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client ) From Robert.Oesterlin at nuance.com Fri Apr 1 16:35:28 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 1 Apr 2016 15:35:28 +0000 Subject: [gpfsug-discuss] Encryption - client performance penalties? Message-ID: Hit send too fast ? so the question is now ? what?s the penalty on the client side? Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Robert Oesterlin > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:28 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? Thanks for clearing that up! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mark.Bush at siriuscom.com Fri Apr 1 16:48:17 2016 From: Mark.Bush at siriuscom.com (Mark.Bush at siriuscom.com) Date: Fri, 1 Apr 2016 15:48:17 +0000 Subject: [gpfsug-discuss] ESS cabling guide Message-ID: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Is there such a thing as this? And if we want to use protocol nodes along with ESS could they use the same HMC as the ESS? Mark R. Bush | Solutions Architect Mobile: 210.237.8415 | mark.bush at siriuscom.com Sirius Computer Solutions | www.siriuscom.com 10100 Reunion Place, Suite 500, San Antonio, TX 78216 This message (including any attachments) is intended only for the use of the individual or entity to which it is addressed and may contain information that is non-public, proprietary, privileged, confidential, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited. 
This message may be viewed by parties at Sirius Computer Solutions other than those named in the message header. This message does not contain an official representation of Sirius Computer Solutions. If you have received this communication in error, notify Sirius Computer Solutions immediately and (i) destroy this message if a facsimile or (ii) delete this message immediately if this is an electronic communication. Thank you. Sirius Computer Solutions -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Apr 1 16:48:51 2016 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 1 Apr 2016 07:48:51 -0800 Subject: [gpfsug-discuss] Encryption - client performance penalties? In-Reply-To: References: Message-ID: <201604011549.u31Fn1u8016410@d01av03.pok.ibm.com> > From: "Oesterlin, Robert" > > Hit send too fast ? so the question is now ? what?s the penalty on > the client side? > Data is encrypted/decrypted on the path to/from the storage device -- it is in cleartext in the buffer pool. If you can read-ahead and write-behind you may not see the overhead of encryption. Random reads and synchronous writes will see it. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsallen at alcf.anl.gov Fri Apr 1 17:51:16 2016 From: bsallen at alcf.anl.gov (Allen, Benjamin S.) Date: Fri, 1 Apr 2016 16:51:16 +0000 Subject: [gpfsug-discuss] ESS cabling guide In-Reply-To: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> References: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Message-ID: Mark, There are SAS and networking diagrams in the ESS Install Procedure PDF that ships with the Spectrum Scale RAID download from FixCentral. You can use the same HMC as the ESS with any other Power hardware. There is a maximum of 48 hosts per HMC however. Depending on firmware levels, you may need to upgrade the HMC first for newer hardware. Ben > On Apr 1, 2016, at 10:48 AM, Mark.Bush at siriuscom.com wrote: > > Is there such a thing as this? And if we want to use protocol nodes along with ESS could they use the same HMC as the ESS? > > > Mark R. Bush | Solutions Architect > Mobile: 210.237.8415 | mark.bush at siriuscom.com > Sirius Computer Solutions | www.siriuscom.com > 10100 Reunion Place, Suite 500, San Antonio, TX 78216 > > This message (including any attachments) is intended only for the use of the individual or entity to which it is addressed and may contain information that is non-public, proprietary, privileged, confidential, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited. This message may be viewed by parties at Sirius Computer Solutions other than those named in the message header. This message does not contain an official representation of Sirius Computer Solutions. If you have received this communication in error, notify Sirius Computer Solutions immediately and (i) destroy this message if a facsimile or (ii) delete this message immediately if this is an electronic communication. Thank you. 
> > Sirius Computer Solutions > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From janfrode at tanso.net Fri Apr 1 20:04:58 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 1 Apr 2016 21:04:58 +0200 Subject: [gpfsug-discuss] Failure Group In-Reply-To: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se> References: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se> Message-ID: Hi :-) I seem to remember failure group 4001 was common at some point, but can't see why.. Maybe it was just the default when no failure group was specified ? Have you tried what happens if you use an empty failure group "::", does it default to "-1" on v3.4 -- or maybe "4001"? You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same. Pro's and cons of using another failure group.. Depends a bit on if they're using any replication within the filesystem. If all other NSDs are in failure group 4001 -- they can't be doing any replication, so it doesn't matter much. Only side effect I know of is that new block allocations will first go round robin over the failure groups, then round robin within the failure group, so unless you have similar amount of disks in the two failure groups the disk load might become a bit uneven. -jf On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load wrote: > Hi, > > I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw > Device Mapping. They just ran in to an issue with adding some nsd disks. > They claim that their current file system?s nsddisks are specified with > 4001 as the failure group. This is out of bounds, since the allowed range > is ?1>??>4000. > So, when they now try to add some new disks with mmcrnsd, with 4001 > specified, they get an error message. > > Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt > > [image: Description: cid:image001.png at 01D18B5D.FFCEFE30] > > > > > > His gpfsdisk.txt file looks like this. > > [image: Description: cid:image002.png at 01D18B5D.FFCEFE30] > > > > > > A listing of current disks show all as belonging to Failure group 4001 > > [image: Description: cid:image003.png at 01D18B5D.FFCEFE30] > > > > So, Why can?t he choose failure group 4001 when the existing disks are > member of that group ? > > If he creates a disk in an other failure group, what?s the pros and cons > with that ? I guess issues with replication not working as expected?. > > > Brgds > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > [image: CertTiv_sm] > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 446525C9-567E-4B06-ACA0-34865B35B109.png Type: image/png Size: 6144 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1].png Type: image/png Size: 6664 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: E895055E-B11B-47C3-BA29-E12D29D394FA.png Type: image/png Size: 8584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png Type: image/png Size: 3320 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7A01C40C-085E-430C-BA95-D4238AFE5602.png Type: image/png Size: 1648 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png Type: image/png Size: 5565 bytes Desc: not available URL: From jan.finnerman at load.se Fri Apr 1 20:16:11 2016 From: jan.finnerman at load.se (Jan Finnerman Load) Date: Fri, 1 Apr 2016 19:16:11 +0000 Subject: [gpfsug-discuss] Failure Group In-Reply-To: References: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se>, Message-ID: <5E3DB2EE-D644-475A-AABA-FE49BFB84D91@load.se> Ok, I checked the replication status with mmlsfs the output is: -r=1, -m=1, -R=2,-M=2, which means they don't use replication, although they could activate it. I told them that they could add the new disks to the file system with a different failure group e.g. 201 It shouldn't matter that much if they coexist with the 4001 disks, since they don't replicate. I'll follow up on Monday. MVH Jan Finnerman Konsult Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 1 apr. 2016 kl. 21:05 skrev Jan-Frode Myklebust >: Hi :-) I seem to remember failure group 4001 was common at some point, but can't see why.. Maybe it was just the default when no failure group was specified ? Have you tried what happens if you use an empty failure group "::", does it default to "-1" on v3.4 -- or maybe "4001"? You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same. Pro's and cons of using another failure group.. Depends a bit on if they're using any replication within the filesystem. If all other NSDs are in failure group 4001 -- they can't be doing any replication, so it doesn't matter much. Only side effect I know of is that new block allocations will first go round robin over the failure groups, then round robin within the failure group, so unless you have similar amount of disks in the two failure groups the disk load might become a bit uneven. -jf On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load > wrote: Hi, I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw Device Mapping. They just ran in to an issue with adding some nsd disks. They claim that their current file system's nsddisks are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1>-->4000. So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message. Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt His gpfsdisk.txt file looks like this. <7A01C40C-085E-430C-BA95-D4238AFE5602.png> A listing of current disks show all as belonging to Failure group 4001 <446525C9-567E-4B06-ACA0-34865B35B109.png> So, Why can't he choose failure group 4001 when the existing disks are member of that group ? If he creates a disk in an other failure group, what's the pros and cons with that ? I guess issues with replication not working as expected.... 
Brgds ///Jan Jan Finnerman Senior Technical consultant Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 446525C9-567E-4B06-ACA0-34865B35B109.png Type: image/png Size: 6144 bytes Desc: 446525C9-567E-4B06-ACA0-34865B35B109.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1].png Type: image/png Size: 6664 bytes Desc: CertPowerSystems_sm[1].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA.png Type: image/png Size: 8584 bytes Desc: E895055E-B11B-47C3-BA29-E12D29D394FA.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png Type: image/png Size: 3320 bytes Desc: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7A01C40C-085E-430C-BA95-D4238AFE5602.png Type: image/png Size: 1648 bytes Desc: 7A01C40C-085E-430C-BA95-D4238AFE5602.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png Type: image/png Size: 5565 bytes Desc: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png URL: From janfrode at tanso.net Sat Apr 2 20:27:09 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Sat, 02 Apr 2016 19:27:09 +0000 Subject: [gpfsug-discuss] ESS cabling guide In-Reply-To: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> References: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Message-ID: Share hmc is no problem, also I think it should be fairly easy to use the xcat-setup on the EMS to deploy and manage the protocol nodes. -jf fre. 1. apr. 2016 kl. 17.48 skrev Mark.Bush at siriuscom.com < Mark.Bush at siriuscom.com>: > Is there such a thing as this? And if we want to use protocol nodes along > with ESS could they use the same HMC as the ESS? > > > Mark R. Bush | Solutions Architect > Mobile: 210.237.8415 | mark.bush at siriuscom.com > Sirius Computer Solutions | www.siriuscom.com > 10100 Reunion Place, Suite 500, San Antonio, TX 78216 > > This message (including any attachments) is intended only for the use of > the individual or entity to which it is addressed and may contain > information that is non-public, proprietary, privileged, confidential, and > exempt from disclosure under applicable law. If you are not the intended > recipient, you are hereby notified that any use, dissemination, > distribution, or copying of this communication is strictly prohibited. This > message may be viewed by parties at Sirius Computer Solutions other than > those named in the message header. This message does not contain an > official representation of Sirius Computer Solutions. 
If you have received > this communication in error, notify Sirius Computer Solutions immediately > and (i) destroy this message if a facsimile or (ii) delete this message > immediately if this is an electronic communication. Thank you. > Sirius Computer Solutions > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From usa-principal at gpfsug.org Mon Apr 4 21:52:37 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Mon, 4 Apr 2016 16:52:37 -0400 Subject: [gpfsug-discuss] GPFS/Spectrum Scale Upcoming US Events - Save the Dates Message-ID: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Tue Apr 5 10:50:35 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Tue, 5 Apr 2016 09:50:35 +0000 Subject: [gpfsug-discuss] Excluding AFM Caches from mmbackup Message-ID: Hi All, Is there any intelligence yet for mmbackup to ignore AFM cache filesets? I guess a way to do this would be to dynamically re-write TSM include / exclude rules based on the extended attributes of the fileset; for example: 1. Scan the all the available filesets in the filesystem, determining which ones have the MISC_ATTRIBUTE=%P% set, 2. Lookup the junction points for the list of filesets returned in (1), 3. 
Write out EXCLUDE statements for TSM for each directory in (2), 4. Proceed with mmbackup using the new EXCLUDE rules. Presumably one could accomplish this by using the -P flag for mmbackup and writing your own rule to do this? But, maybe IBM could do this for me and put another flag on the mmbackup command :) Although... a blanket flag for ignoring AFM caches altogether might not be good if you want to backup changed files in a local-update cache. Anybody want to do this work for me? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From chair at spectrumscale.org Mon Apr 11 10:37:38 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 11 Apr 2016 10:37:38 +0100 Subject: [gpfsug-discuss] UK May Meeting Message-ID: Hi All, We are down to our last few places for the May user group meeting, if you are planning to come along, please do register: The draft agenda and registration for the day is at: http://www.eventbrite.com/e/spectrum-scale-gpfs-uk-user-group-spring-2016-t ickets-21724951916 If you have registered and aren't able to attend now, please do let us know so that we can free the slot for other members of the group. We also have 1 slot left on the agenda for a user talk, so if you have an interesting deployment or plans and are able to speak, please let me know! Thanks Simon From damir.krstic at gmail.com Mon Apr 11 14:15:30 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 11 Apr 2016 13:15:30 +0000 Subject: [gpfsug-discuss] backup and disaster recovery solutions Message-ID: We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 11 15:34:54 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 10:34:54 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: Message-ID: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> Do you want backups or periodic frozen snapshots of the file system? Backups can entail some level of version control, so that you or end-users can get files back on certain points in time, in case of accidental deletions. Besides 1.5PB is a lot of material, so you may not want to take full snapshots that often. In that case, a combination of daily incremental backups using TSM with GPFS's mmbackup can be a good option. TSM also does a very good job at controlling how material is distributed across multiple tapes, and that is something that requires a lot of micro-management if you want a home grown solution of rsync+LTFS. 
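As a rough illustration of that route, the mmbackup invocation usually looks something like the sketch below; the file system name, node list and Spectrum Protect server stanza are placeholders, and it assumes the TSM backup-archive client (dsmc) is already configured on the nodes named.

# Policy-driven incremental backup of /gpfs/ess0 to the TSM server stanza
# "tsm01", spreading scan and transfer work across two hypothetical NSD servers.
mmbackup /gpfs/ess0 -t incremental \
         -N nsd01,nsd02 \
         --tsm-servers tsm01 \
         -g /gpfs/ess0/.mmbackupWork -s /tmp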
On the other hand, you could use gpfs built-in tools such a mmapplypolicy to identify candidates for incremental backup, and send them to LTFS. Just more micro management, and you may have to come up with your own tool to let end-users restore their stuff, or you'll have to act on their behalf. Jaime Quoting Damir Krstic : > We have implemented 1.5PB ESS solution recently in our HPC environment. > Today we are kicking of backup and disaster recovery discussions so I was > wondering what everyone else is using for their backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life cycle > feature - so if the file is not touched for number of days, it's moved to a > tape (something like LTFS). > > Thanks in advance. > > DAmir > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jonathan at buzzard.me.uk Mon Apr 11 16:02:45 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 16:02:45 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> Message-ID: <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. Is there any other viable option other than TSM for backing up 1.5PB of data? All other backup software does not handle this at all well. > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > I was not aware of a way of letting end users restore their stuff from *backup* for any of the major backup software while respecting the file system level security of the original file system. If you let the end user have access to the backup they can restore any file to any location which is generally not a good idea. 
I do have a concept of creating a read only Fuse mounted file system from a TSM point in time synthetic backup, and then using the shadow copy feature of Samba to enable restores using the "Previous Versions" feature of windows file manager. I got as far as getting a directory tree you could browse through but then had an enforced change of jobs and don't have access to a TSM server any more to continue development. Note if anyone from IBM is listening that would be a super cool feature. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From makaplan at us.ibm.com Mon Apr 11 16:11:24 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 11 Apr 2016 11:11:24 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: Message-ID: <201604111511.u3BFBVbg015832@d03av02.boulder.ibm.com> Since you write " so if the file is not touched for number of days, it's moved to a tape" - that is what we call the HSM feature. This is additional function beyond backup. IBM has two implementations. (1) TSM/HSM now called IBM Spectrum Protect. http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management (2) HPSS http://www.hpss-collaboration.org/ The GPFS (Spectrum Scale File System) policy feature supports both, so that mmapplypolicy and GPFS policy rules can be used to perform accelerated metadata scans to identify which files should be migrated. Also, GPFS supports on-demand recall (on application reads) of data from long term storage (tape) to GPFS storage (disk or SSD). See also DMAPI. From: Damir Krstic To: gpfsug main discussion list Date: 04/11/2016 09:16 AM Subject: [gpfsug-discuss] backup and disaster recovery solutions Sent by: gpfsug-discuss-bounces at spectrumscale.org We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From pinto at scinet.utoronto.ca Mon Apr 11 16:18:47 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 11:18:47 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> Message-ID: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> I heard as recently as last Friday from IBM support/vendors/developers of GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) offers a GUI interface that is user centric, and will allow for unprivileged users to restore their own material via a newer WebGUI (one that also works with Firefox, Chrome and on linux, not only IE on Windows). Users may authenticate via AD or LDAP, and traverse only what they would be allowed to via linux permissions and ACLs. Jaime Quoting Jonathan Buzzard : > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: >> Do you want backups or periodic frozen snapshots of the file system? >> >> Backups can entail some level of version control, so that you or >> end-users can get files back on certain points in time, in case of >> accidental deletions. Besides 1.5PB is a lot of material, so you may >> not want to take full snapshots that often. In that case, a >> combination of daily incremental backups using TSM with GPFS's >> mmbackup can be a good option. TSM also does a very good job at >> controlling how material is distributed across multiple tapes, and >> that is something that requires a lot of micro-management if you want >> a home grown solution of rsync+LTFS. > > Is there any other viable option other than TSM for backing up 1.5PB of > data? All other backup software does not handle this at all well. > >> On the other hand, you could use gpfs built-in tools such a >> mmapplypolicy to identify candidates for incremental backup, and send >> them to LTFS. Just more micro management, and you may have to come up >> with your own tool to let end-users restore their stuff, or you'll >> have to act on their behalf. >> > > I was not aware of a way of letting end users restore their stuff from > *backup* for any of the major backup software while respecting the file > system level security of the original file system. If you let the end > user have access to the backup they can restore any file to any location > which is generally not a good idea. > > I do have a concept of creating a read only Fuse mounted file system > from a TSM point in time synthetic backup, and then using the shadow > copy feature of Samba to enable restores using the "Previous Versions" > feature of windows file manager. > > I got as far as getting a directory tree you could browse through but > then had an enforced change of jobs and don't have access to a TSM > server any more to continue development. > > Note if anyone from IBM is listening that would be a super cool feature. > > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. 
> > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jtucker at pixitmedia.com Mon Apr 11 16:23:06 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Mon, 11 Apr 2016 16:23:06 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: Hi Having just commissioned three TSM setups and one with HSM, I can say that's not available from the standard APAR updates at present - however it would be rather nice... The current release is 7.1.5 http://www-01.ibm.com/support/docview.wss?uid=swg24041864 Jez On Mon, Apr 11, 2016 at 4:18 PM, Jaime Pinto wrote: > I heard as recently as last Friday from IBM support/vendors/developers of > GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) offers a > GUI interface that is user centric, and will allow for unprivileged users > to restore their own material via a newer WebGUI (one that also works with > Firefox, Chrome and on linux, not only IE on Windows). Users may > authenticate via AD or LDAP, and traverse only what they would be allowed > to via linux permissions and ACLs. > > Jaime > > > Quoting Jonathan Buzzard : > > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: >> >>> Do you want backups or periodic frozen snapshots of the file system? >>> >>> Backups can entail some level of version control, so that you or >>> end-users can get files back on certain points in time, in case of >>> accidental deletions. Besides 1.5PB is a lot of material, so you may >>> not want to take full snapshots that often. In that case, a >>> combination of daily incremental backups using TSM with GPFS's >>> mmbackup can be a good option. TSM also does a very good job at >>> controlling how material is distributed across multiple tapes, and >>> that is something that requires a lot of micro-management if you want >>> a home grown solution of rsync+LTFS. >>> >> >> Is there any other viable option other than TSM for backing up 1.5PB of >> data? All other backup software does not handle this at all well. >> >> On the other hand, you could use gpfs built-in tools such a >>> mmapplypolicy to identify candidates for incremental backup, and send >>> them to LTFS. Just more micro management, and you may have to come up >>> with your own tool to let end-users restore their stuff, or you'll >>> have to act on their behalf. >>> >>> >> I was not aware of a way of letting end users restore their stuff from >> *backup* for any of the major backup software while respecting the file >> system level security of the original file system. If you let the end >> user have access to the backup they can restore any file to any location >> which is generally not a good idea. 
>> >> I do have a concept of creating a read only Fuse mounted file system >> from a TSM point in time synthetic backup, and then using the shadow >> copy feature of Samba to enable restores using the "Previous Versions" >> feature of windows file manager. >> >> I got as far as getting a directory tree you could browse through but >> then had an enforced change of jobs and don't have access to a TSM >> server any more to continue development. >> >> Note if anyone from IBM is listening that would be a super cool feature. >> >> >> JAB. >> >> -- >> Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk >> Fife, United Kingdom. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> > > --- > Jaime Pinto > SciNet HPC Consortium - Compute/Calcul Canada > www.scinet.utoronto.ca - www.computecanada.org > University of Toronto > 256 McCaul Street, Room 235 > Toronto, ON, M5T1W5 > P: 416-978-2755 > C: 416-505-1477 > > ---------------------------------------------------------------- > This message was sent using IMP at SciNet Consortium, University of > Toronto. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominic.mueller at de.ibm.com Mon Apr 11 16:26:45 2016 From: dominic.mueller at de.ibm.com (Dominic Mueller-Wicke01) Date: Mon, 11 Apr 2016 17:26:45 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 51, Issue 9 In-Reply-To: References: Message-ID: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> Spectrum Protect backup (under the hood of mmbackup) and Spectrum Protect for Space Management (HSM) can be combined on the same data. There are some valuable integration topics between the products that can reduce the overall network traffic if using backup and HSM on the same files. With the combination of the products you have the ability to free file system space from cold data and migrate them out to tape and to have several versions of frequently used files in backup in the same file system. Greetings, Dominic. 
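To make the earlier "not touched for a number of days goes out to tape" idea concrete, the migration side is normally expressed as a policy run by mmapplypolicy. The sketch below is illustrative only: the pool names, node list and especially the EXTERNAL POOL EXEC script path are placeholders that depend on how Spectrum Protect for Space Management is installed locally.

# Hypothetical policy file: push files with no access for ~90 days to the HSM pool.
cat > /tmp/hsm-migrate.pol <<'EOF'
RULE EXTERNAL POOL 'hsm' EXEC '/path/to/local/hsm-exec-script'
RULE 'cold2tape' MIGRATE FROM POOL 'system' TO POOL 'hsm'
     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 90
EOF

# Dry run first with -I test, then run again without it to migrate for real.
mmapplypolicy /gpfs/ess0 -P /tmp/hsm-migrate.pol -N nsd01,nsd02 -I test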
______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com Vorsitzende des Aufsichtsrats: Martina Koederitz; Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen; Registergericht: Amtsgericht Stuttgart, HRB 243294 From: gpfsug-discuss-request at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Date: 11.04.2016 17:11 Subject: gpfsug-discuss Digest, Vol 51, Issue 9 Sent by: gpfsug-discuss-bounces at spectrumscale.org Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. backup and disaster recovery solutions (Damir Krstic) 2. Re: backup and disaster recovery solutions (Jaime Pinto) 3. Re: backup and disaster recovery solutions (Jonathan Buzzard) 4. Re: backup and disaster recovery solutions (Marc A Kaplan) ----- Message from Damir Krstic on Mon, 11 Apr 2016 13:15:30 +0000 ----- To: gpfsug main discussion list Subject: [gpfsug-discuss] backup and disaster recovery solutions We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir ----- Message from Jaime Pinto on Mon, 11 Apr 2016 10:34:54 -0400 ----- To: gpfsug main discussion list , Damir Krstic Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions Do you want backups or periodic frozen snapshots of the file system? Backups can entail some level of version control, so that you or end-users can get files back on certain points in time, in case of accidental deletions. Besides 1.5PB is a lot of material, so you may not want to take full snapshots that often. In that case, a combination of daily incremental backups using TSM with GPFS's mmbackup can be a good option. TSM also does a very good job at controlling how material is distributed across multiple tapes, and that is something that requires a lot of micro-management if you want a home grown solution of rsync+LTFS. On the other hand, you could use gpfs built-in tools such a mmapplypolicy to identify candidates for incremental backup, and send them to LTFS. Just more micro management, and you may have to come up with your own tool to let end-users restore their stuff, or you'll have to act on their behalf. Jaime Quoting Damir Krstic : > We have implemented 1.5PB ESS solution recently in our HPC environment. > Today we are kicking of backup and disaster recovery discussions so I was > wondering what everyone else is using for their backup? 
> > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life cycle > feature - so if the file is not touched for number of days, it's moved to a > tape (something like LTFS). > > Thanks in advance. > > DAmir > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. ----- Message from Jonathan Buzzard on Mon, 11 Apr 2016 16:02:45 +0100 ----- To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. Is there any other viable option other than TSM for backing up 1.5PB of data? All other backup software does not handle this at all well. > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > I was not aware of a way of letting end users restore their stuff from *backup* for any of the major backup software while respecting the file system level security of the original file system. If you let the end user have access to the backup they can restore any file to any location which is generally not a good idea. I do have a concept of creating a read only Fuse mounted file system from a TSM point in time synthetic backup, and then using the shadow copy feature of Samba to enable restores using the "Previous Versions" feature of windows file manager. I got as far as getting a directory tree you could browse through but then had an enforced change of jobs and don't have access to a TSM server any more to continue development. Note if anyone from IBM is listening that would be a super cool feature. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. ----- Message from "Marc A Kaplan" on Mon, 11 Apr 2016 11:11:24 -0400 ----- To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions Since you write "so if the file is not touched for number of days, it's moved to a tape" - that is what we call the HSM feature. This is additional function beyond backup. IBM has two implementations. 
(1) TSM/HSM now called IBM Spectrum Protect. http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management (2) HPSS http://www.hpss-collaboration.org/ The GPFS (Spectrum Scale File System) policy feature supports both, so that mmapplypolicy and GPFS policy rules can be used to perform accelerated metadata scans to identify which files should be migrated. Also, GPFS supports on-demand recall (on application reads) of data from long term storage (tape) to GPFS storage (disk or SSD). See also DMAPI. Marc A Kaplan From: Damir Krstic To: gpfsug main discussion list Date: 04/11/2016 09:16 AM Subject: [gpfsug-discuss] backup and disaster recovery solutions Sent by: gpfsug-discuss-bounces at spectrumscale.org We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0E436792.gif Type: image/gif Size: 21994 bytes Desc: not available URL: From jez.tucker at gpfsug.org Mon Apr 11 16:31:52 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Mon, 11 Apr 2016 16:31:52 +0100 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 51, Issue 9 In-Reply-To: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> References: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> Message-ID: <570BC368.9090307@gpfsug.org> Dominic, Speculatively, when is TSM converting from DMAPI to Light Weight Events? Is there an up-to-date slide share we can put on the UG website regarding the 7.1.11 / public roadmap? Jez On 11/04/16 16:26, Dominic Mueller-Wicke01 wrote: > > Spectrum Protect backup (under the hood of mmbackup) and Spectrum > Protect for Space Management (HSM) can be combined on the same data. > There are some valuable integration topics between the products that > can reduce the overall network traffic if using backup and HSM on the > same files. With the combination of the products you have the ability > to free file system space from cold data and migrate them out to tape > and to have several versions of frequently used files in backup in the > same file system. > > Greetings, Dominic. 
> > ______________________________________________________________________________________________________________ > Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical > Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com > > Vorsitzende des Aufsichtsrats: Martina Koederitz; Gesch?ftsf?hrung: > Dirk Wittkopp > Sitz der Gesellschaft: B?blingen; Registergericht: Amtsgericht > Stuttgart, HRB 243294 > > Inactive hide details for gpfsug-discuss-request---11.04.2016 > 17:11:55---Send gpfsug-discuss mailing list submissions to > gpfsugpfsug-discuss-request---11.04.2016 17:11:55---Send > gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > From: gpfsug-discuss-request at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Date: 11.04.2016 17:11 > Subject: gpfsug-discuss Digest, Vol 51, Issue 9 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > ------------------------------------------------------------------------ > > > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > Today's Topics: > > 1. backup and disaster recovery solutions (Damir Krstic) > 2. Re: backup and disaster recovery solutions (Jaime Pinto) > 3. Re: backup and disaster recovery solutions (Jonathan Buzzard) > 4. Re: backup and disaster recovery solutions (Marc A Kaplan) > > ----- Message from Damir Krstic on Mon, 11 > Apr 2016 13:15:30 +0000 ----- > *To:* > gpfsug main discussion list > *Subject:* > [gpfsug-discuss] backup and disaster recovery solutions > > We have implemented 1.5PB ESS solution recently in our HPC > environment. Today we are kicking of backup and disaster recovery > discussions so I was wondering what everyone else is using for their > backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life > cycle feature - so if the file is not touched for number of days, it's > moved to a tape (something like LTFS). > > Thanks in advance. > > DAmir > ----- Message from Jaime Pinto on Mon, 11 > Apr 2016 10:34:54 -0400 ----- > *To:* > gpfsug main discussion list , Damir > Krstic > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. 
> > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > > Jaime > > > > > Quoting Damir Krstic : > > > We have implemented 1.5PB ESS solution recently in our HPC environment. > > Today we are kicking of backup and disaster recovery discussions so > I was > > wondering what everyone else is using for their backup? > > > > In our old storage environment we simply rsync-ed home and software > > directories and projects were not backed up. > > > > With ESS we are looking for more of a GPFS based backup solution - > > something to tape possibly and also something that will have life cycle > > feature - so if the file is not touched for number of days, it's > moved to a > > tape (something like LTFS). > > > > Thanks in advance. > > > > DAmir > > > > > > > > > ************************************ > TELL US ABOUT YOUR SUCCESS STORIES > http://www.scinethpc.ca/testimonials > ************************************ > --- > Jaime Pinto > SciNet HPC Consortium - Compute/Calcul Canada > www.scinet.utoronto.ca - www.computecanada.org > University of Toronto > 256 McCaul Street, Room 235 > Toronto, ON, M5T1W5 > P: 416-978-2755 > C: 416-505-1477 > > ---------------------------------------------------------------- > This message was sent using IMP at SciNet Consortium, University of > Toronto. > > > > > ----- Message from Jonathan Buzzard on Mon, > 11 Apr 2016 16:02:45 +0100 ----- > *To:* > gpfsug-discuss at spectrumscale.org > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > > Do you want backups or periodic frozen snapshots of the file system? > > > > Backups can entail some level of version control, so that you or > > end-users can get files back on certain points in time, in case of > > accidental deletions. Besides 1.5PB is a lot of material, so you may > > not want to take full snapshots that often. In that case, a > > combination of daily incremental backups using TSM with GPFS's > > mmbackup can be a good option. TSM also does a very good job at > > controlling how material is distributed across multiple tapes, and > > that is something that requires a lot of micro-management if you want > > a home grown solution of rsync+LTFS. > > Is there any other viable option other than TSM for backing up 1.5PB of > data? All other backup software does not handle this at all well. > > > On the other hand, you could use gpfs built-in tools such a > > mmapplypolicy to identify candidates for incremental backup, and send > > them to LTFS. Just more micro management, and you may have to come up > > with your own tool to let end-users restore their stuff, or you'll > > have to act on their behalf. > > > > I was not aware of a way of letting end users restore their stuff from > *backup* for any of the major backup software while respecting the file > system level security of the original file system. If you let the end > user have access to the backup they can restore any file to any location > which is generally not a good idea. > > I do have a concept of creating a read only Fuse mounted file system > from a TSM point in time synthetic backup, and then using the shadow > copy feature of Samba to enable restores using the "Previous Versions" > feature of windows file manager. 
> > I got as far as getting a directory tree you could browse through but > then had an enforced change of jobs and don't have access to a TSM > server any more to continue development. > > Note if anyone from IBM is listening that would be a super cool feature. > > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. > > > > > ----- Message from "Marc A Kaplan" on Mon, 11 > Apr 2016 11:11:24 -0400 ----- > *To:* > gpfsug main discussion list > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > Since you write "so if the file is not touched for number of days, > it's moved to a tape" - > that is what we call the HSM feature. This is additional function > beyond backup. IBM has two implementations. > > (1) TSM/HSM now called IBM Spectrum Protect. > _http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management_ > > (2) HPSS _http://www.hpss-collaboration.org/_ > > The GPFS (Spectrum Scale File System) policy feature supports both, so > that mmapplypolicy and GPFS policy rules can be used to perform > accelerated metadata scans to identify which files should be migrated. > > Also, GPFS supports on-demand recall (on application reads) of data > from long term storage (tape) to GPFS storage (disk or SSD). See also > DMAPI. > > > > Marc A Kaplan > > > > From: Damir Krstic > To: gpfsug main discussion list > Date: 04/11/2016 09:16 AM > Subject: [gpfsug-discuss] backup and disaster recovery solutions > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------------------------------------------------ > > > > We have implemented 1.5PB ESS solution recently in our HPC > environment. Today we are kicking of backup and disaster recovery > discussions so I was wondering what everyone else is using for their > backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life > cycle feature - so if the file is not touched for number of days, it's > moved to a tape (something like LTFS). > > Thanks in advance. > > DAmir _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org_ > __http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From makaplan at us.ibm.com Mon Apr 11 16:50:03 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 11 Apr 2016 11:50:03 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca><1460386965.19299.108.camel@buzzard.phy.strath.ac.uk><20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> IBM HSM products have always supported unprivileged, user triggered recall of any file. I am not familiar with any particular GUI, but from the CLI, it's easy enough: dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # pulling the first few blocks will trigger a complete recall if the file happens to be on HSM We also had IBM HSM for mainframe MVS, years and years ago, which is now called DFHSM for z/OS. (I remember using this from TSO...) If the file has been migrated to a tape archive, accessing the file will trigger a tape mount which can take a while, depending on how fast your tape mounting (robot?), operates and what other requests may be queued ahead of yours....! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Mon Apr 11 17:01:19 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 17:01:19 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <1460390479.19299.125.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 11:50 -0400, Marc A Kaplan wrote: > IBM HSM products have always supported unprivileged, user triggered > recall of any file. I am not familiar with any particular GUI, but > from the CLI, it's easy enough: Sure, but HSM != Backup. Right now secure aka with the appropriate level of privilege recall of *BACKUPS* ain't supported to my knowledge. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jez.tucker at gpfsug.org Mon Apr 11 17:01:37 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Mon, 11 Apr 2016 17:01:37 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <570BCA61.4010900@gpfsug.org> Yes, but since the dsmrootd in 6.3.4+ removal be aware that several commands now require sudo: jtucker at tsm-demo-01:~$ dsmls /mmfs1/afile IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 7, Release 1, Level 4.4 Client date/time: 11/04/16 16:58:18 (c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved. ActS ResS ResB FSt FName ANS9505E dsmls: cannot initialize the DMAPI interface. 
Reason: Operation not permitted jtucker at tsm-demo-01:~$ sudo dsmls /mmfs1/afile [sudo] password for jtucker: IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 7, Release 1, Level 4.4 Client date/time: 11/04/16 16:58:25 (c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved. ActS ResS ResB FSt FName 8 8 0 p afile Though, yes, a straight cat of the file as an unpriv user works fine. Jez On 11/04/16 16:50, Marc A Kaplan wrote: > IBM HSM products have always supported unprivileged, user triggered > recall of any file. I am not familiar with any particular GUI, but > from the CLI, it's easy enough: > > dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # > pulling the first few blocks will trigger a complete recall if the > file happens to be on HSM > > We also had IBM HSM for mainframe MVS, years and years ago, which is > now called DFHSM for z/OS. (I remember using this from TSO...) > > If the file has been migrated to a tape archive, accessing the file > will trigger a tape mount which can take a while, depending on how > fast your tape mounting (robot?), operates and what other requests may > be queued ahead of yours....! > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 11 17:03:00 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 12:03:00 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca><1460386965.19299.108.camel@buzzard.phy.strath.ac.uk><20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <20160411120300.171861d6i1iu1ltg@support.scinet.utoronto.ca> Hi Mark Personally I'm aware of the HSM features. However I was specifically referring to TSM Backup restore. I was told the new GUI for unprivileged users looks identical to what root would see, but unprivileged users would only be able to see material for which they have read permissions, and restore only to paths they have write permissions. The GUI is supposed to be a difference platform then the java/WebSphere like we have seen in the past to manage TSM. I'm looking forward to it as well. Jaime Quoting Marc A Kaplan : > IBM HSM products have always supported unprivileged, user triggered recall > of any file. I am not familiar with any particular GUI, but from the CLI, > it's easy enough: > > dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # > pulling the first few blocks will trigger a complete recall if the file > happens to be on HSM > > We also had IBM HSM for mainframe MVS, years and years ago, which is now > called DFHSM for z/OS. (I remember using this from TSO...) > > If the file has been migrated to a tape archive, accessing the file will > trigger a tape mount which can take a while, depending on how fast your > tape mounting (robot?), operates and what other requests may be queued > ahead of yours....! 
> > > > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jonathan at buzzard.me.uk Mon Apr 11 17:03:04 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 17:03:04 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: <1460390584.19299.127.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 11:18 -0400, Jaime Pinto wrote: > I heard as recently as last Friday from IBM support/vendors/developers > of GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) > offers a GUI interface that is user centric, and will allow for > unprivileged users to restore their own material via a newer WebGUI > (one that also works with Firefox, Chrome and on linux, not only IE on > Windows). Users may authenticate via AD or LDAP, and traverse only > what they would be allowed to via linux permissions and ACLs. > Hum, if they are they are not exactly advertising the feature or my Google foo is in extremely short supply today. Do you have a pointer to this on the web anywhere? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From mweil at genome.wustl.edu Mon Apr 11 17:05:17 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Mon, 11 Apr 2016 11:05:17 -0500 Subject: [gpfsug-discuss] GPFS 4.2 SMB with IPA Message-ID: <570BCB3D.1020602@genome.wustl.edu> Hello all, Is there any good documentation out there to integrate IPA with CES? Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From janfrode at tanso.net Mon Apr 11 17:43:21 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 11 Apr 2016 16:43:21 +0000 Subject: [gpfsug-discuss] GPFS 4.2 SMB with IPA In-Reply-To: <570BCB3D.1020602@genome.wustl.edu> References: <570BCB3D.1020602@genome.wustl.edu> Message-ID: As IPA is just an LDAP directory + kerberos, I believe you can follow example 7 in the mmuserauth manual. Another way would be to install your CES nodes into your domain outside of GPFS, and use the userdefined mmuserauth config. That's how I would have preferred to do it in an IPA managed linux environment. 
But, I believe there are still some problems with it overwriting /etc/krb5.keytab and /etc/nsswitch.conf, and stopping "sssd" unnecessarily on mmshutdown. So you might want to make the keytab and nsswitch immutable (chatter +i), and have some logic in f.ex. /var/mmfs/etc/mmfsup that restarts or somehow makes sure sssd is running. Oh.. and you'll need a shared NFS service principal in the krb5.keytab on all nodes to be able to use failover addresses.. and same for samba (which I think hides the ticket in /var/lib/samba/private/netlogon_creds_cli.tdb). -jf man. 11. apr. 2016 kl. 18.05 skrev Matt Weil : > Hello all, > > Is there any good documentation out there to integrate IPA with CES? > > Thanks > > Matt > > ____ > This email message is a private communication. The information > transmitted, including attachments, is intended only for the person or > entity to which it is addressed and may contain confidential, privileged, > and/or proprietary material. Any review, duplication, retransmission, > distribution, or other use of, or taking of any action in reliance upon, > this information by persons or entities other than the intended recipient > is unauthorized by the sender and is prohibited. If you have received this > message in error, please contact the sender immediately by return email and > delete the original message from all computer systems. Thank you. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.roland.pabel at gmail.com Tue Apr 12 09:03:34 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Tue, 12 Apr 2016 10:03:34 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes Message-ID: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> Hi everyone, we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is fairly new, we are still in the testing phase. A few days ago, we had some problems in the cluster which seemed to have started with deadlocks on a small number of nodes. To be better prepared for this scenario, I would like to install a callback for Event deadlockDetected. But this is a local event and the callback is executed on the client nodes, from which I cannot even send an email. Is it possible using mm-commands to instead delegate the callback to the servers (Nodeclass nsdNodes)? I guess it would be possible to use a callback of the form "ssh nsd0 /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 being available. The mm-command style "-N nsdNodes" would more reliable in my opinion, because it would be run on all servers. On the servers, I can then check to actually only execute the script on the cluster manager. Thanks Roland -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Tue Apr 12 12:54:39 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 12 Apr 2016 11:54:39 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> Message-ID: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Some general thoughts on ?deadlocks? and automated deadlock detection. I personally don?t like the term ?deadlock? 
as it implies a condition that won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC waiter? over a certain threshold. RPCs that wait on certain events can and do occur and they can take some time to complete. This is not necessarily a condition that is a problem, but you should be looking into them. GPFS does have automated deadlock detection and collection, but in the early releases it was ? well.. it?s not very ?robust?. With later releases (4.2) it?s MUCH better. I personally don?t rely on it because in larger clusters it can be too aggressive and depending on what?s really going on it can make things worse. This statement is my opinion and it doesn?t mean it?s not a good thing to have. :-) On the point of what commands to execute and what to collect ? be careful about long running callback scripts and executing commands on other nodes. Depending on what the issues is, you could end up causing a deadlock or making it worse. Some basic data collection, local to the node with the long RPC waiter is a good thing. Test them well before deploying them. And make sure that you don?t conflict with the automated collections. (which you might consider turning off) For my larger clusters, I dump the cluster waiters on a regular basis (once a minute: mmlsnode ?N waiters ?L), count the types and dump them into a database for graphing via Grafana. This doesn?t help me with true deadlock alerting, but it does give me insight into overall cluster behavior. If I see large numbers of long waiters I will (usually) go and investigate them on a cases by case basis. If you have large numbers of long RPC waiters on an ongoing basis, it's an indication of a larger problem that should be investigated. A few here and there is not a cause for real alarm in my experience. Last ? if you have a chance to upgrade to 4.1.1 or 4.2, I would encourage you to do so as the deadlock detection has improved quite a bit. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid robert.oesterlin at nuance.com From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Tuesday, April 12, 2016 at 3:03 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Executing Callbacks on other Nodes Hi everyone, we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is fairly new, we are still in the testing phase. A few days ago, we had some problems in the cluster which seemed to have started with deadlocks on a small number of nodes. To be better prepared for this scenario, I would like to install a callback for Event deadlockDetected. But this is a local event and the callback is executed on the client nodes, from which I cannot even send an email. Is it possible using mm-commands to instead delegate the callback to the servers (Nodeclass nsdNodes)? I guess it would be possible to use a callback of the form "ssh nsd0 /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 being available. The mm-command style "-N nsdNodes" would more reliable in my opinion, because it would be run on all servers. On the servers, I can then check to actually only execute the script on the cluster manager. Thanks Roland -- Dr. 
Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=CwIFAw&c=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY&r=LPDewt1Z4o9eKc86MXmhqX-45Cz1yz1ylYELF9olLKU&m=c7jzNm-H6SdZMztP1xkwgySivoe4FlOcI2pS2SCJ8K8&s=AfohxS7tz0ky5C8ImoufbQmQpdwpo4wEO7cSCzHPCD0&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.roland.pabel at gmail.com Tue Apr 12 14:25:33 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Tue, 12 Apr 2016 15:25:33 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> Hi Bob, thanks for your remarks. I already understood that deadlocks are more timeouts than "tangled up balls of code". I was not (yet) planning on changing the whole routine, I'd just like to get a notice when something unexpected happens in the cluster. So, first, I just want to write these notices into a file and email it once it reaches a certain size. >From what you are saying, it sounds like it is worth upgrading to 4.1.1.x . We are planning a maintenance next month, I'll try to get this into the todo- list. Upgrading beyond this is going require a longer preparation, unless the prerequisite of "RHEL 6.4 or later" as stated on the IBM FAQ is irrelevant. Our clients still run RHEL 6.3. Best regards, Roland > Some general thoughts on ?deadlocks? and automated deadlock detection. > > I personally don?t like the term ?deadlock? as it implies a condition that > won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC > waiter? over a certain threshold. RPCs that wait on certain events can and > do occur and they can take some time to complete. This is not necessarily a > condition that is a problem, but you should be looking into them. > GPFS does have automated deadlock detection and collection, but in the early > releases it was ? well.. it?s not very ?robust?. With later releases (4.2) > it?s MUCH better. I personally don?t rely on it because in larger clusters > it can be too aggressive and depending on what?s really going on it can > make things worse. This statement is my opinion and it doesn?t mean it?s > not a good thing to have. :-) > On the point of what commands to execute and what to collect ? be careful > about long running callback scripts and executing commands on other nodes. > Depending on what the issues is, you could end up causing a deadlock or > making it worse. Some basic data collection, local to the node with the > long RPC waiter is a good thing. Test them well before deploying them. And > make sure that you don?t conflict with the automated collections. (which > you might consider turning off) > For my larger clusters, I dump the cluster waiters on a regular basis (once > a minute: mmlsnode ?N waiters ?L), count the types and dump them into a > database for graphing via Grafana. This doesn?t help me with true deadlock > alerting, but it does give me insight into overall cluster behavior. 
If I > see large numbers of long waiters I will (usually) go and investigate them > on a cases by case basis. If you have large numbers of long RPC waiters on > an ongoing basis, it's an indication of a larger problem that should be > investigated. A few here and there is not a cause for real alarm in my > experience. > Last ? if you have a chance to upgrade to 4.1.1 or 4.2, I would encourage > you to do so as the deadlock detection has improved quite a bit. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > robert.oesterlin at nuance.com > > From: > ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > > > Date: Tuesday, April 12, 2016 at 3:03 AM > To: gpfsug main discussion list > > > Subject: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi everyone, > > we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is > fairly new, we are still in the testing phase. A few days ago, we had some > problems in the cluster which seemed to have started with deadlocks on a > small number of nodes. To be better prepared for this scenario, I would > like to install a callback for Event deadlockDetected. But this is a local > event and the callback is executed on the client nodes, from which I cannot > even send an email. > > Is it possible using mm-commands to instead delegate the callback to the > servers (Nodeclass nsdNodes)? > > I guess it would be possible to use a callback of the form "ssh nsd0 > /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 > being available. The mm-command style "-N nsdNodes" would more reliable in > my opinion, because it would be run on all servers. On the servers, I can > then check to actually only execute the script on the cluster manager. > Thanks > > Roland > -- > Dr. Roland Pabel > Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) > Weyertal 121, Raum 3.07 > D-50931 K?ln > > Tel.: +49 (221) 470-89589 > E-Mail: pabel at uni-koeln.de > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listi > nfo_gpfsug-2Ddiscuss&d=CwIFAw&c=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY& > r=LPDewt1Z4o9eKc86MXmhqX-45Cz1yz1ylYELF9olLKU&m=c7jzNm-H6SdZMztP1xkwgySivoe4 > FlOcI2pS2SCJ8K8&s=AfohxS7tz0ky5C8ImoufbQmQpdwpo4wEO7cSCzHPCD0&e= -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Tue Apr 12 15:09:10 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 12 Apr 2016 14:09:10 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> Message-ID: <59C81E1E-59CC-40C4-8A7E-73CC88F0741F@nuance.com> Hi Roland I ran into that issue as well ? if you are running 6.3 you need to update to get to the later levels. RH 6.3 is getting a bit dated, so an upgrade might be a good idea ? but I all too well how hard it is to push through those updates! 
Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Tuesday, April 12, 2016 at 8:25 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi Bob, thanks for your remarks. I already understood that deadlocks are more timeouts than "tangled up balls of code". I was not (yet) planning on changing the whole routine, I'd just like to get a notice when something unexpected happens in the cluster. So, first, I just want to write these notices into a file and email it once it reaches a certain size. From what you are saying, it sounds like it is worth upgrading to 4.1.1.x . We are planning a maintenance next month, I'll try to get this into the todo- list. Upgrading beyond this is going require a longer preparation, unless the prerequisite of "RHEL 6.4 or later" as stated on the IBM FAQ is irrelevant. Our clients still run RHEL 6.3. Best regards, Roland -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Apr 12 23:01:40 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 12 Apr 2016 18:01:40 -0400 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <201604122201.u3CM1o7d031628@d01av02.pok.ibm.com> My understanding is (someone will correct me if I'm wrong) ... GPFS does not have true deadlock detection. As you say it has time outs. The argument is: As a practical matter, it makes not much difference to a sysadmin or user -- if things are gummed up "too long" they start to smell like a deadlock, so we may as well intervene as though there were a true technical deadlock. A genuine true deadlock is a situation where things are gummed up, there is no progress, and one can prove that there will be no progress, no matter how long one waits. E.g. Classically, you have locked resource A and I have locked resource B and now I decide I need resource A and I am waiting indefinitely long for that. And you have decided you need resouce B and you are waiting indefinitely for that. We are then deadlocked. Deadlock can occur on a single node or over multiple nodes. Technically it may be possible to execute a deadlock detection protocol that would identify cyclic, deadlocking dependencies, but it was decided that, for GPFS, it would be more practical to detect "very long waiters"... From: "Oesterlin, Robert" Some general thoughts on ?deadlocks? and automated deadlock detection. I personally don?t like the term ?deadlock? as it implies a condition that won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC waiter? over a certain threshold. RPCs that wait on certain events can and do occur and they can take some time to complete. This is not necessarily a condition that is a problem, but you should be looking into them. GPFS does have automated deadlock detection and collection, but in the early releases it was ? well.. it?s not very ?robust?. With later releases (4.2) it?s MUCH better. I personally don?t rely on it because in larger clusters it can be too aggressive and depending on what?s really going on it can make things worse. This statement is my opinion and it doesn?t mean it?s not a good thing to have. :-) ... 
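Going back to Roland's original plan of simply writing a note somewhere when deadlockDetected
fires, a rough and untested sketch is below. The helper script name, log path and callback
identifier are invented for the example, and %myNode / %eventName are the substitution
variables as I remember them from the mmaddcallback man page, so check before relying on them:

    #!/bin/bash
    # /root/bin/deadlock-note.sh  (hypothetical name/path)
    # $1 = node name, $2 = event name, passed in via --parms below.
    # It only appends one line to a file on the shared filesystem, so nodes
    # that cannot send mail still leave a trace the NSD servers can watch
    # and mail out later.
    echo "$(date '+%F %T') $1 $2" >> /gpfs/fs1/.admin/deadlock.log

Then register it once, as root, for the local deadlockDetected event:

    mmaddcallback deadlockNote --command /root/bin/deadlock-note.sh \
        --event deadlockDetected --async --parms "%myNode %eventName"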
-------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 14 15:19:58 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 14 Apr 2016 15:19:58 +0100 Subject: [gpfsug-discuss] May user group, call for help! Message-ID: Hi All, For the UK May user group meeting, we are hoping to be able to film the sessions so that we can post as many as talks as possible (permission permitting!) online after the event. In order to do this, we require some kit to film the sessions with ... If you are attending the day and have a video camera that we might be able to borrow, please let me or Claire know! If we don't get support from the community then we won't be able to film and share the talks afterwards! So if you are coming along and have something you'd be happy for us to use for the two days, please do let us know! Thanks Simon (UK Group Chair) From Robert.Oesterlin at nuance.com Thu Apr 14 19:10:20 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 18:10:20 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore Message-ID: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> I?m getting these messages (repeating) in the mmfslog after I restored an NSD node ( relocated to a new physical system) with mmsddrestore - the server seems normal otherwise - what should I do? Thu Apr 14 13:44:48.800 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.1' failed (2) Thu Apr 14 13:44:48.801 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) Thu Apr 14 13:44:48.802 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.2' failed (2) Thu Apr 14 13:44:48.803 2016: [N] Load both paxos local files bad Thu Apr 14 13:44:48.804 2016: [N] Open /var/mmfs/ccr/ccr.paxos.1 failed (2) Thu Apr 14 13:44:48.805 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.1' failed (2) Thu Apr 14 13:44:48.806 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) Thu Apr 14 13:44:48.807 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.2' failed (2) Thu Apr 14 13:44:48.808 2016: [N] Load both paxos local files bad Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Thu Apr 14 19:22:41 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 14 Apr 2016 18:22:41 +0000 Subject: [gpfsug-discuss] GPFS 4.2 and 4.1 in multi-cluster environment Message-ID: <7635681D-31ED-461B-82A0-F17DA19DDFF4@vanderbilt.edu> Hi All, We have a multi-cluster environment consisting of: 1) a ?traditional? HPC cluster running on commodity hardware, and 2) a DDN based cluster which is mounted to the HPC cluster and also exports to researchers around campus using both CNFS and SAMBA / CTDB. Both of these cluster are currently running GPFS 4.1.0.8 efix 21. We are considering doing upgrades in May. I would like to take the HPC cluster to GPFS 4.2.0.x not just because that?s the current version, but to get some of the QoS features introduced in 4.2. However, it may not be possible to take the DDN cluster to GPFS 4.2. I?ve got another inquiry in to them about their plans, but the latest information I have is that they only support up thru GPFS 4.1.1.x. I know that it should be possible to run with the HPC cluster at GPFS 4.2.0.x and the DDN cluster at 4.1.1.x ? my question is - is anyone actually doing that? Any suggestions / warnings? 
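As an aside on the QoS feature mentioned above, the 4.2 interface looks roughly like the
following; "gpfs0" is a placeholder device name and the exact option spelling should be
checked against the mmchqos man page before use:

    # throttle maintenance work (restripe, rebalance, policy scans, ...)
    # while leaving normal file system traffic unlimited
    mmchqos gpfs0 --enable pool=system,maintenance=300IOPS,other=unlimited

    # report the measured IOPS per QoS class
    mmlsqos gpfs0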
I should mention that this question is motivated by the fact that a couple of years ago when both clusters were running GPFS 3.5.0.x, we got them out of sync on the PTF levels (I think the HPC cluster was at PTF 19 and the DDN cluster at PTF 11) and it caused problems. Because of that, we have tried to keep them in sync as much as possible. Thanks in advance, all? ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu Apr 14 20:33:17 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 14 Apr 2016 19:33:17 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> Message-ID: I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. -jf tor. 14. apr. 2016 kl. 20.10 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > I?m getting these messages (repeating) in the mmfslog after I restored an > NSD node ( relocated to a new physical system) with mmsddrestore - the > server seems normal otherwise - what should I do? > > Thu Apr 14 13:44:48.800 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.1' failed (2) > Thu Apr 14 13:44:48.801 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) > Thu Apr 14 13:44:48.802 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.2' failed (2) > Thu Apr 14 13:44:48.803 2016: [N] Load both paxos local files bad > Thu Apr 14 13:44:48.804 2016: [N] Open /var/mmfs/ccr/ccr.paxos.1 failed (2) > Thu Apr 14 13:44:48.805 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.1' failed (2) > Thu Apr 14 13:44:48.806 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) > Thu Apr 14 13:44:48.807 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.2' failed (2) > Thu Apr 14 13:44:48.808 2016: [N] Load both paxos local files bad > > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 14 20:39:02 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 19:39:02 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> Message-ID: <4668D451-7C58-456C-B160-54642C07C155@nuance.com> Yea ? turning of CCR means shutting down the entire cluster. Not an option. CCR is VERY POORLY documented. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Jan-Frode Myklebust > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:33 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. 
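For reference, a bare-bones sketch of the round trip Jan-Frode describes, with placeholder
node names; note that, as Bob points out below, --ccr-disable requires the daemon to be down
across the whole cluster, which is usually the show-stopper:

    # fall back to traditional primary/secondary configuration servers
    mmchcluster --ccr-disable -p nsdserver1 -s nsdserver2

    # clean out the stale /var/mmfs/ccr/ccr.paxos.* files on the affected
    # node here, then switch back
    mmchcluster --ccr-enable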
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 14 21:35:46 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 20:35:46 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: <4668D451-7C58-456C-B160-54642C07C155@nuance.com> References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> <4668D451-7C58-456C-B160-54642C07C155@nuance.com> Message-ID: <035C8381-5C9E-41A5-9DBC-55AEF25B14CC@nuance.com> Following up to my own problem?. It would appear mmsdrrestore doesn?t work (well) with quorum nodes in a CCR enabled cluster. So: change node to non-quorum mmsdrrestore change back to quorum Hey IBM ? how about we document this! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Robert Oesterlin > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:39 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore Yea ? turning of CCR means shutting down the entire cluster. Not an option. CCR is VERY POORLY documented. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Jan-Frode Myklebust > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:33 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chekh at stanford.edu Fri Apr 15 00:30:51 2016 From: chekh at stanford.edu (Alex Chekholko) Date: Thu, 14 Apr 2016 16:30:51 -0700 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <5710282B.6060603@stanford.edu> ++ On 04/12/2016 04:54 AM, Oesterlin, Robert wrote: > For my larger clusters, I dump the cluster waiters on a regular basis > (once a minute: mmlsnode ?N waiters ?L), count the types and dump them > into a database for graphing via Grafana. -- Alex Chekholko chekh at stanford.edu 347-401-4860 From dr.roland.pabel at gmail.com Fri Apr 15 16:50:21 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Fri, 15 Apr 2016 17:50:21 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <5710282B.6060603@stanford.edu> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> Message-ID: <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> Hi, In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So running it every 30 seconds is a bit close. I'll try running it once a minute and then incorporating this into our graphing. Maybe the command is so slow for me because a few nodes are down? Is there a parameter to mmlsnode to configure the timeout? Thanks, Roland > ++ > > On 04/12/2016 04:54 AM, Oesterlin, Robert wrote: > > For my larger clusters, I dump the cluster waiters on a regular basis > > (once a minute: mmlsnode ?N waiters ?L), count the types and dump them > > into a database for graphing via Grafana. -- Dr. 
Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Fri Apr 15 17:02:08 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 15 Apr 2016 16:02:08 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> Message-ID: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> This command is just using ssh to all the nodes and dumping the waiter information and collecting it. That means if the node is down, slow to respond, or there are a large number of nodes, it could take a while to return. In my 400-500 node clusters this command usually take less than 10 seconds. I do prefix the command with a timeout value in case a node is hung up and ssh never returns (which it sometimes does, and that?s not the fault of GPFS) Something like this: timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L This means I get incomplete information, but if you don?t you end up piling up a lot of hung up commands. I would check over your cluster carefully to see if there are other issues that might cause ssh to hang up ? which could impact other GPFS commands that distribute via ssh. Another approach would be to dump the waiters locally on each node, send node specific information to the database, and then sum it up using the graphing software. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 10:50 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi, In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So running it every 30 seconds is a bit close. I'll try running it once a minute and then incorporating this into our graphing. Maybe the command is so slow for me because a few nodes are down? Is there a parameter to mmlsnode to configure the timeout? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tortay at cc.in2p3.fr Fri Apr 15 17:06:41 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Fri, 15 Apr 2016 18:06:41 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Message-ID: <57111191.4050200@cc.in2p3.fr> Hello, I have a testbed cluster where I have setup AFM for an incremental NFS migration between 2 GPFS filesystems in the same cluster. This is with Spectrum Scale 4.1.1-5 on Linux (CentOS 7). The documentation states: "On a GPFS data source, AFM moves all user extended attributes and ACLs, and file sparseness is maintained." (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) If I'm not mistaken, I have a GPFS data source (since I'm doing a migration from GPFS to GPFS). 
While file sparseness is mostly maintained, user extended attributes and ACLs in the source/home filesystem do not appear to be migrated to the target/cache filesystem (same goes for basic tests with ACLs): % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 getfattr: Removing leading '/' from absolute path names # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 user.mfiles:sha2-256 % While on the target filesystem: % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 % Am I missing something ? Is there another meaning to "user extended attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | From oehmes at gmail.com Fri Apr 15 17:12:26 2016 From: oehmes at gmail.com (Sven Oehme) Date: Fri, 15 Apr 2016 12:12:26 -0400 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: If you can wait a few more month we will have stats for this in Zimon. Sven On Apr 15, 2016 12:02 PM, "Oesterlin, Robert" wrote: > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while to > return. In my 400-500 node clusters this command usually take less than 10 > seconds. I do prefix the command with a timeout value in case a node is > hung up and ssh never returns (which it sometimes does, and that?s not the > fault of GPFS) Something like this: > > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up > piling up a lot of hung up commands. I would check over your cluster > carefully to see if there are other issues that might cause ssh to hang up > ? which could impact other GPFS commands that distribute via ssh. > > Another approach would be to dump the waiters locally on each node, send > node specific information to the database, and then sum it up using the > graphing software. > > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: on behalf of Roland > Pabel > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So > running it every 30 seconds is a bit close. I'll try running it once a > minute > and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? > Is there a parameter to mmlsnode to configure the timeout? > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... 
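Until those Zimon statistics land, the per-node variant Bob describes can be approximated
with an untested sketch like the one below, run locally (for example from cron) on each node;
mmdiag --waiters only queries the local daemon, so there is no ssh fan-out to hang:

    # count the local waiter lines and emit a metric line for your collector;
    # filtering by waiter duration would require parsing the per-line seconds
    # field, whose format varies between releases, so it is omitted here
    waiters=$(/usr/lpp/mmfs/bin/mmdiag --waiters | grep -c 'waiting')
    echo "$(hostname -s) gpfs_waiters $waiters"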
From oehmes at gmail.com  Fri Apr 15 17:12:26 2016
From: oehmes at gmail.com (Sven Oehme)
Date: Fri, 15 Apr 2016 12:12:26 -0400
Subject: [gpfsug-discuss] Executing Callbacks on other Nodes

If you can wait a few more months we will have stats for this in Zimon.

Sven

From Robert.Oesterlin at nuance.com  Fri Apr 15 17:48:14 2016
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 15 Apr 2016 16:48:14 +0000
Subject: [gpfsug-discuss] Executing Callbacks on other Nodes

Excellent! I have Zimon fully deployed and this will make my life much easier. :-)

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid

From vpuvvada at in.ibm.com  Sat Apr 16 10:23:32 2016
From: vpuvvada at in.ibm.com (Venkateswara R Puvvada)
Date: Sat, 16 Apr 2016 14:53:32 +0530
Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration"

Hi,

Can you check whether AFM was enabled at the home cluster using the "mmafmconfig enable" command? What fileset mode are you using?

Regards,
Venkat
-------------------------------------------------------------------
Venkateswara R Puvvada/India/IBM at IBMIN
vpuvvada at in.ibm.com
From tortay at cc.in2p3.fr  Sat Apr 16 10:40:12 2016
From: tortay at cc.in2p3.fr (Loic Tortay)
Date: Sat, 16 Apr 2016 11:40:12 +0200
Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration"

On 16/04/2016 11:23, Venkateswara R Puvvada wrote:
> Can you check whether AFM was enabled at the home cluster using the
> "mmafmconfig enable" command? What fileset mode are you using?
>
Hello,
AFM was enabled for the 2 home filesets/NFS exports with "mmafmconfig enable /fs1/zone1" & "mmafmconfig enable /fs1/zone2".

The fileset mode is read-only for both cache filesets.

Loïc.
--
| Loïc Tortay - IN2P3 Computing Centre |

From viccornell at gmail.com  Mon Apr 18 14:41:36 2016
From: viccornell at gmail.com (Vic Cornell)
Date: Mon, 18 Apr 2016 14:41:36 +0100
Subject: [gpfsug-discuss] AFM Question

Hi All,

Is there a bandwidth-efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single-writer AFM relationship?

If it is not immediately obvious why this might be useful, see the following scenario:

Fileset A is a GPFS fileset which is acting as CACHE for a single-writer HOME on fileset B located on a separate filesystem.

The system hosting A fails and all data on fileset A is lost.

Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data.

Admin uses mmafmctl to "failover" the AFM relationship to a new fileset on A; all data are copied from B to A over time and users continue to access the data via B.

So is there a bandwidth-efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with?

Cheers,

Vic
From pinto at scinet.utoronto.ca  Mon Apr 18 14:54:14 2016
From: pinto at scinet.utoronto.ca (Jaime Pinto)
Date: Mon, 18 Apr 2016 09:54:14 -0400
Subject: [gpfsug-discuss] GPFS on ZFS?

Since we cannot get GNR outside ESS/GSS appliances, is anybody using ZFS for software RAID on commodity storage?

Thanks
Jaime

---
Jaime Pinto
SciNet HPC Consortium - Compute/Calcul Canada
www.scinet.utoronto.ca - www.computecanada.org
University of Toronto
256 McCaul Street, Room 235
Toronto, ON, M5T1W5
P: 416-978-2755 C: 416-505-1477

From dr.roland.pabel at gmail.com  Mon Apr 18 16:10:02 2016
From: dr.roland.pabel at gmail.com (Roland Pabel)
Date: Mon, 18 Apr 2016 17:10:02 +0200
Subject: [gpfsug-discuss] Executing Callbacks on other Nodes

Hi Bob,

I'll try the second approach, i.e., collecting "mmfsadm dump waiters" locally and then summing the values up, since it can be done without the overhead of ssh.

You mentioned mmlsnode starts all these ssh commands, and that made me look into the file itself. I then noticed most of the mm commands are actually scripts. This helps a lot with regards to my original question. mmdsh seems to do what I need.

Thanks,
Roland

--
Dr. Roland Pabel
Regionales Rechenzentrum der Universität zu Köln (RRZK)
Weyertal 121, Raum 3.07
D-50931 Köln
Tel.: +49 (221) 470-89589
E-Mail: pabel at uni-koeln.de
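For the ad-hoc case, the mmdsh route Roland mentions boils down to a one-liner along these lines (run from an admin node; mmdsh itself still fans out over ssh, so the same timeout caveat applies, and the grep pattern may need adjusting to the waiter output of your GPFS level):

  /usr/lpp/mmfs/bin/mmdsh -N all '/usr/lpp/mmfs/bin/mmfsadm dump waiters | grep -ci waiting'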
From JRLang at uwyo.edu  Mon Apr 18 17:28:25 2016
From: JRLang at uwyo.edu (Jeffrey R. Lang)
Date: Mon, 18 Apr 2016 16:28:25 +0000
Subject: [gpfsug-discuss] Executing Callbacks on other Nodes

Roland

Here's a tool written by NCAR that provides waiter information on a per-node basis using a lightweight daemon on the monitored node. I have been using it for a while and it has helped me find and figure out long-waiter nodes. It might do what you are looking for.

https://sourceforge.net/projects/gpfsmonitorsuite/

jeff

From shankbal at in.ibm.com  Tue Apr 19 06:47:11 2016
From: shankbal at in.ibm.com (Shankar Balasubramanian)
Date: Tue, 19 Apr 2016 11:17:11 +0530
Subject: [gpfsug-discuss] AFM Question

SW mode does not support failover. IW does, so this will not work.

Best Regards,
Shankar Balasubramanian
AFM & Async DR Development
IBM Systems
Bangalore - Embassy Golf Links
India

From vpuvvada at in.ibm.com  Tue Apr 19 07:01:07 2016
From: vpuvvada at in.ibm.com (Venkateswara R Puvvada)
Date: Tue, 19 Apr 2016 11:31:07 +0530
Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration"

Hi,

AFM usually logs the following message at the gateway node if it cannot open the control file to read ACLs/EAs:

AFM: Cannot find control file for file system fileset in the exported file system at home. ACLs and extended attributes will not be synchronized. Sparse files will have zeros written for holes.

If the above message did not appear in the logs and AFM still failed to bring over the ACLs, this may be a defect. Please open a PMR with supporting traces to debug this issue further. Thanks.

Regards,
Venkat
-------------------------------------------------------------------
Venkateswara R Puvvada/India/IBM at IBMIN
vpuvvada at in.ibm.com
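A quick way to check whether that message has been logged on a gateway node is to grep the standard GPFS log (the log file name below is the usual default; adjust if your logs are rotated differently):

  grep -i "cannot find control file" /var/adm/ras/mmfs.log.latest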
From Luke.Raimbach at crick.ac.uk  Tue Apr 19 11:46:00 2016
From: Luke.Raimbach at crick.ac.uk (Luke Raimbach)
Date: Tue, 19 Apr 2016 10:46:00 +0000
Subject: [gpfsug-discuss] AFM Question

Hi Shankar, Vic,

Would it not be possible, once the original cache site is usable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home?

Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then "promote" the local-update cache to a single-writer; continue writing new data into the original cache.

I am assuming the only reason you'd want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge?

Cheers,
Luke.
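If the rebuilt cache fileset is brought up as Luke suggests, the metadata pre-population itself would be driven with mmafmctl prefetch; the following is only the general shape (filesystem and fileset names are placeholders, and the available prefetch options vary between releases, so check the mmafmctl manual page for your level):

  # Pull only metadata from home into the cache fileset (names are examples).
  mmafmctl fs1 prefetch -j cachefileset --metadata-only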
From viccornell at gmail.com  Tue Apr 19 12:04:31 2016
From: viccornell at gmail.com (Vic Cornell)
Date: Tue, 19 Apr 2016 12:04:31 +0100
Subject: [gpfsug-discuss] AFM Question

Thanks Luke,

The whole business of "promoting" a cache from one type to another isn't documented very well in the places that I am looking. I would be grateful to anyone with more info to share.

I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading.

Also, Async DR means that I have to create snapshots (and worse, delete them) on the "working" side of a replication pair, and this is something I'm not in a tearing hurry to do.

Regards,

Vic

From shankbal at in.ibm.com  Tue Apr 19 12:07:27 2016
From: shankbal at in.ibm.com (Shankar Balasubramanian)
Date: Tue, 19 Apr 2016 16:37:27 +0530
Subject: [gpfsug-discuss] AFM Question

You can disable snapshot creation on the DR side by simply disabling the RPO feature there.

Best Regards,
Shankar Balasubramanian
AFM & Async DR Development
IBM Systems
Bangalore - Embassy Golf Links
India
From viccornell at gmail.com  Tue Apr 19 12:20:08 2016
From: viccornell at gmail.com (Vic Cornell)
Date: Tue, 19 Apr 2016 12:20:08 +0100
Subject: [gpfsug-discuss] AFM Question

Thanks Shankar - that was the bit I was looking for.

Vic
From tortay at cc.in2p3.fr  Tue Apr 19 14:43:53 2016
From: tortay at cc.in2p3.fr (Loic Tortay)
Date: Tue, 19 Apr 2016 15:43:53 +0200
Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration"

On 04/19/2016 08:01 AM, Venkateswara R Puvvada wrote:
> If the above message did not appear in the logs and AFM still failed to
> bring over the ACLs, this may be a defect. Please open a PMR with
> supporting traces to debug this issue further.
>
Hello,
There is no such message on any node in the test cluster.

I have opened a PMR (50962,650,706), the "gpfs.snap" output is on ecurep.ibm.com in "/toibm/linux/gpfs.snap.50962.650.706.tar".

BTW, it would probably be useful if "gpfs.snap" avoided doing a "find /var/mmfs ..." on AFM gateway nodes (or used appropriate find options), since the NFS mountpoints for AFM are in "/var/mmfs/afm" and their content is scanned too. This can be quite time consuming; for instance, our test setup has several million files in the home filesystem.

The "offending" 'find' is the one at line 3014 in the version of gpfs.snap included with Spectrum Scale 4.1.1-5.

Loïc.
--
| Loïc Tortay - IN2P3 Computing Centre |

From SAnderson at convergeone.com  Tue Apr 19 18:56:25 2016
From: SAnderson at convergeone.com (Shaun Anderson)
Date: Tue, 19 Apr 2016 17:56:25 +0000
Subject: [gpfsug-discuss] Hello from Idaho

My name is Shaun Anderson and I work for an IBM Business Partner in Boise, ID, USA. Our main vertical is Health-Care, but we do other work in other sectors as well.
My experience with GPFS has been via the storage product line (SONAS, V7kU) and now with ESS/Spectrum Archive. I stumbled upon SpectrumScale.org today and am glad to have found it while I prepare to implement a cNFS/CTDB (Samba) cluster.

Shaun Anderson
Storage Architect
M 214.263.7014
O 208.577.2112

From bbanister at jumptrading.com  Tue Apr 19 19:00:53 2016
From: bbanister at jumptrading.com (Bryan Banister)
Date: Tue, 19 Apr 2016 18:00:53 +0000
Subject: [gpfsug-discuss] Hello from Idaho

Hello Shaun, welcome to the list.

If you haven't already, see the new Cluster Export Services (CES) facility in the 4.1.1-X and 4.2.X-X releases of Spectrum Scale, which provides cross-protocol support for clustered NFS/SMB/etc. I would highly suggest looking at that as a fully-supported solution over CTDB with Samba.

Cheers,
-Bryan
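For reference, the CES route boils down to something like the following on a 4.1.1+/4.2 cluster (the IP address, export name and path are placeholders, and authentication setup with mmuserauth is omitted), so treat it as a rough sketch rather than a full recipe:

  mmces address add --ces-ip 10.0.0.100
  mmces service enable SMB
  mmces service enable NFS
  mmsmb export add projects /gpfs/fs0/projects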
From vpuvvada at in.ibm.com  Wed Apr 20 12:04:42 2016
From: vpuvvada at in.ibm.com (Venkateswara R Puvvada)
Date: Wed, 20 Apr 2016 16:34:42 +0530
Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration"

Hi,

There is an issue with gpfs.snap scanning AFM internal mounts. This issue is fixed in later releases. To work around the problem:

1. cp /usr/lpp/mmfs/bin/gpfs.snap /usr/lpp/mmfs/bin/gpfs.snap.orig

2. Change this line:

   ccrSnapExcludeListRaw=$($find /var/mmfs \
   \( -name "proxy-server*" -o -name "keystone*" -o -name "openrc*" \) \
   2>/dev/null)

   to this:

   ccrSnapExcludeListRaw=$($find /var/mmfs -xdev \
   \( -name "proxy-server*" -o -name "keystone*" -o -name "openrc*" \) \
   2>/dev/null)

Regards,
Venkat
-------------------------------------------------------------------
Venkateswara R Puvvada/India/IBM at IBMIN
vpuvvada at in.ibm.com
+91-80-41777734
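After making the edit, a quick sanity check before the next gpfs.snap run might look like this (just a suggestion; the grep confirms the -xdev option is in place, and bash -n only checks that the script still parses):

  grep -n 'find /var/mmfs' /usr/lpp/mmfs/bin/gpfs.snap
  bash -n /usr/lpp/mmfs/bin/gpfs.snap && echo "gpfs.snap parses OK"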
From S.J.Thompson at bham.ac.uk  Wed Apr 20 13:15:07 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Wed, 20 Apr 2016 12:15:07 +0000
Subject: [gpfsug-discuss] mmbackup and filenames

Hi,

We use mmbackup with Spectrum Protect (TSM!) to back up our file systems; on one we run CES/SMB and run a sync-and-share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. from OS X clients). mmbackup fails on these filenames; any suggestions on how we can get it to work?

Thanks

Simon

From jonathan at buzzard.me.uk  Wed Apr 20 13:28:18 2016
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Wed, 20 Apr 2016 13:28:18 +0100
Subject: [gpfsug-discuss] mmbackup and filenames

On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote:
> Mmbackup fails on these filenames, any suggestions on how we can get it to work?
>
OMG, it's like seven/eight years since I reported that as a bug in mmbackup and they *STILL* haven't fixed it!!!

I bet it still breaks with back ticks and other wacko characters too. I seem to recall it failed with very long path lengths as well; specifically ones longer than MAX_PATH (google it; MAX_PATH is not something you can rely on).

Back then mmbackup would just fail completely and not back anything up. Is it still the same, or is it just failing on the files with wacko characters? I concluded back then that mmbackup was not suitable for production use.

JAB.

--
Jonathan A. Buzzard  Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.

From oehmes at us.ibm.com  Wed Apr 20 13:38:21 2016
From: oehmes at us.ibm.com (Sven Oehme)
Date: Wed, 20 Apr 2016 12:38:21 +0000
Subject: [gpfsug-discuss] mmbackup and filenames

Which version of GPFS are you running on this cluster?

Sent from IBM Verse

From S.J.Thompson at bham.ac.uk  Wed Apr 20 13:42:16 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Wed, 20 Apr 2016 12:42:16 +0000
Subject: [gpfsug-discuss] mmbackup and filenames

This is a 4.2 cluster with 7.1.3 Protect client.
(Probably 4.2.0.0)

Simon

From makaplan at us.ibm.com  Wed Apr 20 15:42:29 2016
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Wed, 20 Apr 2016 10:42:29 -0400
Subject: [gpfsug-discuss] mmbackup and filenames

The problem is that the Tivoli Storage Manager (ahem, Spectrum Protect) filelist option has some limitations:

http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html

... The files (entries) listed in the filelist must adhere to the following rules:

* Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not.
* Each path must be specified on a single line. A line can contain only one path.
* Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline).
* By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ...

AND SO ON...

IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup!
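In the meantime, a rough way to find the files that will trip over those rules (names containing a newline or another control character) is a find pattern along these lines, run against the fileset or filesystem root (paths are examples; the character-class form relies on your find/fnmatch supporting POSIX classes):

  # Names containing a literal newline (bash $'...' quoting):
  find /gpfs/fs0 -name $'*\n*'
  # Names containing any control character:
  find /gpfs/fs0 -name '*[[:cntrl:]]*'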
From Kevin.Buterbaugh at Vanderbilt.Edu  Wed Apr 20 16:05:16 2016
From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L)
Date: Wed, 20 Apr 2016 15:05:16 +0000
Subject: [gpfsug-discuss] mmbackup and filenames

All,

I would like to see this issue get resolved as it has caused us problems as well. We recently had an issue that necessitated us restoring 9.6 million files (out of 260 million) in a filesystem. We were able to restore a little over 8 million of those files relatively easily, but more than a million have been problematic due to various special characters in the filenames.

I think there needs to be a recognition that TSM is going to be asked to back up filesystems that are used by Windows and Mac clients via NFS, SAMBA/CTDB, CES, etc., and that the users of those clients cannot be expected to not choose filenames that Unix-savvy users would never in a million years choose. And since I had to write some scripts to generate md5sums of files we restored, and therefore had to deal with things in filenames that had me asking "what in the world were they thinking?!?", I fully recognize that this is not an easy nut to crack.

My 2 cents' worth...

Kevin

--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615) 875-9633
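For what it's worth, null-terminated pipelines sidestep most of the filename weirdness when doing that sort of verification; a sketch, assuming GNU find and xargs and a hypothetical restore directory:

  find /gpfs/fs0/restored -type f -print0 | xargs -0 md5sum > /tmp/restored.md5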
From S.J.Thompson at bham.ac.uk  Wed Apr 20 16:15:10 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Wed, 20 Apr 2016 15:15:10 +0000
Subject: [gpfsug-discuss] mmbackup and filenames

Hi Marc,

I appreciate it's a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) are the preferred (?) method of backing it up ...

I agree with Kevin that, given the push for protocol support, and that people will use filenames like this, IBM needs to get it fixed.

Who should we approach at IBM as a user community to get this on the TSM fix list?

Simon

From bbanister at jumptrading.com  Wed Apr 20 16:19:38 2016
From: bbanister at jumptrading.com (Bryan Banister)
Date: Wed, 20 Apr 2016 15:19:38 +0000
Subject: [gpfsug-discuss] mmbackup and filenames

The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it!

-B
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:27:08 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:27:08 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... 
IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 16:28:47 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 15:28:47 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Well what a lame restriction... I don't understand why all IBM products don't have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... 
The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 20 16:35:04 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 20 Apr 2016 11:35:04 -0400 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <201604201535.u3KFZC28024194@d03av04.boulder.ibm.com> >From a computer science point of view, this is a simple matter of programming. Provide yet-another-option on filelist processing that supports encoding or escaping of special characters. Pick your poison! We and many others have worked through this issue and provided solutions in products apart from TSM. In Spectrum Scale Filesystem, we code filelists with escapes \n and \\. Or if you prefer, use the ESCAPE option. See the Advanced Admin Guide, on or near page 24 in the ILM chapter 2. IBM is a very large organization and sometimes, for some issues, customers have the best, most effective means of communicating requirements to particular product groups within IBM. 
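For reference, a minimal sketch of the ESCAPE approach Marc mentions: a deferred mmapplypolicy run whose external list rule carries an ESCAPE clause, so the generated file lists encode special characters. The device name, paths and rule names below are invented, and the exact escaping behaviour should be checked against the ILM chapter before relying on it.

# Sketch only - device name, paths and rule names are assumptions.
cat > /tmp/escape.pol <<'EOF'
RULE 'ext' EXTERNAL LIST 'candidates' EXEC '' ESCAPE '%'
RULE 'all' LIST 'candidates'
EOF
# -I defer with an empty EXEC string just writes the candidate list files
# under the -f prefix instead of invoking an interface script.
mmapplypolicy /gpfs/gpfs0 -P /tmp/escape.pol -I defer -f /tmp/candidates
# Path names containing newlines or other special characters should come out
# %XX-encoded in the resulting lists, keeping them one record per line.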
-------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:41:00 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:41:00 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction? I don?t understand why all IBM products don?t have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. 
* Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Apr 20 16:46:17 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 16:46:17 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <1461167177.1434.89.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 15:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: [SNIP] > Who should we approach at IBM as a user community to get this on the > TSM fix list? > I personally raised this with IBM seven or eight years ago and was told that they where aware of the problem and it would be fixed. Clearly they have not fixed it or they did and then let it break again and thus have never heard of a unit test. The basic problem back then was that mmbackup used various standard Unix text processing utilities and was doomed to break if you put "special" but perfectly valid characters in your file names. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
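Jonathan's point is easy to reproduce without GPFS or TSM involved at all. A throw-away illustration (scratch directory and file name invented) of why anything line-oriented miscounts such names, while NUL-terminated records do not:

mkdir -p /tmp/nl-demo && cd /tmp/nl-demo
touch "$(printf 'report\n2016.txt')"           # one file, name contains an embedded newline
ls | wc -l                                     # reports 2 "lines" for that single file
find . -type f | wc -l                         # any line-oriented pipeline miscounts the same way
find . -type f -print0 | tr -cd '\0' | wc -c   # NUL-terminated records count correctly: 1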
From r.horton at imperial.ac.uk Wed Apr 20 16:58:54 2016 From: r.horton at imperial.ac.uk (Robert Horton) Date: Wed, 20 Apr 2016 16:58:54 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: Message-ID: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: > We use mmbackup with Spectrum Protect (TSM!) to backup our > file-systems, > on one we run CES/SMB and run a sync and share tool as well. This > means we > sometimes end up with filenames containing characters like newline > (e.g. > From OSX clients). Mmbackup fails on these filenames, any suggestions > on > how we can get it to work? I've not had to do do anything with TSM for a couple of years but when I did as a workaround to that I had a wrapper that called mmbackup and then parsed the output and for any files it couldn't handle due to non-ascii characters then called the tsm backup command directly on the whole directory. This does mean some stuff is getting backed up more than necessary but if it's only a handful of files it's a reasonable workaround. Rob -- Robert Horton HPC Systems Support Analyst Imperial College London +44 (0) 20 7594 5759 From scottcumbie at dynamixgroup.com Wed Apr 20 17:23:08 2016 From: scottcumbie at dynamixgroup.com (Scott Cumbie) Date: Wed, 20 Apr 2016 16:23:08 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> Message-ID: <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> You should open a PMR. This is not a ?feature? request, this is a failure of the code to work as it should. Scott Cumbie, Dynamix Group scottcumbie at dynamixgroup.com Office: (336) 765-9290 Cell: (336) 782-1590 On Apr 20, 2016, at 11:58 AM, Robert Horton > wrote: On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work? I've not had to do do anything with TSM for a couple of years but when I did as a workaround to that I had a wrapper that called mmbackup and then parsed the output and for any files it couldn't handle due to non-ascii characters then called the tsm backup command directly on the whole directory. This does mean some stuff is getting backed up more than necessary but if it's only a handful of files it's a reasonable workaround. Rob -- Robert Horton HPC Systems Support Analyst Imperial College London +44 (0) 20 7594 5759 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
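A rough sketch of the kind of wrapper Rob describes follows. The mount point, the decision to key only on newlines, and the dsmc options are all assumptions, so treat it as a starting point rather than a drop-in script:

#!/bin/bash
# Sketch only: after mmbackup has done the bulk of the work, find files whose
# names contain a newline (the case the TSM filelist cannot represent) and run
# an ordinary dsmc incremental on each affected parent directory instead.
FS=/gpfs/gpfs0                                  # assumed mount point
find "$FS" -name "$(printf '*\n*')" -printf '%h\0' | sort -zu |
  while IFS= read -r -d '' dir; do
      # Re-backs up some files unnecessarily, which is the trade-off Rob mentions.
      dsmc incremental "${dir}/" -subdir=no
  done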
URL: From jonathan at buzzard.me.uk Wed Apr 20 19:26:27 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 19:26:27 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> Message-ID: <5717C9D3.8050501@buzzard.me.uk> On 20/04/16 17:23, Scott Cumbie wrote: > You should open a PMR. This is not a ?feature? request, this is a > failure of the code to work as it should. > I did at least seven years ago. I shall see if I can find the reference in my old notebooks tomorrow. Unfortunately one has gone missing so I might not have the reference. I do however wonder if the newlines really are newlines and not some UTF multibyte character that looks like a newline when you parse it as ASCII/ISO-8859-1 or some other legacy encoding? In my experience you have to try really really hard to actually get a newline into a file name. Mostly because the GUI will interpret pressing the return/enter key to think you have finished typing the file name rather than inserting a newline into the file name. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From bbanister at jumptrading.com Wed Apr 20 19:28:54 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 18:28:54 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> I voted for this! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction... I don't understand why all IBM products don't have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! 
-B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 19:42:10 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 18:42:10 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <4F3BBBF1-34BF-4FE6-8FB4-D21430C4BFCE@vanderbilt.edu> Me too! And I have to say (and those of you in the U.S. will understand this best) that it was kind of nice to really *want* to cast a vote instead of saying, ?I sure wish ?none of the above? was an option?? ;-) Kevin On Apr 20, 2016, at 1:28 PM, Bryan Banister > wrote: I voted for this! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction? I don?t understand why all IBM products don?t have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... 
and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Wed Apr 20 19:56:42 2016 From: viccornell at gmail.com (viccornell at gmail.com) Date: Wed, 20 Apr 2016 19:56:42 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <584AAC36-28C1-4138-893E-DFC00760C8B0@gmail.com> Me too. Sent from my iPhone > On 20 Apr 2016, at 19:28, Bryan Banister wrote: > > I voted for this! > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:41 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > OK, I might have managed to create a public RFE for this: > > https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] > Sent: 20 April 2016 16:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Well what a lame restriction? 
I don?t understand why all IBM products don?t have public RFE options, > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:27 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] > Sent: 20 April 2016 16:19 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:15 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Hi Mark, > > I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... > > I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. > > Who should we approach at IBM as a user community to get this on the TSM fix list? > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] > Sent: 20 April 2016 15:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: > > http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html > > ... > The files (entries) listed in the filelist must adhere to the following rules: > Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. > Each path must be specified on a single line. A line can contain only one path. > Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). > By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... > IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 20:02:08 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 19:02:08 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 20 20:05:26 2016 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 20 Apr 2016 19:05:26 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: It?s there for sending data to support, primarily. But we do make use of it for report generation. -- Jonathan Fosburgh Principal Application Systems Analyst Storage Team IT Operations jfosburg at mdanderson.org (713) 745-9346 From: > on behalf of Bryan Banister > Reply-To: gpfsug main discussion list > Date: Wednesday, April 20, 2016 at 2:02 PM To: "gpfsug main discussion list (gpfsug-discuss at spectrumscale.org)" > Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dan.Foster at bristol.ac.uk Wed Apr 20 21:23:15 2016 From: Dan.Foster at bristol.ac.uk (Dan Foster) Date: Wed, 20 Apr 2016 21:23:15 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... 
game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: On 20 April 2016 at 20:02, Bryan Banister wrote: > Apparently, though not documented in man pages or any of the GPFS docs that > I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output columns > with your favorite bash/awk/python/magic. This is really useful, thanks for sharing! :) -- Dan Foster | Senior Storage Systems Administrator Advanced Computing Research Centre, University of Bristol From bevans at pixitmedia.com Wed Apr 20 21:38:42 2016 From: bevans at pixitmedia.com (Barry Evans) Date: Wed, 20 Apr 2016 21:38:42 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <5717E8D2.2080107@pixitmedia.com> If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS docs > that I?ve read (at least that I recall), there is a ?-Y? option to > many/most GPFS commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... 
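One way to soften the caveat Barry raises is to resolve column positions from the HEADER record at run time rather than hard-coding them. An illustrative sketch against mmlsfs (other commands expose different field names, so the keys below are only examples):

mmlsfs all -Y | awk -F: '
    $3 == "HEADER" { for (i = 1; i <= NF; i++) col[$i] = i; next }
    { print $(col["deviceName"]), $(col["fieldName"]) "=" $(col["data"]) }'
# If a later release reorders or appends columns, the header lookup still finds
# the right fields; only a renamed or removed field needs attention.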
URL: From duersch at us.ibm.com Wed Apr 20 21:43:11 2016 From: duersch at us.ibm.com (Steve Duersch) Date: Wed, 20 Apr 2016 16:43:11 -0400 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: References: Message-ID: We try our hardest to keep those columns static. Rarely are they changed. We are aware that folks are programming against them and we don't rearrange where things are. Steve Duersch Spectrum Scale (GPFS) FVTest IBM Poughkeepsie, New York >If you build a monitoring pipeline using -Y output, make sure you test >between revisions before upgrading. The columns do have a tendency to >change from time to time. > >Cheers, >Barry >On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS docs > that I?ve read (at least that I recall), there is a ?-Y? option to > many/most GPFS commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 21:46:04 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 20:46:04 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717E8D2.2080107@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Wed Apr 20 22:12:10 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Wed, 20 Apr 2016 22:12:10 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <5717F0AA.8050901@pixitmedia.com> Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so that you can > still programmatically determine fields of interest? this is the best! > > I recommend adding ?-Y? option documentation to all supporting GPFS > commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > If you build a monitoring pipeline using -Y output, make sure you test > between revisions before upgrading. 
The columns do have a tendency to > change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS > docs that I?ve read (at least that I recall), there is a ?-Y? > option to many/most GPFS commands that provides output in machine > readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, confidential or > privileged information. If you are not the intended recipient, you > are hereby notified that any review, dissemination or copying of > this email is strictly prohibited, and to please notify the sender > immediately and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or error-free. The > Company, therefore, does not make any guarantees as to the > completeness or accuracy of this email or any attachments. This > email is for informational purposes only and does not constitute a > recommendation, offer, request or solicitation of any kind to buy, > sell, subscribe, redeem or perform any type of transaction of a > financial product. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. 
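For anyone who wants to act on the above, here is one way such a parser can be written so that it keys everything off the HEADER record rather than hard-coded column positions, which is what makes the revision-to-revision drift Barry mentions survivable. This is only an illustrative Python sketch, not an IBM-supplied tool: it assumes the mm command is on the PATH (typically /usr/lpp/mmfs/bin), that each record type is preceded by a matching HEADER record, and that values may be percent-encoded, so it unquotes them defensively.

import subprocess
from urllib.parse import unquote

def mm_y(cmd_and_args):
    """Run a GPFS command with -Y and return its records as dicts keyed by the
    HEADER field names, so callers never rely on column positions."""
    out = subprocess.check_output(list(cmd_and_args) + ['-Y'], universal_newlines=True)
    headers = {}   # (command, record type) -> field names from the HEADER line
    records = []
    for line in out.splitlines():
        fields = line.rstrip().split(':')
        if len(fields) < 3:
            continue
        key = (fields[0], fields[1])
        if fields[2] == 'HEADER':
            headers[key] = fields
        elif key in headers:
            # Pair each data field with its HEADER name; drop the bookkeeping
            # columns and unquote in case a value was percent-encoded.
            records.append({name: unquote(value)
                            for name, value in zip(headers[key], fields)
                            if name not in ('', 'HEADER', 'reserved')})
    return records

Hypothetical usage, printing every mmlsfs attribute of every filesystem:

for rec in mm_y(['mmlsfs', 'all']):
    print(rec.get('deviceName'), rec.get('fieldName'), rec.get('data'))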
If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Wed Apr 20 22:18:28 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Wed, 20 Apr 2016 22:18:28 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717F0AA.8050901@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> Message-ID: <5717F224.2010100@pixitmedia.com> So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since ... er > .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands supported > -Y, I might even FedEX beer. > > Jez > > > On 20/04/16 21:46, Bryan Banister wrote: >> >> What?s nice is that the ?-Y? output provides a HEADER so that you can >> still programmatically determine fields of interest? this is the best! >> >> I recommend adding ?-Y? option documentation to all supporting GPFS >> commands for others to be informed. >> >> -Bryan >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >> *Barry Evans >> *Sent:* Wednesday, April 20, 2016 3:39 PM >> *To:* gpfsug-discuss at spectrumscale.org >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> If you build a monitoring pipeline using -Y output, make sure you >> test between revisions before upgrading. The columns do have a >> tendency to change from time to time. >> >> Cheers, >> Barry >> >> On 20/04/2016 20:02, Bryan Banister wrote: >> >> Apparently, though not documented in man pages or any of the GPFS >> docs that I?ve read (at least that I recall), there is a ?-Y? >> option to many/most GPFS commands that provides output in machine >> readable fashion?. >> >> That?s right kids? no more parsing obscure, often changed output >> columns with your favorite bash/awk/python/magic. >> >> Why IBM would not document this is beyond me, >> >> -B >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, confidential or >> privileged information. 
If you are not the intended recipient, >> you are hereby notified that any review, dissemination or copying >> of this email is strictly prohibited, and to please notify the >> sender immediately and destroy this email and any attachments. >> Email transmission cannot be guaranteed to be secure or >> error-free. The Company, therefore, does not make any guarantees >> as to the completeness or accuracy of this email or any >> attachments. This email is for informational purposes only and >> does not constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, redeem or >> perform any type of transaction of a financial product. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> This email is confidential in that it is intended for the exclusive >> attention of the addressee(s) indicated. If you are not the intended >> recipient, this email should not be read or disclosed to any other >> person. Please notify the sender immediately and delete this email >> from your computer system. Any opinions expressed are not necessarily >> those of the company from which this email was sent and, whilst to >> the best of our knowledge no viruses or defects exist, no >> responsibility can be accepted for any loss or damage arising from >> its receipt or subsequent use of this email. >> >> >> ------------------------------------------------------------------------ >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, confidential or >> privileged information. If you are not the intended recipient, you >> are hereby notified that any review, dissemination or copying of this >> email is strictly prohibited, and to please notify the sender >> immediately and destroy this email and any attachments. Email >> transmission cannot be guaranteed to be secure or error-free. The >> Company, therefore, does not make any guarantees as to the >> completeness or accuracy of this email or any attachments. This email >> is for informational purposes only and does not constitute a >> recommendation, offer, request or solicitation of any kind to buy, >> sell, subscribe, redeem or perform any type of transaction of a >> financial product. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 22:24:01 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 21:24:01 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717F0AA.8050901@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> Message-ID: <3360F57F-BC94-4116-82F6-9E1CDFC2919F@vanderbilt.edu> All, Does the unit of measure for *all* fields default to the same as if you ran the command without "-Y"? For example: mmlsquota:user:HEADER:version:reserved:reserved:filesystemName:quotaType:id:name:blockUsage:blockQuota:blockLimit:blockInDoubt:blockGrace:filesUsage:filesQuota:filesLimit:filesInDoubt:filesGrace:remarks:fid:filesetname: blockUsage, blockLimit, and blockInDoubt are in KB, which makes sense, since that's the default. But what about blockGrace if a user is over quota? Will it also contain output in varying units of measure ("6 days" or "2 hours" or "expired") just like without the "-Y"? I think this points to Bryan being right "-Y" should be documented somewhere / somehow. Thanks... Kevin On Apr 20, 2016, at 4:12 PM, Jez Tucker > wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments.
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at pixitmedia.com Wed Apr 20 22:58:27 2016 From: bevans at pixitmedia.com (Barry Evans) Date: Wed, 20 Apr 2016 22:58:27 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... 
game changer In-Reply-To: <5717F224.2010100@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> Message-ID: <5717FB83.6020805@pixitmedia.com> Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did the > original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: >> Indeed. >> >> jtucker at elmo:~$ mmlsfs all -Y >> mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: >> >> I must say I've not seen any headers increment above 0:1 since ... er >> .. 3.3(?), so they're pretty static. >> >> Now, if only mmlspool supported -Y ... or if _all_ commands supported >> -Y, I might even FedEX beer. >> >> Jez >> >> >> On 20/04/16 21:46, Bryan Banister wrote: >>> >>> What?s nice is that the ?-Y? output provides a HEADER so that you >>> can still programmatically determine fields of interest? this is the >>> best! >>> >>> I recommend adding ?-Y? option documentation to all supporting GPFS >>> commands for others to be informed. >>> >>> -Bryan >>> >>> *From:*gpfsug-discuss-bounces at spectrumscale.org >>> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >>> *Barry Evans >>> *Sent:* Wednesday, April 20, 2016 3:39 PM >>> *To:* gpfsug-discuss at spectrumscale.org >>> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >>> didn't... game changer >>> >>> If you build a monitoring pipeline using -Y output, make sure you >>> test between revisions before upgrading. The columns do have a >>> tendency to change from time to time. >>> >>> Cheers, >>> Barry >>> >>> On 20/04/2016 20:02, Bryan Banister wrote: >>> >>> Apparently, though not documented in man pages or any of the >>> GPFS docs that I?ve read (at least that I recall), there is a >>> ?-Y? option to many/most GPFS commands that provides output in >>> machine readable fashion?. >>> >>> That?s right kids? no more parsing obscure, often changed output >>> columns with your favorite bash/awk/python/magic. >>> >>> Why IBM would not document this is beyond me, >>> >>> -B >>> >>> ------------------------------------------------------------------------ >>> >>> >>> Note: This email is for the confidential use of the named >>> addressee(s) only and may contain proprietary, confidential or >>> privileged information. If you are not the intended recipient, >>> you are hereby notified that any review, dissemination or >>> copying of this email is strictly prohibited, and to please >>> notify the sender immediately and destroy this email and any >>> attachments. Email transmission cannot be guaranteed to be >>> secure or error-free. The Company, therefore, does not make any >>> guarantees as to the completeness or accuracy of this email or >>> any attachments. This email is for informational purposes only >>> and does not constitute a recommendation, offer, request or >>> solicitation of any kind to buy, sell, subscribe, redeem or >>> perform any type of transaction of a financial product. 
>>> >>> >>> >>> _______________________________________________ >>> >>> gpfsug-discuss mailing list >>> >>> gpfsug-discuss at spectrumscale.org >>> >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> This email is confidential in that it is intended for the exclusive >>> attention of the addressee(s) indicated. If you are not the intended >>> recipient, this email should not be read or disclosed to any other >>> person. Please notify the sender immediately and delete this email >>> from your computer system. Any opinions expressed are not >>> necessarily those of the company from which this email was sent and, >>> whilst to the best of our knowledge no viruses or defects exist, no >>> responsibility can be accepted for any loss or damage arising from >>> its receipt or subsequent use of this email. >>> >>> >>> ------------------------------------------------------------------------ >>> >>> Note: This email is for the confidential use of the named >>> addressee(s) only and may contain proprietary, confidential or >>> privileged information. If you are not the intended recipient, you >>> are hereby notified that any review, dissemination or copying of >>> this email is strictly prohibited, and to please notify the sender >>> immediately and destroy this email and any attachments. Email >>> transmission cannot be guaranteed to be secure or error-free. The >>> Company, therefore, does not make any guarantees as to the >>> completeness or accuracy of this email or any attachments. This >>> email is for informational purposes only and does not constitute a >>> recommendation, offer, request or solicitation of any kind to buy, >>> sell, subscribe, redeem or perform any type of transaction of a >>> financial product. >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com > > -- > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. 
Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 23:02:50 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 22:02:50 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717FB83.6020805@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A3684@CHI-EXCHANGEW1.w2k.jumptrading.com> That's a separate topic from having GPFS CLI commands output machine readable format, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 4:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. 
Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. 
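Coming back to Kevin's units question above: absent official documentation, the safest thing for a parser is probably not to guess. A small defensive sketch (Python again, purely illustrative, using the field names from the mmlsquota HEADER he pasted) can convert the fields that are documented as KB while leaving the grace fields as plain text until someone confirms their exact format in -Y mode:

def normalise_quota_record(rec):
    """Take one mmlsquota -Y record (a dict of HEADER name -> value) and
    normalise only the fields whose units we are sure of."""
    out = dict(rec)
    for name in ('blockUsage', 'blockQuota', 'blockLimit', 'blockInDoubt'):
        if rec.get(name, '').isdigit():
            out[name] = int(rec[name]) * 1024   # reported in KB; keep bytes internally
    for name in ('blockGrace', 'filesGrace'):
        value = rec.get(name, '').strip().lower()
        # Unconfirmed assumption: grace may come back as 'none', 'expired' or a
        # period such as '6 days', exactly as in the human-readable output, so
        # treat it as text rather than trying to parse a number out of it.
        out[name] = None if value in ('', 'none') else value
    return out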
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Sanchez at deshaw.com Wed Apr 20 23:06:18 2016 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 20 Apr 2016 22:06:18 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717FB83.6020805@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> Message-ID: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn't have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either -Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. 
Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 23:08:39 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 22:08:39 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Sounds like a candidate for the GPFS UG Git Hub!! 
https://github.com/gpfsug/gpfsug-tools -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Sanchez, Paul Sent: Wednesday, April 20, 2016 5:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn't have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either -Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. 
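For what it's worth, a python API of the kind Barry and Paul describe does not have to be big to be useful. Below is a rough sketch of the shape such a thing might take if it ended up on the UG GitHub: a thin class that shells out to the mm commands and hides the subprocess plumbing and -Y shredding behind a couple of methods. The command names are real; the class and method names are invented for the example, and this is not anything IBM ships.

import subprocess

class SpectrumScale(object):
    """Illustrative wrapper around the mm command line, not an official API."""

    @staticmethod
    def _y(args):
        # Same HEADER-driven shredding as the parser sketched earlier in the
        # thread, compressed: return each data record as a dict of name -> value.
        out = subprocess.check_output(list(args) + ['-Y'], universal_newlines=True)
        headers, records = {}, []
        for line in out.splitlines():
            f = line.rstrip().split(':')
            if len(f) < 3:
                continue
            if f[2] == 'HEADER':
                headers[(f[0], f[1])] = f
            elif (f[0], f[1]) in headers:
                records.append(dict(zip(headers[(f[0], f[1])], f)))
        return records

    def filesystems(self):
        # Device names known to the cluster, via 'mmlsfs all -Y'.
        return sorted({r['deviceName'] for r in self._y(['mmlsfs', 'all'])
                       if 'deviceName' in r})

    def quota(self, device):
        # The calling user's quota records for one filesystem, via 'mmlsquota -Y'.
        return self._y(['mmlsquota', device])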
Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Thu Apr 21 01:05:39 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Thu, 21 Apr 2016 01:05:39 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <57181953.9090506@pixitmedia.com> I'd suggest you attend the UK UG in May then ... ref Agenda: http://www.gpfsug.org/may-2016-uk-user-group/ On 20/04/16 23:08, Bryan Banister wrote: > > Sounds like a candidate for the GPFS UG Git Hub!! > > https://github.com/gpfsug/gpfsug-tools > > -B > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of > *Sanchez, Paul > *Sent:* Wednesday, April 20, 2016 5:06 PM > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > +1 to a real python API. > > We have written our own, albeit incomplete, library to expose most of > what we need. We would be happy to share some general ideas on what > should be included, but a real IBM implementation wouldn?t have to do > what we did. (Think lots of subprocess.Popen + subprocess.communicate > and shredding the output of mm commands. And yes, we wrote a parser > which could shred the output of either ?Y or tabular format.) > > Thx > > Paul > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 5:58 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... 
game changer > > Someone should just make a python API that just abstracts all of this > > On 20/04/2016 22:18, Jez Tucker wrote: > > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did > the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: > > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since > ... er .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands > supported -Y, I might even FedEX beer. > > Jez > > On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so > that you can still programmatically determine fields of > interest? this is the best! > > I recommend adding ?-Y? option documentation to all > supporting GPFS commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On > Behalf Of *Barry Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? > I sure didn't... game changer > > If you build a monitoring pipeline using -Y output, make > sure you test between revisions before upgrading. The > columns do have a tendency to change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any > of the GPFS docs that I?ve read (at least that I > recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable > fashion?. > > That?s right kids? no more parsing obscure, often > changed output columns with your favorite > bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the > named addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not > the intended recipient, you are hereby notified that > any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender > immediately and destroy this email and any > attachments. Email transmission cannot be guaranteed > to be secure or error-free. The Company, therefore, > does not make any guarantees as to the completeness or > accuracy of this email or any attachments. This email > is for informational purposes only and does not > constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, > redeem or perform any type of transaction of a > financial product. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you > are not the intended recipient, this email should not be > read or disclosed to any other person. 
Please notify the > sender immediately and delete this email from your > computer system. Any opinions expressed are not > necessarily those of the company from which this email was > sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for > any loss or damage arising from its receipt or subsequent > use of this email. > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not the > intended recipient, you are hereby notified that any > review, dissemination or copying of this email is strictly > prohibited, and to please notify the sender immediately > and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or > error-free. The Company, therefore, does not make any > guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational > purposes only and does not constitute a recommendation, > offer, request or solicitation of any kind to buy, sell, > subscribe, redeem or perform any type of transaction of a > financial product. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you are not > the intended recipient, this email should not be read or disclosed > to any other person. Please notify the sender immediately and > delete this email from your computer system. Any opinions > expressed are not necessarily those of the company from which this > email was sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for any loss > or damage arising from its receipt or subsequent use of this email. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Barry Evans > Technical Director & Co-Founder > Pixit Media > > http://www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. 
If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jez.tucker at gpfsug.org Thu Apr 21 01:10:07 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Thu, 21 Apr 2016 01:10:07 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <57181A5F.4070909@gpfsug.org> Btw. If anyone wants to add anything to the UG github, just send a pull request. Jez On 20/04/16 23:08, Bryan Banister wrote: > > Sounds like a candidate for the GPFS UG Git Hub!! > > https://github.com/gpfsug/gpfsug-tools > > -B > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of > *Sanchez, Paul > *Sent:* Wednesday, April 20, 2016 5:06 PM > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > +1 to a real python API. > > We have written our own, albeit incomplete, library to expose most of > what we need. We would be happy to share some general ideas on what > should be included, but a real IBM implementation wouldn?t have to do > what we did. (Think lots of subprocess.Popen + subprocess.communicate > and shredding the output of mm commands. And yes, we wrote a parser > which could shred the output of either ?Y or tabular format.) 
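
A minimal sketch of that HEADER-driven approach (purely illustrative, not the library Paul describes), assuming Python and the colon-delimited -Y format quoted earlier in the thread:

    import subprocess

    def run_mm_y(cmd_and_args):
        # Run a GPFS command with -Y appended and return its records as dicts
        # keyed by the field names advertised in the HEADER line(s), so the
        # parser keys off names rather than fixed column positions.
        out = subprocess.check_output(list(cmd_and_args) + ['-Y'],
                                      universal_newlines=True)
        headers = {}   # record type -> field names from its HEADER line
        records = []
        for line in out.splitlines():
            cols = line.split(':')
            if len(cols) < 3:
                continue
            rectype = cols[0]
            if cols[2] == 'HEADER':
                headers[rectype] = cols
            elif rectype in headers:
                # zip() silently drops trailing columns if a release adds
                # fields; verify the mapping against your own mmfs level
                records.append(dict(zip(headers[rectype], cols)))
        return records

    # hypothetical usage:
    #   for rec in run_mm_y(['/usr/lpp/mmfs/bin/mmlsfs', 'all']):
    #       print(rec.get('deviceName'), rec.get('fieldName'), rec.get('data'))

As Barry warns above, the columns themselves can still move between revisions, so any such parser is only as stable as the HEADER contract and is worth re-testing on each upgrade.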
> > Thx > > Paul > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 5:58 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > Someone should just make a python API that just abstracts all of this > > On 20/04/2016 22:18, Jez Tucker wrote: > > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did > the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: > > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since > ... er .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands > supported -Y, I might even FedEX beer. > > Jez > > On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so > that you can still programmatically determine fields of > interest? this is the best! > > I recommend adding ?-Y? option documentation to all > supporting GPFS commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On > Behalf Of *Barry Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? > I sure didn't... game changer > > If you build a monitoring pipeline using -Y output, make > sure you test between revisions before upgrading. The > columns do have a tendency to change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any > of the GPFS docs that I?ve read (at least that I > recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable > fashion?. > > That?s right kids? no more parsing obscure, often > changed output columns with your favorite > bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the > named addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not > the intended recipient, you are hereby notified that > any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender > immediately and destroy this email and any > attachments. Email transmission cannot be guaranteed > to be secure or error-free. The Company, therefore, > does not make any guarantees as to the completeness or > accuracy of this email or any attachments. This email > is for informational purposes only and does not > constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, > redeem or perform any type of transaction of a > financial product. 
> > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you > are not the intended recipient, this email should not be > read or disclosed to any other person. Please notify the > sender immediately and delete this email from your > computer system. Any opinions expressed are not > necessarily those of the company from which this email was > sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for > any loss or damage arising from its receipt or subsequent > use of this email. > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not the > intended recipient, you are hereby notified that any > review, dissemination or copying of this email is strictly > prohibited, and to please notify the sender immediately > and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or > error-free. The Company, therefore, does not make any > guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational > purposes only and does not constitute a recommendation, > offer, request or solicitation of any kind to buy, sell, > subscribe, redeem or perform any type of transaction of a > financial product. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you are not > the intended recipient, this email should not be read or disclosed > to any other person. Please notify the sender immediately and > delete this email from your computer system. Any opinions > expressed are not necessarily those of the company from which this > email was sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for any loss > or damage arising from its receipt or subsequent use of this email. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Barry Evans > Technical Director & Co-Founder > Pixit Media > > http://www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. 
Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stijn.deweirdt at ugent.be Thu Apr 21 07:49:03 2016 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 21 Apr 2016 08:49:03 +0200 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <57181A5F.4070909@gpfsug.org> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> <57181A5F.4070909@gpfsug.org> Message-ID: <571877DF.6070600@ugent.be> we have a parser, but not an actual API, in case someone is interested. https://github.com/hpcugent/vsc-filesystems/blob/master/lib/vsc/filesystem/gpfs.py anyway, from my experience, the best man page for the mm* commands is reading the bash scripts themself, they often contain other useful but undocumented options ;) stijn On 04/21/2016 02:10 AM, Jez Tucker wrote: > Btw. If anyone wants to add anything to the UG github, just send a pull > request. > > Jez > > On 20/04/16 23:08, Bryan Banister wrote: >> >> Sounds like a candidate for the GPFS UG Git Hub!! >> >> https://github.com/gpfsug/gpfsug-tools >> >> -B >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >> *Sanchez, Paul >> *Sent:* Wednesday, April 20, 2016 5:06 PM >> *To:* gpfsug main discussion list >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> +1 to a real python API. >> >> We have written our own, albeit incomplete, library to expose most of >> what we need. We would be happy to share some general ideas on what >> should be included, but a real IBM implementation wouldn?t have to do >> what we did. (Think lots of subprocess.Popen + subprocess.communicate >> and shredding the output of mm commands. 
And yes, we wrote a parser >> which could shred the output of either ?Y or tabular format.) >> >> Thx >> >> Paul >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry >> Evans >> *Sent:* Wednesday, April 20, 2016 5:58 PM >> *To:* gpfsug-discuss at spectrumscale.org >> >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> Someone should just make a python API that just abstracts all of this >> >> On 20/04/2016 22:18, Jez Tucker wrote: >> >> So mmlspool does in 4.1.1.3... perhaps my memory fails me. >> I'm pretty certain Yuri told me that mmlspool was completely >> unsupported and didn't have -Y a couple of years ago when we did >> the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. >> >> Perhaps in light of the mmbackup thread; "Will fix RFEs for >> cookies?". Name your price ;-) >> >> Jez >> >> On 20/04/16 22:12, Jez Tucker wrote: >> >> Indeed. >> >> jtucker at elmo:~$ mmlsfs all -Y >> >> mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: >> >> >> I must say I've not seen any headers increment above 0:1 since >> ... er .. 3.3(?), so they're pretty static. >> >> Now, if only mmlspool supported -Y ... or if _all_ commands >> supported -Y, I might even FedEX beer. >> >> Jez >> >> On 20/04/16 21:46, Bryan Banister wrote: >> >> What?s nice is that the ?-Y? output provides a HEADER so >> that you can still programmatically determine fields of >> interest? this is the best! >> >> I recommend adding ?-Y? option documentation to all >> supporting GPFS commands for others to be informed. >> >> -Bryan >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On >> Behalf Of *Barry Evans >> *Sent:* Wednesday, April 20, 2016 3:39 PM >> *To:* gpfsug-discuss at spectrumscale.org >> >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? >> I sure didn't... game changer >> >> If you build a monitoring pipeline using -Y output, make >> sure you test between revisions before upgrading. The >> columns do have a tendency to change from time to time. >> >> Cheers, >> Barry >> >> On 20/04/2016 20:02, Bryan Banister wrote: >> >> Apparently, though not documented in man pages or any >> of the GPFS docs that I?ve read (at least that I >> recall), there is a ?-Y? option to many/most GPFS >> commands that provides output in machine readable >> fashion?. >> >> That?s right kids? no more parsing obscure, often >> changed output columns with your favorite >> bash/awk/python/magic. >> >> Why IBM would not document this is beyond me, >> >> -B >> >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the >> named addressee(s) only and may contain proprietary, >> confidential or privileged information. If you are not >> the intended recipient, you are hereby notified that >> any review, dissemination or copying of this email is >> strictly prohibited, and to please notify the sender >> immediately and destroy this email and any >> attachments. Email transmission cannot be guaranteed >> to be secure or error-free. The Company, therefore, >> does not make any guarantees as to the completeness or >> accuracy of this email or any attachments. 
This email >> is for informational purposes only and does not >> constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, >> redeem or perform any type of transaction of a >> financial product. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> This email is confidential in that it is intended for the >> exclusive attention of the addressee(s) indicated. If you >> are not the intended recipient, this email should not be >> read or disclosed to any other person. Please notify the >> sender immediately and delete this email from your >> computer system. Any opinions expressed are not >> necessarily those of the company from which this email was >> sent and, whilst to the best of our knowledge no viruses >> or defects exist, no responsibility can be accepted for >> any loss or damage arising from its receipt or subsequent >> use of this email. >> >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, >> confidential or privileged information. If you are not the >> intended recipient, you are hereby notified that any >> review, dissemination or copying of this email is strictly >> prohibited, and to please notify the sender immediately >> and destroy this email and any attachments. Email >> transmission cannot be guaranteed to be secure or >> error-free. The Company, therefore, does not make any >> guarantees as to the completeness or accuracy of this >> email or any attachments. This email is for informational >> purposes only and does not constitute a recommendation, >> offer, request or solicitation of any kind to buy, sell, >> subscribe, redeem or perform any type of transaction of a >> financial product. >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com >> >> This email is confidential in that it is intended for the >> exclusive attention of the addressee(s) indicated. If you are not >> the intended recipient, this email should not be read or disclosed >> to any other person. Please notify the sender immediately and >> delete this email from your computer system. Any opinions >> expressed are not necessarily those of the company from which this >> email was sent and, whilst to the best of our knowledge no viruses >> or defects exist, no responsibility can be accepted for any loss >> or damage arising from its receipt or subsequent use of this email. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> -- >> >> Barry Evans >> Technical Director & Co-Founder >> Pixit Media >> >> http://www.pixitmedia.com >> >> This email is confidential in that it is intended for the exclusive >> attention of the addressee(s) indicated. If you are not the intended >> recipient, this email should not be read or disclosed to any other >> person. 
Please notify the sender immediately and delete this email >> from your computer system. Any opinions expressed are not necessarily >> those of the company from which this email was sent and, whilst to the >> best of our knowledge no viruses or defects exist, no responsibility >> can be accepted for any loss or damage arising from its receipt or >> subsequent use of this email. >> >> >> ------------------------------------------------------------------------ >> >> Note: This email is for the confidential use of the named addressee(s) >> only and may contain proprietary, confidential or privileged >> information. If you are not the intended recipient, you are hereby >> notified that any review, dissemination or copying of this email is >> strictly prohibited, and to please notify the sender immediately and >> destroy this email and any attachments. Email transmission cannot be >> guaranteed to be secure or error-free. The Company, therefore, does >> not make any guarantees as to the completeness or accuracy of this >> email or any attachments. This email is for informational purposes >> only and does not constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, redeem or perform >> any type of transaction of a financial product. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From mweil at genome.wustl.edu Thu Apr 21 16:31:03 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Thu, 21 Apr 2016 10:31:03 -0500 Subject: [gpfsug-discuss] PMR 78846,122,000 Message-ID: <5718F237.4040705@genome.wustl.edu> Apr 21 07:41:53 linuscs88 mmfs: Shutting down abnormally due to error in /project/sprelfks1/build/rfks1s007a/src/avs/fs/mmfs/ts/tm/tree.C line 1025 retCode 12, reasonCode 56 any ideas? ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From jonathan at buzzard.me.uk Thu Apr 21 16:51:01 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 21 Apr 2016 16:51:01 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <5717C9D3.8050501@buzzard.me.uk> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> <5717C9D3.8050501@buzzard.me.uk> Message-ID: <1461253861.1434.110.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 19:26 +0100, Jonathan Buzzard wrote: > On 20/04/16 17:23, Scott Cumbie wrote: > > You should open a PMR. This is not a ?feature? request, this is a > > failure of the code to work as it should. > > > > I did at least seven years ago. I shall see if I can find the reference > in my old notebooks tomorrow. 
Unfortunately one has gone missing so I > might not have the reference. > PMR 30456 is what I have written in my notebook, with a date of 11th June 2009, all under a title of "mmbackup is busted". Though I guess IBM might claim that not backing up the file is a fix because back then mmbackup would crash out completely and not backup anything at all. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From russell.steffen1 at navy.mil Thu Apr 21 22:25:30 2016 From: russell.steffen1 at navy.mil (Steffen, Russell CIV FNMOC, N63) Date: Thu, 21 Apr 2016 21:25:30 +0000 Subject: [gpfsug-discuss] [Non-DoD Source] Re: Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com>, <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> Message-ID: <366F49EE121F9F488D7EA78AA37C01620DF75583@NAWEMUGUXM01V.nadsuswe.nads.navy.mil> Last year I wrote a python package to plot the I/O volume our clusters were generating. In order to do that I ended up reverse-engineering the mmsdrfs file format so that I could determine which NSDs were in which filesystems and served by which NSD servers - basic cluster topology. Everything I was able to figure out is in this python module: https://bitbucket.org/rrs42/iographer/src/6d410073fc39b448a4742da7bb1a9ecf258d611c/iographer/GPFS.py?at=master&fileviewer=file-view-default And if anyone is interested in the package the repository is hosted here: https://bitbucket.org/rrs42/iographer -- Russell Steffen HPC Systems Analyst/Systems Administrator, N63 Fleet Numerical Meteorology and Oceanograph Center russell.steffen1 at navy.mil, Phone 831-656-4218 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sanchez, Paul [Paul.Sanchez at deshaw.com] Sent: Wednesday, April 20, 2016 3:06 PM To: gpfsug main discussion list Subject: [Non-DoD Source] Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn?t have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either ?Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". 
Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What?s nice is that the ?-Y? output provides a HEADER so that you can still programmatically determine fields of interest? this is the best! I recommend adding ?-Y? option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. From chair at spectrumscale.org Fri Apr 22 08:38:55 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Fri, 22 Apr 2016 08:38:55 +0100 Subject: [gpfsug-discuss] ISC June Meeting Message-ID: Hi All, IBM are hoping to put together a short agenda for a meeting at ISC in June this year. They have asked if there are any US based people likely to be attending who would be interested in giving a talk at the ISC, Germany meeting. If you are US based and planning to attend, please let me know and I'll put you in touch with the right people. Its likely to be on the Monday at the start of ISC, further details when its all sorted! Thanks Simon From Kevin.Buterbaugh at Vanderbilt.Edu Fri Apr 22 16:43:00 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 22 Apr 2016 15:43:00 +0000 Subject: [gpfsug-discuss] make InstallImages errors Message-ID: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root at testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1 make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' /usr/lpp/mmfs/src root at testnsd3# However, they don?t seem to actually impact anything ? i.e. GPFS starts up just fine on the box and the upgrade is apparently successful: /root root at testnsd3# mmgetstate Node number Node name GPFS state ------------------------------------------ 3 testnsd3 active /root root at testnsd3# mmdiag --version === mmdiag: version === Current GPFS build: "4.2.0.2 ". Built on Mar 7 2016 at 10:28:55 Running 5 minutes 5 secs /root root at testnsd3# So just to satisfy my own curiosity, has anyone else seen this and can anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Apr 22 20:52:35 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 22 Apr 2016 19:52:35 +0000 Subject: [gpfsug-discuss] make InstallImages errors In-Reply-To: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> References: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Message-ID: Did you do a kernel upgrade as well? I've seen similar when you get dangling symlinks in the weak updates kernel module directory. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 22 April 2016 16:43 To: gpfsug main discussion list Subject: [gpfsug-discuss] make InstallImages errors Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root at testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1 make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' /usr/lpp/mmfs/src root at testnsd3# However, they don?t seem to actually impact anything ? i.e. GPFS starts up just fine on the box and the upgrade is apparently successful: /root root at testnsd3# mmgetstate Node number Node name GPFS state ------------------------------------------ 3 testnsd3 active /root root at testnsd3# mmdiag --version === mmdiag: version === Current GPFS build: "4.2.0.2 ". Built on Mar 7 2016 at 10:28:55 Running 5 minutes 5 secs /root root at testnsd3# So just to satisfy my own curiosity, has anyone else seen this and can anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? Kevin ? 
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From ewahl at osc.edu Fri Apr 22 21:12:20 2016 From: ewahl at osc.edu (Edward Wahl) Date: Fri, 22 Apr 2016 16:12:20 -0400 Subject: [gpfsug-discuss] make InstallImages errors In-Reply-To: References: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Message-ID: <20160422161220.135f209a@osc.edu> On Fri, 22 Apr 2016 19:52:35 +0000 "Simon Thompson (Research Computing - IT Services)" wrote: > > Did you do a kernel upgrade as well? > > I've seen similar when you get dangling symlinks in the weak updates kernel > module directory. > Simon I've had exactly the same experience here. From 4.x going back to early 3.4 with this error. Ed > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org > [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Buterbaugh, Kevin L > [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 22 April 2016 16:43 To: gpfsug main > discussion list Subject: [gpfsug-discuss] make InstallImages errors > > Hi All, > > We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) > to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the > following errors: > > /usr/lpp/mmfs/src > root at testnsd3# make InstallImages > (cd gpl-linux; /usr/bin/make InstallImages; \ > exit $?) || exit 1 > make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' > Pre-kbuild step 1... > depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory > depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory > depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory > make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' > /usr/lpp/mmfs/src > root at testnsd3# > > However, they don?t seem to actually impact anything ? i.e. GPFS starts up > just fine on the box and the upgrade is apparently successful: > > /root > root at testnsd3# mmgetstate > > Node number Node name GPFS state > ------------------------------------------ > 3 testnsd3 active > /root > root at testnsd3# mmdiag --version > > === mmdiag: version === > Current GPFS build: "4.2.0.2 ". > Built on Mar 7 2016 at 10:28:55 > Running 5 minutes 5 secs > /root > root at testnsd3# > > So just to satisfy my own curiosity, has anyone else seen this and can > anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? > > Kevin > > ? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and Education > Kevin.Buterbaugh at vanderbilt.edu - > (615)875-9633 > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Ed Wahl Ohio Supercomputer Center 614-292-9302 From jan.finnerman at load.se Mon Apr 25 21:27:13 2016 From: jan.finnerman at load.se (Jan Finnerman Load) Date: Mon, 25 Apr 2016 20:27:13 +0000 Subject: [gpfsug-discuss] Dell Multipath Message-ID: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Hi, I realize this might not be strictly GPFS related but I?m getting a little desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and struggle on a question of disk multipathing for the intended NSD disks with their direct attached SAS disk systems. If I do a multipath ?ll, after a few seconds I just get the prompt back. 
I expected to see the usual big amount of path info, but nothing there. If I do a multipathd ?k and then a show config, I see all the Dell disk luns with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. devices. I can also add them in PowerKVM:s Kimchi web interface and even deploy a GPFS installation on it. The big question is, though, how do I get multipathing to work ? Do I need any special driver or setting in the multipath.conf file ? I found some of that but more generic e.g. for RedHat 6, but now we are in PowerKVM country. The platform consists of: 4x IBM S812L servers SAS controller PowerKVM 3.1 Red Hat 7.1 2x Dell MD3460 SAS disk systems No switches Jan ///Jan [cid:E11C3C62-0896-4FE2-9DCF-FFA5CF812B75] Jan Finnerman Senior Technical consultant [CertTiv_sm] [cid:621A25E3-E641-4D21-B2C3-0C93AB8B73B6] Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png Type: image/png Size: 5565 bytes Desc: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png Type: image/png Size: 8584 bytes Desc: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1][5].png Type: image/png Size: 6664 bytes Desc: CertPowerSystems_sm[1][5].png URL: From jenocram at gmail.com Mon Apr 25 21:37:18 2016 From: jenocram at gmail.com (Jeno Cram) Date: Mon, 25 Apr 2016 16:37:18 -0400 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: Is multipathd running? Also make sure you don't have them blacklisted in your multipath.conf. On Apr 25, 2016 4:27 PM, "Jan Finnerman Load" wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a little > desperate here? > I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and > struggle on a question of disk multipathing for the intended NSD disks with > their direct attached SAS disk systems. > If I do a *multipath ?ll*, after a few seconds I just get the prompt > back. I expected to see the usual big amount of path info, but nothing > there. > > If I do a *multipathd ?k* and then a show config, I see all the Dell disk > luns with reasonably right parameters. I can see them as /dev/sdf, > /dev/sdg, etc. devices. > I can also add them in PowerKVM:s Kimchi web interface and even deploy a > GPFS installation on it. The big question is, though, how do I get > multipathing to work ? > Do I need any special driver or setting in the multipath.conf file ? > I found some of that but more generic e.g. for RedHat 6, but now we are in > PowerKVM country. 
> > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 *SAS* disk systems > No switches > > Jan > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > [image: CertTiv_sm] > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png Type: image/png Size: 5565 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1][5].png Type: image/png Size: 6664 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png Type: image/png Size: 8584 bytes Desc: not available URL: From ewahl at osc.edu Mon Apr 25 21:48:07 2016 From: ewahl at osc.edu (Edward Wahl) Date: Mon, 25 Apr 2016 16:48:07 -0400 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: <20160425164807.52f40d7a@osc.edu> Sounds like too wide of a blacklist. Have you specifically added the MD devices to the blacklist_exceptions? What does the overall blacklist and blacklist_exceptions look like? A quick 'lsscsi' should give you the vendor/product to stick into the blacklist_exception. Wildcards work with quotes there, as well if you have multiple similar but not exact enclosures. eg: "IBM 1818 FAStT" can become: device { vendor "IBM" product "1818*" } or Dell MD*, etc. If you have issues with things working in the interactive mode or debug mode (which usually turns out to be a timing problem) run a "multipath -v3" and check the output. It will normally tell you exactly why each disk device is being skipped. Things like "device node name blacklisted" or whitelisted. Ed Wahl OSC On Mon, 25 Apr 2016 20:27:13 +0000 Jan Finnerman Load wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a little > desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a > customer and struggle on a question of disk multipathing for the intended NSD > disks with their direct attached SAS disk systems. If I do a multipath ?ll, > after a few seconds I just get the prompt back. I expected to see the usual > big amount of path info, but nothing there. > > If I do a multipathd ?k and then a show config, I see all the Dell disk luns > with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. > devices. I can also add them in PowerKVM:s Kimchi web interface and even > deploy a GPFS installation on it. The big question is, though, how do I get > multipathing to work ? Do I need any special driver or setting in the > multipath.conf file ? I found some of that but more generic e.g. for RedHat > 6, but now we are in PowerKVM country. 
> > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 SAS disk systems > No switches > > Jan > ///Jan > > [cid:E11C3C62-0896-4FE2-9DCF-FFA5CF812B75] > Jan Finnerman > Senior Technical consultant > > [CertTiv_sm] > > [cid:621A25E3-E641-4D21-B2C3-0C93AB8B73B6] > Kista Science Tower > 164 51 Kista > Mobil: +46 (0)70 631 66 26 > Kontor: +46 (0)8 633 66 00/26 > jan.finnerman at load.se -- Ed Wahl Ohio Supercomputer Center 614-292-9302 From mweil at genome.wustl.edu Mon Apr 25 21:50:02 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Mon, 25 Apr 2016 15:50:02 -0500 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: <571E82FA.2000008@genome.wustl.edu> enable mpathconf --enable --with_multipathd y show config multipathd show config On 4/25/16 3:27 PM, Jan Finnerman Load wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a > little desperate here? > I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer > and struggle on a question of disk multipathing for the intended NSD > disks with their direct attached SAS disk systems. > If I do a /*multipath ?ll*/, after a few seconds I just get the > prompt back. I expected to see the usual big amount of path info, but > nothing there. > > If I do a /*multipathd ?k*/ and then a show config, I see all the Dell > disk luns with reasonably right parameters. I can see them as > /dev/sdf, /dev/sdg, etc. devices. > I can also add them in PowerKVM:s Kimchi web interface and even deploy > a GPFS installation on it. The big question is, though, how do I get > multipathing to work ? > Do I need any special driver or setting in the multipath.conf file ? > I found some of that but more generic e.g. for RedHat 6, but now we > are in PowerKVM country. > > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 *SAS* disk systems > No switches > > Jan > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > CertTiv_sm > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 8584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/png Size: 5565 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 6664 bytes Desc: not available URL: From stefan.dietrich at desy.de Tue Apr 26 22:01:52 2016 From: stefan.dietrich at desy.de (Dietrich, Stefan) Date: Tue, 26 Apr 2016 23:01:52 +0200 (CEST) Subject: [gpfsug-discuss] CES behind DNS RR and 16 group limitation? Message-ID: <183207187.6100390.1461704512921.JavaMail.zimbra@desy.de> Hello, we will soon start to deploy CES in our clusters, however two questions popped up. - According to the "CES NFS Support" in the "Implementing Cluster Export Services" documentation, DNS round-robin might lead to corrupted data with NFSv3: If a DNS Round Robin (RR) entry name is used to mount an NFSv3 export, data corruption and data unavailability might occur. The lock manager on the GPFS file system is not clustered-system-aware. The documentation does not state anything about NFSv4, so this restriction does not apply? Has somebody already experience with NFS and SMB mounts/exports behind a DNS RR entry? - For NFSv3 there is the known 16 supplementary group limitation. The CES option MANAGE_GIDS lifts this limitation and group lookup is performed on the protocl node itself. However, the NFS version is not mentioned in the docs. Would this work for NFSv4 with secType=sys as well or is this limited to NFSv3? With NFSv4 and secType=krb the 16 group limit does not apply, but I can think of some use-cases where the ticket handling might be problematic. Regards, Stefan -- ------------------------------------------------------------------------ Stefan Dietrich Deutsches Elektronen-Synchrotron (IT-Systems) Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 phone: +49-40-8998-4696 22607 Hamburg e-mail: stefan.dietrich at desy.de Germany ------------------------------------------------------------------------ From S.J.Thompson at bham.ac.uk Tue Apr 26 22:09:18 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 26 Apr 2016 21:09:18 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon From jonathan at buzzard.me.uk Tue Apr 26 22:27:24 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 26 Apr 2016 22:27:24 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <571FDD3C.3080801@buzzard.me.uk> On 26/04/16 22:09, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We've had some reports from some of our users that out CES SMB > exports are slow to access. > > It appears that this is only when the client is a Linux system and > using SMB to access the file-system. In fact if we dual boot the same > box, we can get sensible speeds out of it (I.e. Not network problems > to the client system). > > They also report that access to real Windows based file-servers works > at sensible speeds. 
Maybe the Win file servers support SMB1, but has > anyone else seen this, or have any suggestions? > In the past I have seen huge difference between opening up a terminal and doing a mount -t cifs ... and mapping the drive in Gnome. The later is a fraction of the performance of the first. I suspect that KDE is similar but I have not used KDE in anger now for 17 years. I would say we need to know what version of Linux you are having issues with and what method of attaching to the server you are using. In general best performance comes from a proper mount. If you have not tried that yet do so first. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at gmail.com Tue Apr 26 23:48:23 2016 From: oehmes at gmail.com (Sven Oehme) Date: Tue, 26 Apr 2016 15:48:23 -0700 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We've had some reports from some of our users that out CES SMB exports are > slow to access. > > It appears that this is only when the client is a Linux system and using > SMB to access the file-system. In fact if we dual boot the same box, we can > get sensible speeds out of it (I.e. Not network problems to the client > system). > > They also report that access to real Windows based file-servers works at > sensible speeds. Maybe the Win file servers support SMB1, but has anyone > else seen this, or have any suggestions? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Wed Apr 27 01:21:09 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Wed, 27 Apr 2016 03:21:09 +0300 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Hi Please run this command: # mmsmb export list export path guest ok smb encrypt cifs /gpfs1/cifs no disabled mixed /gpfs1/mixed no disabled cifs-text /gpfs/gpfs2/cifs-text/ no auto nfs-text /gpfs/gpfs2/nfs-text/ no auto Try to disable "smb encrypt" value, and try again. Example: #mmsmb export change --option "smb encrypt=disabled" cifs-text Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Sven Oehme To: gpfsug main discussion list Date: 04/27/2016 01:48 AM Subject: Re: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) wrote: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. 
Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From A.K.Ghumra at bham.ac.uk Wed Apr 27 09:11:35 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Wed, 27 Apr 2016 08:11:35 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: From secretary at gpfsug.org Wed Apr 27 10:46:18 2016 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Wed, 27 Apr 2016 10:46:18 +0100 Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events Message-ID: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 [1] Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. 
Tentative Agenda: * 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 * Enhancements for CORAL from IBM * Panel discussion with customers, topic TBD * AFM and integration with Spectrum Protect * Best practices for GPFS or Spectrum Scale Tuning. * At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ---- 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ---- We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal Links: ------ [1] https://www.spxxl.org/?q=New-York-City-2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.K.Ghumra at bham.ac.uk Wed Apr 27 12:35:55 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Wed, 27 Apr 2016 11:35:55 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Apologies, I meant Mbps not Gbps Regards, Aslam Research Computing Team DDI: +44 (121) 414 5877 | Skype: JanitorX | Twitter: @aslamghumra | a.k.ghumra at bham.ac.uk | intranet.birmingham.ac.uk/bear -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of gpfsug-discuss-request at spectrumscale.org Sent: 27 April 2016 12:00 To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 51, Issue 48 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. SMB access speed (Aslam Ghumra (IT Services, Facilities Management)) 2. US GPFS/Spectrum Scale Events (Secretary GPFS UG) ---------------------------------------------------------------------- Message: 1 Date: Wed, 27 Apr 2016 08:11:35 +0000 From: "Aslam Ghumra (IT Services, Facilities Management)" To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] SMB access speed Message-ID: Content-Type: text/plain; charset="iso-8859-1" As Simon has reported, the speed of access on Linux system are slow. 
We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Wed, 27 Apr 2016 10:46:18 +0100 From: Secretary GPFS UG To: gpfsug main discussion list Cc: "usa-principal-gpfsug.org" , usa-co-principal at gpfsug.org, Chair , Gorini Stefano Claudio Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events Message-ID: <21b651c4a310b67c139fccff707dce97 at webmail.gpfsug.org> Content-Type: text/plain; charset="us-ascii" Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 [1] Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: * 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 * Enhancements for CORAL from IBM * Panel discussion with customers, topic TBD * AFM and integration with Spectrum Protect * Best practices for GPFS or Spectrum Scale Tuning. * At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ---- 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 
11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ---- We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal Links: ------ [1] https://www.spxxl.org/?q=New-York-City-2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 51, Issue 48 ********************************************** From jonathan at buzzard.me.uk Wed Apr 27 12:40:37 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 12:40:37 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <1461757237.1434.178.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-27 at 08:11 +0000, Aslam Ghumra (IT Services, Facilities Management) wrote: > As Simon has reported, the speed of access on Linux system are slow. > > > We've just used the mount command as below > > > mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o > noperm //<> /media/mnt1 > Try dialing back on the SMB version would be my first port of call. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 27 14:10:32 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 27 Apr 2016 13:10:32 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Hi All, Question - why are you SAMBA mounting to Linux clients instead of CNFS mounting? We don?t use CES (yet) here, but our ?rules? are: 1) if you?re a Linux client, you CNFS mount. 2) if you?re a Windows client, you SAMBA mount. 3) if you?re a Mac client, you can do either. (C)NFS seems to be must more stable and less problematic than SAMBA, in our experience. Just trying to understand? Kevin On Apr 27, 2016, at 3:11 AM, Aslam Ghumra (IT Services, Facilities Management) > wrote: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. 
Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 27 14:16:57 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 27 Apr 2016 13:16:57 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: We don't manage the Linux systems, wr have no control over identity or authentication on them, but we do for SMB access. Simon -----Original Message----- From: Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: Wednesday, April 27, 2016 02:11 PM GMT Standard Time To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB access speed Hi All, Question - why are you SAMBA mounting to Linux clients instead of CNFS mounting? We don?t use CES (yet) here, but our ?rules? are: 1) if you?re a Linux client, you CNFS mount. 2) if you?re a Windows client, you SAMBA mount. 3) if you?re a Mac client, you can do either. (C)NFS seems to be must more stable and less problematic than SAMBA, in our experience. Just trying to understand? Kevin On Apr 27, 2016, at 3:11 AM, Aslam Ghumra (IT Services, Facilities Management) > wrote: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan at buzzard.me.uk Wed Apr 27 19:57:33 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 19:57:33 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: <57210B9D.8080906@buzzard.me.uk> On 27/04/16 14:10, Buterbaugh, Kevin L wrote: > Hi All, > > Question - why are you SAMBA mounting to Linux clients instead of CNFS > mounting? We don?t use CES (yet) here, but our ?rules? are: > > 1) if you?re a Linux client, you CNFS mount. > 2) if you?re a Windows client, you SAMBA mount. > 3) if you?re a Mac client, you can do either. > > (C)NFS seems to be must more stable and less problematic than SAMBA, in > our experience. Just trying to understand? > My rule that trumps all those is that a given share is available via SMB *OR* NFS, but never both. Therein lies the path to great pain in the future. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From bpappas at dstonline.com Wed Apr 27 20:38:06 2016 From: bpappas at dstonline.com (Bill Pappas) Date: Wed, 27 Apr 2016 19:38:06 +0000 Subject: [gpfsug-discuss] GPFS discussions Message-ID: Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Apr 27 20:47:55 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 20:47:55 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: <5721176B.5020809@buzzard.me.uk> On 27/04/16 14:16, Simon Thompson (Research Computing - IT Services) wrote: > We don't manage the Linux systems, wr have no control over identity or > authentication on them, but we do for SMB access. > Does not the combination of Ganesha and NFSv4 with Kerberos fix that? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From S.J.Thompson at bham.ac.uk Wed Apr 27 20:52:46 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 27 Apr 2016 19:52:46 +0000 Subject: [gpfsug-discuss] GPFS discussions In-Reply-To: References: Message-ID: Hi Bill, As a user community, we organise events in the UK and USA, we post them on the mailing list and the group website - www.spectrumscale.org. There are a few types of events, meet the devs, which are typically a small group of customers, an integrator or two, and a few developers. We also do @conference events, for example at Super Computing (USA), Computing Insights UK, ibm are also trying to get a meeting running at ISC as well. We then have the larger annual events, for example in the UK we have a meeting in May. These are typically larger meetings with IBM speakers, customer talks and partner talks. Finally there are events organsied/advertised with other groups, for example SPXXL, where in the UK last year we ran with SPXXL's meeting. This is also happening in NYC in a few weeks. In the UK we have a much smaller geographic problem than the USA, we've also been going a lot longer - the USA side chapter only launched September last year, and Kristy and Bob are building the activity over there. I think if there was interest in a an informal (e.g.) 
state meeting that people wanted to coordinate with Kristy/Bob, then we could advertise to the list. Of course all of those involved in organising from the user side of things have real jobs as well and getting big meetings up and running takes quite a lot of work (agendas, speakers, venues, lunches, registration...) Simon (uk group chair) ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bill Pappas [bpappas at dstonline.com] Sent: 27 April 2016 20:38 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] GPFS discussions Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com From Greg.Lehmann at csiro.au Thu Apr 28 00:27:03 2016 From: Greg.Lehmann at csiro.au (Greg.Lehmann at csiro.au) Date: Wed, 27 Apr 2016 23:27:03 +0000 Subject: [gpfsug-discuss] GPFS discussions In-Reply-To: References: Message-ID: Hi Bill, In Australia, I've been lobbying IBM to do something locally, after the great UG meeting at SC15 in Austin. It is looking like they might tack something onto the annual tech symposium they have here - no time frame yet but August has been when it happened for the last couple of years. At that event we should be able to gauge interest on whether we can form a local UG. The advantage of the timing is that a lot of experts will be in the country for the Tech Symposium. They are also talking about another local HPC focused event in the same time frame. My guess is it may well be all bundled together. Here's hoping it comes off. It might give some of you an excuse to come to Australia! Seriously, I am jealous of the events I see happening in the UK. Cheers, Greg Lehmann Senior High Performance Data Specialist Data Services | Scientific Computing Platforms CSIRO Information Management and Technology Phone: +61 7 3327 4137 | Fax: +61 1 3327 4455 Greg.Lehmann at csiro.au | www.csiro.au Address: 1 Technology Court, Pullenvale, QLD 4069 PLEASE NOTE The information contained in this email may be confidential or privileged. Any unauthorised use or disclosure is prohibited. If you have received this email in error, please delete it immediately and notify the sender by return email. Thank you. To the extent permitted by law, CSIRO does not represent, warrant and/or guarantee that the integrity of this communication has been maintained or that the communication is free of errors, virus, interception or interference. Please consider the environment before printing this email. -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Thursday, 28 April 2016 5:53 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS discussions Hi Bill, As a user community, we organise events in the UK and USA, we post them on the mailing list and the group website - www.spectrumscale.org. There are a few types of events, meet the devs, which are typically a small group of customers, an integrator or two, and a few developers. We also do @conference events, for example at Super Computing (USA), Computing Insights UK, ibm are also trying to get a meeting running at ISC as well. We then have the larger annual events, for example in the UK we have a meeting in May. 
These are typically larger meetings with IBM speakers, customer talks and partner talks. Finally there are events organsied/advertised with other groups, for example SPXXL, where in the UK last year we ran with SPXXL's meeting. This is also happening in NYC in a few weeks. In the UK we have a much smaller geographic problem than the USA, we've also been going a lot longer - the USA side chapter only launched September last year, and Kristy and Bob are building the activity over there. I think if there was interest in a an informal (e.g.) state meeting that people wanted to coordinate with Kristy/Bob, then we could advertise to the list. Of course all of those involved in organising from the user side of things have real jobs as well and getting big meetings up and running takes quite a lot of work (agendas, speakers, venues, lunches, registration...) Simon (uk group chair) ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bill Pappas [bpappas at dstonline.com] Sent: 27 April 2016 20:38 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] GPFS discussions Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From usa-principal at gpfsug.org Thu Apr 28 15:19:51 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Thu, 28 Apr 2016 10:19:51 -0400 Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events In-Reply-To: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Message-ID: Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. -Kristy > On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG wrote: > > Dear All, > > Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. > > Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 > > This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 > > If you wish to register, please do so via the Eventbrite page. > > Kind regards, > > -- > Claire O'Toole > Spectrum Scale/GPFS User Group Secretary > +44 (0)7508 033896 > www.spectrumscaleug.org > > > --- > > Hello all, > > We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. > > 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. > > > Tentative Agenda: > ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 > ? Enhancements for CORAL from IBM > ? Panel discussion with customers, topic TBD > ? AFM and integration with Spectrum Protect > ? Best practices for GPFS or Spectrum Scale Tuning. > ? 
At least one site update > > Location: > New York Academy of Medicine > 1216 Fifth Avenue > New York, NY 10029 > > ?? > > 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! > > Location: Argonne National Lab more details and final agenda will come later. > > Tentative Agenda: > > > 9:00a-12:30p > 9-9:30a - Opening Remarks > 9:30-10a Deep Dive - Update on ESS > 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) > 11-11:30 Break > 11:30a-Noon - Deep Dive - Protect & Scale integration > Noon-12:30p HDFS/Hadoop > > 12:30 - 1:30p Lunch > > 1:30p-5:00p > 1:30 - 2:00p IBM AFM Update > 2:00-2:30p ANL: AFM as a burst buffer > 2:30-3:00p ANL: GHI (GPFS HPSS Integration) > 3:00-3:30p Break > 3:30p - 4:00p LANL: ? or other site preso > 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences > 4:30p -5:00p Closing comments and Open Forum for Questions > > 5:00 - ? > Beer hunting? > > > ?? > > > We hope you can attend one or both of these events. > > Best, > Kristy Kallback-Rose & Bob Oesterlin > GPFS Users Group - USA Chapter - Principal & Co-principal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mark.Roberts at awe.co.uk Thu Apr 28 15:40:18 2016 From: Mark.Roberts at awe.co.uk (Mark.Roberts at awe.co.uk) Date: Thu, 28 Apr 2016 14:40:18 +0000 Subject: [gpfsug-discuss] EXTERNAL: Re: US GPFS/Spectrum Scale Events In-Reply-To: References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Message-ID: <201604281438.u3SEckmo029951@msw1.awe.co.uk> Kirsty, Thank you for the heads up. I?m guessing that those people who have already registered for XXL prior to this option should proceed to the Eventbrite page and register the GPFS day ? Regards Mark Roberts AWE From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of GPFS UG USA Principal Sent: 28 April 2016 15:20 To: Secretary GPFS UG Cc: usa-co-principal at gpfsug.org; Chair ; gpfsug main discussion list ; Gorini Stefano Claudio Subject: EXTERNAL: Re: [gpfsug-discuss] US GPFS/Spectrum Scale Events Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. -Kristy On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG > wrote: Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. 
More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Thu Apr 28 15:47:18 2016 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 28 Apr 2016 14:47:18 +0000 Subject: [gpfsug-discuss] EXTERNAL: Re: US GPFS/Spectrum Scale Events In-Reply-To: <201604281438.u3SEckmo029951@msw1.awe.co.uk> References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> <201604281438.u3SEckmo029951@msw1.awe.co.uk> Message-ID: Stefano, Can you take this one? Thanks, Kristy On Apr 28, 2016, at 10:40 AM, Mark.Roberts at awe.co.uk wrote: Kirsty, Thank you for the heads up. I?m guessing that those people who have already registered for XXL prior to this option should proceed to the Eventbrite page and register the GPFS day ? Regards Mark Roberts AWE From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of GPFS UG USA Principal Sent: 28 April 2016 15:20 To: Secretary GPFS UG > Cc: usa-co-principal at gpfsug.org; Chair >; gpfsug main discussion list >; Gorini Stefano Claudio > Subject: EXTERNAL: Re: [gpfsug-discuss] US GPFS/Spectrum Scale Events Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. 
-Kristy On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG > wrote: Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Apr 28 22:04:58 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 28 Apr 2016 21:04:58 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> References: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Message-ID: Ok, we are going to try this out and see if this makes a difference. The Windows server which is "faster" from Linux is running Server 2008R2, so I guess isn't doing encrypted SMB. Will report back next week once we've run some tests. Simon -----Original Message----- From: Yaron Daniel [YARD at il.ibm.com] Sent: Wednesday, April 27, 2016 01:21 AM GMT Standard Time To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB access speed Hi Please run this command: # mmsmb export list export path guest ok smb encrypt cifs /gpfs1/cifs no disabled mixed /gpfs1/mixed no disabled cifs-text /gpfs/gpfs2/cifs-text/ no auto nfs-text /gpfs/gpfs2/nfs-text/ no auto Try to disable "smb encrypt" value, and try again. Example: #mmsmb export change --option "smb encrypt=disabled" cifs-text Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:_1_0D90DCD00D90D73C0001EFFAC2257FA2] Server, Storage and Data Services- Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Sven Oehme To: gpfsug main discussion list Date: 04/27/2016 01:48 AM Subject: Re: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) > wrote: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00001.gif Type: image/gif Size: 1851 bytes Desc: ATT00001.gif URL: From usa-principal at gpfsug.org Thu Apr 28 22:44:32 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Thu, 28 Apr 2016 17:44:32 -0400 Subject: [gpfsug-discuss] GPFS/Spectrum Scale Upcoming US Events - Save the Dates In-Reply-To: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> References: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> Message-ID: <9489DBA2-1F12-4B05-A968-5D4855FBEA40@gpfsug.org> All, the registration page for the second event listed below at Argonne National Lab on June 10th is now up. An updated agenda is also at this site. 
Please register here: https://www.regonline.com/Spectrumscalemeeting We look forward to seeing some of you at these upcoming events. Feel free to send suggestions for future events in your area. Cheers, -Kristy > On Apr 4, 2016, at 4:52 PM, GPFS UG USA Principal wrote: > > Hello all, > > We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. > > 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. > > Tentative Agenda: > ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 > ? Enhancements for CORAL from IBM > ? Panel discussion with customers, topic TBD > ? AFM and integration with Spectrum Protect > ? Best practices for GPFS or Spectrum Scale Tuning. > ? At least one site update > > Location: > New York Academy of Medicine > 1216 Fifth Avenue > New York, NY 10029 > > ?? > > 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! > > Location: Argonne National Lab more details and final agenda will come later. > > Tentative Agenda: > > 9:00a-12:30p > 9-9:30a - Opening Remarks > 9:30-10a Deep Dive - Update on ESS > 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) > 11-11:30 Break > 11:30a-Noon - Deep Dive - Protect & Scale integration > Noon-12:30p HDFS/Hadoop > > 12:30 - 1:30p Lunch > > 1:30p-5:00p > 1:30 - 2:00p IBM AFM Update > 2:00-2:30p ANL: AFM as a burst buffer > 2:30-3:00p ANL: GHI (GPFS HPSS Integration) > 3:00-3:30p Break > 3:30p - 4:00p LANL: ? or other site preso > 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences > 4:30p -5:00p Closing comments and Open Forum for Questions > > 5:00 - ? > Beer hunting? > > ?? > > We hope you can attend one or both of these events. > > Best, > Kristy Kallback-Rose & Bob Oesterlin > GPFS Users Group - USA Chapter - Principal & Co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Thu Apr 28 23:57:42 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 28 Apr 2016 23:57:42 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Message-ID: <57229566.7060009@buzzard.me.uk> On 28/04/16 22:04, Simon Thompson (Research Computing - IT Services) wrote: > Ok, we are going to try this out and see if this makes a difference. The > Windows server which is "faster" from Linux is running Server 2008R2, so > I guess isn't doing encrypted SMB. > A quick poke in the Linux source code suggests that the CIFS encryption is handled with standard kernel crypto routines, but and here is the big but, whether you get any hardware acceleration is going to depend heavily on the CPU in the machine. Don't have the right CPU and you won't get it being done in hardware and the performance would I expect take a dive. 
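For anyone who wants to check this on a particular client, a quick sketch (it assumes a reasonably recent distro kernel with the cifs module loaded and the share already mounted):

  grep -w -o -m1 aes /proc/cpuinfo    # prints "aes" if the CPU advertises the AES instructions
  cat /proc/fs/cifs/DebugData         # shows the SMB dialect and security features the kernel CIFS client negotiated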
I imagine it is like scp; making sure all your ducks are lined up and both server and client are doing hardware accelerated encryption is more complicated that it appears at first sight. Lots of lower end CPU's seem to be missing hardware accelerated encryption. Anyway boot into Windows 7 and you get don't get encryption, connect to 2008R2 and you don't get encryption and it all looks better. A quick Google suggests encryption didn't hit till Windows 8 and Server 2012. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From zgiles at gmail.com Fri Apr 29 05:22:03 2016 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 29 Apr 2016 00:22:03 -0400 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? Message-ID: Fellow GPFS Users, I have a silly question about file replicas... I've been playing around with copies=2 (or 3) and hoping that this would protect against data corruption on poor-quality RAID controllers.. but it seems that if I purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't take over, rather GPFS just returns corrupt data. This includes if just "dd" into the disk, or if I break the RAID controller somehow by yanking whole chassis and the controller responds poorly for a few seconds. Originally my thinking was that replicas were for mirroring and GPFS would somehow return whichever is the "good" copy of your data, but now I'm thinking it's just intended for better file placement.. such as having a near replica and a far replica so you dont have to cross buildings for access, etc. That, and / or, disk outages where the outage is not corruption, just simply outage either by failure or for disk-moves, SAN rewiring, etc. In those cases you wouldn't have to "move" all the data since you already have a second copy. I can see how that would makes sense.. Somehow I guess I always knew this.. but it seems many people say they will just turn on copies=2 and be "safe".. but it's not the case.. Which way is the intended? Has anyone else had experience with this realization? Thanks, -Zach -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Fri Apr 29 10:22:10 2016 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Fri, 29 Apr 2016 11:22:10 +0200 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? In-Reply-To: References: Message-ID: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> Zach, GPFS replication does not include automatically a comparison of the replica copies. It protects against one part (i.e. one FG, or two with 3-fold replication) of the storage being down. How should GPFS know what version is the good one if both replica copies are readable? There are tools in 4.x to compare the replicas, but do use them only from 4.2 onward (problems with prior versions). Still then you need to decide what is the "good" copy (there is a consistency check on MD replicas though, but correct/incorrect data blocks cannot be auto-detected for obvious reasons). E2E Check-summing (as in GNR) would of course help here. Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 
7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Frank Hammer, Thorsten Moehring Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: Zachary Giles To: gpfsug main discussion list Date: 04/29/2016 06:22 AM Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? Sent by: gpfsug-discuss-bounces at spectrumscale.org Fellow GPFS Users, I have a silly question about file replicas... I've been playing around with copies=2 (or 3) and hoping that this would protect against data corruption on poor-quality RAID controllers.. but it seems that if I purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't take over, rather GPFS just returns corrupt data. This includes if just "dd" into the disk, or if I break the RAID controller somehow by yanking whole chassis and the controller responds poorly for a few seconds. Originally my thinking was that replicas were for mirroring and GPFS would somehow return whichever is the "good" copy of your data, but now I'm thinking it's just intended for better file placement.. such as having a near replica and a far replica so you dont have to cross buildings for access, etc. That, and / or, disk outages where the outage is not corruption, just simply outage either by failure or for disk-moves, SAN rewiring, etc. In those cases you wouldn't have to "move" all the data since you already have a second copy. I can see how that would makes sense.. Somehow I guess I always knew this.. but it seems many people say they will just turn on copies=2 and be "safe".. but it's not the case.. Which way is the intended? Has anyone else had experience with this realization? Thanks, -Zach -- Zach Giles zgiles at gmail.com_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From zgiles at gmail.com Fri Apr 29 13:18:29 2016 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 29 Apr 2016 08:18:29 -0400 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? In-Reply-To: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> References: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> Message-ID: Hi Uwe, You're right.. how would it know which one is the good one? I had imagined it would at least compare some piece of metadata to the block's metadata on retrieval, maybe generation number, something... However, when I think about that, it doesnt make any sense. The block on-disk is purely the data, no metadata. Thus, there won't be any structural issues when retrieving a bad block. What is the tool in 4.2 that you are referring to for comparing replicas? I'd be interested in trying it out. I didn't happen to pass-by any mmrestripefs options for that.. maybe I missed something. E2E I guess is what I'm looking for, but not on GNR. I'm just trying to investigate failure cases possible on standard-RAID hardware. I'm sure we've all had a RAID controller or two that have failed in interesting ways... -Zach On Fri, Apr 29, 2016 at 5:22 AM, Uwe Falke wrote: > Zach, > GPFS replication does not include automatically a comparison of the > replica copies. > It protects against one part (i.e. 
one FG, or two with 3-fold replication) > of the storage being down. > How should GPFS know what version is the good one if both replica copies > are readable? > > There are tools in 4.x to compare the replicas, but do use them only from > 4.2 onward (problems with prior versions). Still then you need to decide > what is the "good" copy (there is a consistency check on MD replicas > though, but correct/incorrect data blocks cannot be auto-detected for > obvious reasons). E2E Check-summing (as in GNR) would of course help here. > > > Mit freundlichen Grüßen / Kind regards > > > Dr. Uwe Falke > > IT Specialist > High Performance Computing Services / Integrated Technology Services / > Data Center Services > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland > Rathausstr. 7 > 09111 Chemnitz > Phone: +49 371 6978 2165 > Mobile: +49 175 575 2877 > E-Mail: uwefalke at de.ibm.com > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: > Frank Hammer, Thorsten Moehring > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, > HRB 17122 > > > > > From: Zachary Giles > To: gpfsug main discussion list > Date: 04/29/2016 06:22 AM > Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Fellow GPFS Users, > > I have a silly question about file replicas... I've been playing around > with copies=2 (or 3) and hoping that this would protect against data > corruption on poor-quality RAID controllers.. but it seems that if I > purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't > take over, rather GPFS just returns corrupt data. This includes if just > "dd" into the disk, or if I break the RAID controller somehow by yanking > whole chassis and the controller responds poorly for a few seconds. > > Originally my thinking was that replicas were for mirroring and GPFS would > somehow return whichever is the "good" copy of your data, but now I'm > thinking it's just intended for better file placement.. such as having a > near replica and a far replica so you dont have to cross buildings for > access, etc. That, and / or, disk outages where the outage is not > corruption, just simply outage either by failure or for disk-moves, SAN > rewiring, etc. In those cases you wouldn't have to "move" all the data > since you already have a second copy. I can see how that would makes > sense.. > > Somehow I guess I always knew this.. but it seems many people say they > will just turn on copies=2 and be "safe".. but it's not the case.. > > Which way is the intended? > Has anyone else had experience with this realization? > > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL:
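For reference, the replica comparison tool Uwe mentions appears to be the -c option of mmrestripefs; a minimal sketch of using it (the file system name gpfs0 is only a placeholder, and the scan touches every block, so check the 4.2 documentation and pick a quiet window):

  mmrestripefs gpfs0 -c    # compare data and metadata replicas and attempt to fix any differences found
  mmlsdisk gpfs0 -e        # afterwards, list any disks that are still not in an up/ready state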
From A.K.Ghumra at bham.ac.uk Fri Apr 29 17:07:17 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Fri, 29 Apr 2016 16:07:17 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Many thanks Yaron, after the change to disable encryption we were able to increase the speed via Ubuntu of copying files from the local desktop to our gpfs filestore with average speeds of 60Mbps. We also tried changing the mount from vers=3.0 to vers=2.1, which gave similar figures. However, using the Ubuntu gui ( Unity ) the speed drops down to 7Mbps, however, we're not concerned as the user will use rsync / cp. The other issue is copying data from gpfs filestore to the local HDD, which resulted in 4Mbps. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: From L.A.Hurst at bham.ac.uk Fri Apr 29 17:22:48 2016 From: L.A.Hurst at bham.ac.uk (Laurence Alexander Hurst (IT Services)) Date: Fri, 29 Apr 2016 16:22:48 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: On 29/04/2016 17:07, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Aslam Ghumra (IT Services, Facilities Management)" wrote: >Many thanks Yaron, after the change to disable encryption we were able to >increase the speed via Ubuntu of copying files from the local desktop to >our gpfs filestore with average speeds of 60Mbps. > >We also tried changing the mount from vers=3.0 to vers=2.1, which gave >similar figures > >However, using the Ubuntu gui ( Unity ) the speed drops down to 7Mbps, >however, we're not concerned as the user will use rsync / cp. > > >The other issue is copying data from gpfs filestore to the local HDD, >which resulted in 4Mbps. > >Aslam Ghumra >Research Data Management I wonder if Unity uses what used to be called the "gnome virtual filesystem" to connect. It may be using its own implementation that's not as well written a samba/cifs (whichever they are using) client as the implementation used if you mount it "properly" with mount.smb/mount.cifs. Laurence -- Laurence Hurst Research Computing, IT Services, University of Birmingham w: http://www.birmingham.ac.uk/bear (http://servicedesk.bham.ac.uk/ for support) e: l.a.hurst at bham.ac.uk From jonathan at buzzard.me.uk Fri Apr 29 21:05:02 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 29 Apr 2016 21:05:02 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <5723BE6E.6000403 at buzzard.me.uk> On 29/04/16 17:22, Laurence Alexander Hurst (IT Services) wrote: [SNIP] > I wonder if Unity uses what used to be called the "gnome virtual > filesystem" to connect. It may be using its own implementation that's > not as well written a samba/cifs (whichever they are using) client as > the implementation used if you mount it "properly" with > mount.smb/mount.cifs. Probably, as I said previously, these desktop VFS CIFS clients are significantly slower than the kernel client. It's worth remembering that a few years back the Linux kernel CIFS client was extensively optimized for speed, and was at one point at least giving better performance than the NFS client. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom.
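If it helps narrow the Unity result down, a like-for-like test of the kernel client against the gvfs path could look something like the following (server, share, mount point and credentials are placeholders; vers=3.0, domain and noperm simply mirror the options used earlier in this thread):

  sudo mount -t cifs //ces-server.example/research /mnt/cestest -o vers=3.0,domain=ADF,username=USERNAME,noperm
  dd if=/dev/zero of=/mnt/cestest/ddtest bs=1M count=1024 conv=fsync   # write through the kernel client
  dd if=/mnt/cestest/ddtest of=/dev/null bs=1M                         # read back

then repeat the same copy through the desktop (gvfs) mount, typically found under /run/user/$(id -u)/gvfs/, to see whether the gap really is the client implementation rather than the network or the CES nodes.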
From p.childs at qmul.ac.uk Fri Apr 29 21:58:53 2016 From: p.childs at qmul.ac.uk (Peter Childs) Date: Fri, 29 Apr 2016 20:58:53 +0000 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <571E82FA.2000008@genome.wustl.edu> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se>, <571E82FA.2000008@genome.wustl.edu> Message-ID: >From my experience using a Dell md3460 with zfs (not gpfs). I've not tried it with gpfs but it looks very simular to our IBM dcs3700 we run gpfs on. To get multipath to work correctly, we had to install the storage manager software from the cd that can be downloaded from Dells website, which made a few modifications to multipath.conf. Broadly speaking the blacklist comments others have made are correct. You also need to enable and start multipathd (chkconfig multipathd on) Peter Childs ITS Research and Teaching Support Queen Mary, University of London ---- Matt Weil wrote ---- enable mpathconf --enable --with_multipathd y show config multipathd show config On 4/25/16 3:27 PM, Jan Finnerman Load wrote: Hi, I realize this might not be strictly GPFS related but I?m getting a little desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and struggle on a question of disk multipathing for the intended NSD disks with their direct attached SAS disk systems. If I do a multipath ?ll, after a few seconds I just get the prompt back. I expected to see the usual big amount of path info, but nothing there. If I do a multipathd ?k and then a show config, I see all the Dell disk luns with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. devices. I can also add them in PowerKVM:s Kimchi web interface and even deploy a GPFS installation on it. The big question is, though, how do I get multipathing to work ? Do I need any special driver or setting in the multipath.conf file ? I found some of that but more generic e.g. for RedHat 6, but now we are in PowerKVM country. The platform consists of: 4x IBM S812L servers SAS controller PowerKVM 3.1 Red Hat 7.1 2x Dell MD3460 SAS disk systems No switches Jan ///Jan [cid:part1.01010308.03000406 at genome.wustl.edu] Jan Finnerman Senior Technical consultant [CertTiv_sm] [cid:part3.01010404.04060703 at genome.wustl.edu] Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00001.png Type: image/png Size: 8584 bytes Desc: ATT00001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
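For anyone else bringing up MD34xx boxes, the stanzas below show the usual RDAC-style configuration; treat them as a sketch and compare against whatever Dell's storage manager writes into /etc/multipath.conf on your release:

    # /etc/multipath.conf (fragment)
    devices {
        device {
            vendor                "DELL"
            product               "MD34xx"
            path_grouping_policy  group_by_prio
            prio                  rdac
            path_checker          rdac
            hardware_handler      "1 rdac"
            failback              immediate
            no_path_retry         30
        }
    }
    blacklist {
        device {
            vendor  "DELL"
            product "Universal Xport"   # in-band management LUN, never multipath it
        }
    }

    # then enable and start the daemon (chkconfig/service on pre-systemd hosts)
    mpathconf --enable --with_multipathd y
    systemctl enable multipathd && systemctl start multipathd
    multipath -ll

Internal disks can additionally be blacklisted by wwid, per the earlier comments in the thread.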
From YARD at il.ibm.com Sat Apr 30 06:17:28 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Sat, 30 Apr 2016 08:17:28 +0300 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <201604300517.u3U5HcbY022432@d06av12.portsmouth.uk.ibm.com> Hi It could be that GUI use in the "background" default command which use smb v1. Regard copy files from GPFS to Local HDD, it might be related to the local HDD settings. What is the speed transfer between the local HHD ? Cache Settings and so.. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: "Aslam Ghumra (IT Services, Facilities Management)" To: "gpfsug-discuss at spectrumscale.org" Date: 04/29/2016 07:07 PM Subject: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org Many thanks Yaron, after the change to disable encryption we were able to increase the speed via Ubuntu of copying files from the local desktop to our gpfs filestore with average speeds of 60Mbps. We also tried changing the mount from vers=3.0 to vers=2.1, which gave similar figures However, using the Ubuntu gui ( Unity ) the speed drops down to 7Mbps, however, we're not concerned as the user will use rsync / cp. The other issue is copying data from gpfs filestore to the local HDD, which resulted in 4Mbps. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From Robert.Oesterlin at nuance.com Fri Apr 1 16:28:07 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 1 Apr 2016 15:28:07 +0000 Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties? In-Reply-To: References: Message-ID: <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com> Thanks for clearing that up! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client -------------- next part -------------- An HTML attachment was scrubbed...
URL: From S.J.Thompson at bham.ac.uk Fri Apr 1 16:34:42 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 1 Apr 2016 15:34:42 +0000 Subject: [gpfsug-discuss] Encryption - guidelines, performance penalties? In-Reply-To: <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com> References: , <91CA6AA1-25A0-47FD-A05C-A1EE52A86E06@nuance.com> Message-ID: The docs (https://www.ibm.com/support/knowledgecenter/#!/SSFKCN_4.1.0/com.ibm.cluster.gpfs.v4r1.gpfs200.doc/bl1adv_encryption.htm) Do say at rest. It also says it protects against an untrusted node in multi cluster. I thought if you were root on such a box, whilst you cant read the file, you could delete it? Can we clear that up? Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com] Sent: 01 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? Thanks for clearing that up! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client ) From Robert.Oesterlin at nuance.com Fri Apr 1 16:35:28 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 1 Apr 2016 15:35:28 +0000 Subject: [gpfsug-discuss] Encryption - client performance penalties? Message-ID: Hit send too fast ? so the question is now ? what?s the penalty on the client side? Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Robert Oesterlin > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:28 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? Thanks for clearing that up! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid 507-269-0413 From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 1, 2016 at 10:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Encryption - guidelines, performance penalties? its done on the client -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mark.Bush at siriuscom.com Fri Apr 1 16:48:17 2016 From: Mark.Bush at siriuscom.com (Mark.Bush at siriuscom.com) Date: Fri, 1 Apr 2016 15:48:17 +0000 Subject: [gpfsug-discuss] ESS cabling guide Message-ID: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Is there such a thing as this? And if we want to use protocol nodes along with ESS could they use the same HMC as the ESS? Mark R. Bush | Solutions Architect Mobile: 210.237.8415 | mark.bush at siriuscom.com Sirius Computer Solutions | www.siriuscom.com 10100 Reunion Place, Suite 500, San Antonio, TX 78216 This message (including any attachments) is intended only for the use of the individual or entity to which it is addressed and may contain information that is non-public, proprietary, privileged, confidential, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited. 
This message may be viewed by parties at Sirius Computer Solutions other than those named in the message header. This message does not contain an official representation of Sirius Computer Solutions. If you have received this communication in error, notify Sirius Computer Solutions immediately and (i) destroy this message if a facsimile or (ii) delete this message immediately if this is an electronic communication. Thank you. Sirius Computer Solutions -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Fri Apr 1 16:48:51 2016 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Fri, 1 Apr 2016 07:48:51 -0800 Subject: [gpfsug-discuss] Encryption - client performance penalties? In-Reply-To: References: Message-ID: <201604011549.u31Fn1u8016410@d01av03.pok.ibm.com> > From: "Oesterlin, Robert" > > Hit send too fast ? so the question is now ? what?s the penalty on > the client side? > Data is encrypted/decrypted on the path to/from the storage device -- it is in cleartext in the buffer pool. If you can read-ahead and write-behind you may not see the overhead of encryption. Random reads and synchronous writes will see it. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsallen at alcf.anl.gov Fri Apr 1 17:51:16 2016 From: bsallen at alcf.anl.gov (Allen, Benjamin S.) Date: Fri, 1 Apr 2016 16:51:16 +0000 Subject: [gpfsug-discuss] ESS cabling guide In-Reply-To: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> References: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Message-ID: Mark, There are SAS and networking diagrams in the ESS Install Procedure PDF that ships with the Spectrum Scale RAID download from FixCentral. You can use the same HMC as the ESS with any other Power hardware. There is a maximum of 48 hosts per HMC however. Depending on firmware levels, you may need to upgrade the HMC first for newer hardware. Ben > On Apr 1, 2016, at 10:48 AM, Mark.Bush at siriuscom.com wrote: > > Is there such a thing as this? And if we want to use protocol nodes along with ESS could they use the same HMC as the ESS? > > > Mark R. Bush | Solutions Architect > Mobile: 210.237.8415 | mark.bush at siriuscom.com > Sirius Computer Solutions | www.siriuscom.com > 10100 Reunion Place, Suite 500, San Antonio, TX 78216 > > This message (including any attachments) is intended only for the use of the individual or entity to which it is addressed and may contain information that is non-public, proprietary, privileged, confidential, and exempt from disclosure under applicable law. If you are not the intended recipient, you are hereby notified that any use, dissemination, distribution, or copying of this communication is strictly prohibited. This message may be viewed by parties at Sirius Computer Solutions other than those named in the message header. This message does not contain an official representation of Sirius Computer Solutions. If you have received this communication in error, notify Sirius Computer Solutions immediately and (i) destroy this message if a facsimile or (ii) delete this message immediately if this is an electronic communication. Thank you. 
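A rough way to see the pattern Wayne describes on a client is to run the same load against a file in an encrypted fileset and one in a plain fileset; fio is used here purely as a convenient generic load generator, and the paths and sizes are placeholders:

    # large sequential read: prefetch mostly hides the decrypt cost
    fio --name=seq --filename=/gpfs/ess/enc/testfile --rw=read --bs=1m --size=8g --direct=1
    # small random reads: each I/O pays the decrypt on the data path
    fio --name=rand --filename=/gpfs/ess/enc/testfile --rw=randread --bs=16k --size=2g --direct=1

The difference between the encrypted and unencrypted runs, rather than the absolute numbers, is the interesting part.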
> > Sirius Computer Solutions > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From janfrode at tanso.net Fri Apr 1 20:04:58 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 1 Apr 2016 21:04:58 +0200 Subject: [gpfsug-discuss] Failure Group In-Reply-To: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se> References: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se> Message-ID: Hi :-) I seem to remember failure group 4001 was common at some point, but can't see why.. Maybe it was just the default when no failure group was specified ? Have you tried what happens if you use an empty failure group "::", does it default to "-1" on v3.4 -- or maybe "4001"? You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same. Pro's and cons of using another failure group.. Depends a bit on if they're using any replication within the filesystem. If all other NSDs are in failure group 4001 -- they can't be doing any replication, so it doesn't matter much. Only side effect I know of is that new block allocations will first go round robin over the failure groups, then round robin within the failure group, so unless you have similar amount of disks in the two failure groups the disk load might become a bit uneven. -jf On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load wrote: > Hi, > > I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw > Device Mapping. They just ran in to an issue with adding some nsd disks. > They claim that their current file system?s nsddisks are specified with > 4001 as the failure group. This is out of bounds, since the allowed range > is ?1>??>4000. > So, when they now try to add some new disks with mmcrnsd, with 4001 > specified, they get an error message. > > Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt > > [image: Description: cid:image001.png at 01D18B5D.FFCEFE30] > > > > > > His gpfsdisk.txt file looks like this. > > [image: Description: cid:image002.png at 01D18B5D.FFCEFE30] > > > > > > A listing of current disks show all as belonging to Failure group 4001 > > [image: Description: cid:image003.png at 01D18B5D.FFCEFE30] > > > > So, Why can?t he choose failure group 4001 when the existing disks are > member of that group ? > > If he creates a disk in an other failure group, what?s the pros and cons > with that ? I guess issues with replication not working as expected?. > > > Brgds > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > [image: CertTiv_sm] > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 446525C9-567E-4B06-ACA0-34865B35B109.png Type: image/png Size: 6144 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1].png Type: image/png Size: 6664 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
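For completeness, the failure-group change Jan-Frode mentions can be done on current releases with a stanza file (file system and NSD names are placeholders; on 3.4 the older colon-separated disk descriptor form documented in the mmchdisk man page is needed instead):

    # chdisk.stanza
    %nsd: nsd=nsd_vm01 failureGroup=201
    %nsd: nsd=nsd_vm02 failureGroup=201

    mmchdisk gpfs0 change -F chdisk.stanza
    mmlsdisk gpfs0        # confirm the failure group column afterwards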
Name: E895055E-B11B-47C3-BA29-E12D29D394FA.png Type: image/png Size: 8584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png Type: image/png Size: 3320 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7A01C40C-085E-430C-BA95-D4238AFE5602.png Type: image/png Size: 1648 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png Type: image/png Size: 5565 bytes Desc: not available URL: From jan.finnerman at load.se Fri Apr 1 20:16:11 2016 From: jan.finnerman at load.se (Jan Finnerman Load) Date: Fri, 1 Apr 2016 19:16:11 +0000 Subject: [gpfsug-discuss] Failure Group In-Reply-To: References: <5AE04D37-0381-4BD2-BBE6-FDC29645A122@load.se>, Message-ID: <5E3DB2EE-D644-475A-AABA-FE49BFB84D91@load.se> Ok, I checked the replication status with mmlsfs the output is: -r=1, -m=1, -R=2,-M=2, which means they don't use replication, although they could activate it. I told them that they could add the new disks to the file system with a different failure group e.g. 201 It shouldn't matter that much if they coexist with the 4001 disks, since they don't replicate. I'll follow up on Monday. MVH Jan Finnerman Konsult Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 1 apr. 2016 kl. 21:05 skrev Jan-Frode Myklebust >: Hi :-) I seem to remember failure group 4001 was common at some point, but can't see why.. Maybe it was just the default when no failure group was specified ? Have you tried what happens if you use an empty failure group "::", does it default to "-1" on v3.4 -- or maybe "4001"? You might consider changing the failure groups of the existing disks using mmchdisk if you need them to be the same. Pro's and cons of using another failure group.. Depends a bit on if they're using any replication within the filesystem. If all other NSDs are in failure group 4001 -- they can't be doing any replication, so it doesn't matter much. Only side effect I know of is that new block allocations will first go round robin over the failure groups, then round robin within the failure group, so unless you have similar amount of disks in the two failure groups the disk load might become a bit uneven. -jf On Fri, Apr 1, 2016 at 1:04 PM, Jan Finnerman Load > wrote: Hi, I have a customer with GPFS 3.4.0.11 on Windows @VMware with VMware Raw Device Mapping. They just ran in to an issue with adding some nsd disks. They claim that their current file system's nsddisks are specified with 4001 as the failure group. This is out of bounds, since the allowed range is -1>-->4000. So, when they now try to add some new disks with mmcrnsd, with 4001 specified, they get an error message. Customer runs this command: mmcrnsd -F D:\slask\gpfs\gpfsdisk.txt His gpfsdisk.txt file looks like this. <7A01C40C-085E-430C-BA95-D4238AFE5602.png> A listing of current disks show all as belonging to Failure group 4001 <446525C9-567E-4B06-ACA0-34865B35B109.png> So, Why can't he choose failure group 4001 when the existing disks are member of that group ? If he creates a disk in an other failure group, what's the pros and cons with that ? I guess issues with replication not working as expected.... 
Brgds ///Jan Jan Finnerman Senior Technical consultant Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 446525C9-567E-4B06-ACA0-34865B35B109.png Type: image/png Size: 6144 bytes Desc: 446525C9-567E-4B06-ACA0-34865B35B109.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1].png Type: image/png Size: 6664 bytes Desc: CertPowerSystems_sm[1].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA.png Type: image/png Size: 8584 bytes Desc: E895055E-B11B-47C3-BA29-E12D29D394FA.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png Type: image/png Size: 3320 bytes Desc: B13E252A-3014-49AD-97EE-6E9B4D57A9F4.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7A01C40C-085E-430C-BA95-D4238AFE5602.png Type: image/png Size: 1648 bytes Desc: 7A01C40C-085E-430C-BA95-D4238AFE5602.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png Type: image/png Size: 5565 bytes Desc: F1EE9474-7BCC-41E6-8237-D949E9DC35D3.png URL: From janfrode at tanso.net Sat Apr 2 20:27:09 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Sat, 02 Apr 2016 19:27:09 +0000 Subject: [gpfsug-discuss] ESS cabling guide In-Reply-To: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> References: <9F67A04D-0AC1-4F2D-9D97-D2BE5C1022F8@siriuscom.com> Message-ID: Share hmc is no problem, also I think it should be fairly easy to use the xcat-setup on the EMS to deploy and manage the protocol nodes. -jf fre. 1. apr. 2016 kl. 17.48 skrev Mark.Bush at siriuscom.com < Mark.Bush at siriuscom.com>: > Is there such a thing as this? And if we want to use protocol nodes along > with ESS could they use the same HMC as the ESS? > > > Mark R. Bush | Solutions Architect > Mobile: 210.237.8415 | mark.bush at siriuscom.com > Sirius Computer Solutions | www.siriuscom.com > 10100 Reunion Place, Suite 500, San Antonio, TX 78216 > > This message (including any attachments) is intended only for the use of > the individual or entity to which it is addressed and may contain > information that is non-public, proprietary, privileged, confidential, and > exempt from disclosure under applicable law. If you are not the intended > recipient, you are hereby notified that any use, dissemination, > distribution, or copying of this communication is strictly prohibited. This > message may be viewed by parties at Sirius Computer Solutions other than > those named in the message header. This message does not contain an > official representation of Sirius Computer Solutions. 
If you have received > this communication in error, notify Sirius Computer Solutions immediately > and (i) destroy this message if a facsimile or (ii) delete this message > immediately if this is an electronic communication. Thank you. > Sirius Computer Solutions > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From usa-principal at gpfsug.org Mon Apr 4 21:52:37 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Mon, 4 Apr 2016 16:52:37 -0400 Subject: [gpfsug-discuss] GPFS/Spectrum Scale Upcoming US Events - Save the Dates Message-ID: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Tue Apr 5 10:50:35 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Tue, 5 Apr 2016 09:50:35 +0000 Subject: [gpfsug-discuss] Excluding AFM Caches from mmbackup Message-ID: Hi All, Is there any intelligence yet for mmbackup to ignore AFM cache filesets? I guess a way to do this would be to dynamically re-write TSM include / exclude rules based on the extended attributes of the fileset; for example: 1. Scan the all the available filesets in the filesystem, determining which ones have the MISC_ATTRIBUTE=%P% set, 2. Lookup the junction points for the list of filesets returned in (1), 3. 
Write out EXCLUDE statements for TSM for each directory in (2), 4. Proceed with mmbackup using the new EXCLUDE rules. Presumably one could accomplish this by using the -P flag for mmbackup and writing your own rule to do this? But, maybe IBM could do this for me and put another flag on the mmbackup command :) Although... a blanket flag for ignoring AFM caches altogether might not be good if you want to backup changed files in a local-update cache. Anybody want to do this work for me? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From chair at spectrumscale.org Mon Apr 11 10:37:38 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 11 Apr 2016 10:37:38 +0100 Subject: [gpfsug-discuss] UK May Meeting Message-ID: Hi All, We are down to our last few places for the May user group meeting, if you are planning to come along, please do register: The draft agenda and registration for the day is at: http://www.eventbrite.com/e/spectrum-scale-gpfs-uk-user-group-spring-2016-t ickets-21724951916 If you have registered and aren't able to attend now, please do let us know so that we can free the slot for other members of the group. We also have 1 slot left on the agenda for a user talk, so if you have an interesting deployment or plans and are able to speak, please let me know! Thanks Simon From damir.krstic at gmail.com Mon Apr 11 14:15:30 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 11 Apr 2016 13:15:30 +0000 Subject: [gpfsug-discuss] backup and disaster recovery solutions Message-ID: We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 11 15:34:54 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 10:34:54 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: Message-ID: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> Do you want backups or periodic frozen snapshots of the file system? Backups can entail some level of version control, so that you or end-users can get files back on certain points in time, in case of accidental deletions. Besides 1.5PB is a lot of material, so you may not want to take full snapshots that often. In that case, a combination of daily incremental backups using TSM with GPFS's mmbackup can be a good option. TSM also does a very good job at controlling how material is distributed across multiple tapes, and that is something that requires a lot of micro-management if you want a home grown solution of rsync+LTFS. 
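As a concrete sketch of the mmbackup route (and of the AFM-cache exclusion Luke asks about further up): the file system path, node class, server name and fileset names are all placeholders, and the exclude rules would normally be merged into the rule set mmbackup generates, or expressed as EXCLUDE.DIR statements in the Spectrum Protect client options, rather than passed on their own:

    # find the AFM cache filesets and their junction paths
    mmlsfileset gpfs0 --afm -L

    # policy EXCLUDE rules for the cache filesets (merge into the full mmbackup rules before using -P)
    RULE 'skipAfmCache1' EXCLUDE WHERE FILESET_NAME = 'afmcache1'
    RULE 'skipAfmCache2' EXCLUDE WHERE FILESET_NAME = 'afmcache2'

    # nightly incremental, driven from a node class, against one Spectrum Protect server
    mmbackup /gpfs/ess -t incremental -N backupnodes --tsm-servers TSM1 -g /gpfs/ess/.mmbackupCfg

Since -P takes a complete policy file, it is worth proving the rules against a small fileset before letting them loose on the whole file system.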
On the other hand, you could use gpfs built-in tools such a mmapplypolicy to identify candidates for incremental backup, and send them to LTFS. Just more micro management, and you may have to come up with your own tool to let end-users restore their stuff, or you'll have to act on their behalf. Jaime Quoting Damir Krstic : > We have implemented 1.5PB ESS solution recently in our HPC environment. > Today we are kicking of backup and disaster recovery discussions so I was > wondering what everyone else is using for their backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life cycle > feature - so if the file is not touched for number of days, it's moved to a > tape (something like LTFS). > > Thanks in advance. > > DAmir > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jonathan at buzzard.me.uk Mon Apr 11 16:02:45 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 16:02:45 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> Message-ID: <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. Is there any other viable option other than TSM for backing up 1.5PB of data? All other backup software does not handle this at all well. > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > I was not aware of a way of letting end users restore their stuff from *backup* for any of the major backup software while respecting the file system level security of the original file system. If you let the end user have access to the backup they can restore any file to any location which is generally not a good idea. 
I do have a concept of creating a read only Fuse mounted file system from a TSM point in time synthetic backup, and then using the shadow copy feature of Samba to enable restores using the "Previous Versions" feature of windows file manager. I got as far as getting a directory tree you could browse through but then had an enforced change of jobs and don't have access to a TSM server any more to continue development. Note if anyone from IBM is listening that would be a super cool feature. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From makaplan at us.ibm.com Mon Apr 11 16:11:24 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 11 Apr 2016 11:11:24 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: Message-ID: <201604111511.u3BFBVbg015832@d03av02.boulder.ibm.com> Since you write " so if the file is not touched for number of days, it's moved to a tape" - that is what we call the HSM feature. This is additional function beyond backup. IBM has two implementations. (1) TSM/HSM now called IBM Spectrum Protect. http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management (2) HPSS http://www.hpss-collaboration.org/ The GPFS (Spectrum Scale File System) policy feature supports both, so that mmapplypolicy and GPFS policy rules can be used to perform accelerated metadata scans to identify which files should be migrated. Also, GPFS supports on-demand recall (on application reads) of data from long term storage (tape) to GPFS storage (disk or SSD). See also DMAPI. From: Damir Krstic To: gpfsug main discussion list Date: 04/11/2016 09:16 AM Subject: [gpfsug-discuss] backup and disaster recovery solutions Sent by: gpfsug-discuss-bounces at spectrumscale.org We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
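To make Marc's point about policy-driven scans concrete, a first step that moves nothing is a LIST rule plus a deferred run; the path, rule name and the 180-day threshold are placeholders:

    # coldfiles.pol
    RULE 'cold' LIST 'coldfiles'
      WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 180

    # writes the candidate list files with the /tmp/cold prefix and takes no action
    mmapplypolicy /gpfs/ess -P coldfiles.pol -I defer -f /tmp/cold

The same WHERE clause can later drive a MIGRATE rule to an external HSM pool once Spectrum Protect for Space Management or HPSS is in place.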
Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From pinto at scinet.utoronto.ca Mon Apr 11 16:18:47 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 11:18:47 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> Message-ID: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> I heard as recently as last Friday from IBM support/vendors/developers of GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) offers a GUI interface that is user centric, and will allow for unprivileged users to restore their own material via a newer WebGUI (one that also works with Firefox, Chrome and on linux, not only IE on Windows). Users may authenticate via AD or LDAP, and traverse only what they would be allowed to via linux permissions and ACLs. Jaime Quoting Jonathan Buzzard : > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: >> Do you want backups or periodic frozen snapshots of the file system? >> >> Backups can entail some level of version control, so that you or >> end-users can get files back on certain points in time, in case of >> accidental deletions. Besides 1.5PB is a lot of material, so you may >> not want to take full snapshots that often. In that case, a >> combination of daily incremental backups using TSM with GPFS's >> mmbackup can be a good option. TSM also does a very good job at >> controlling how material is distributed across multiple tapes, and >> that is something that requires a lot of micro-management if you want >> a home grown solution of rsync+LTFS. > > Is there any other viable option other than TSM for backing up 1.5PB of > data? All other backup software does not handle this at all well. > >> On the other hand, you could use gpfs built-in tools such a >> mmapplypolicy to identify candidates for incremental backup, and send >> them to LTFS. Just more micro management, and you may have to come up >> with your own tool to let end-users restore their stuff, or you'll >> have to act on their behalf. >> > > I was not aware of a way of letting end users restore their stuff from > *backup* for any of the major backup software while respecting the file > system level security of the original file system. If you let the end > user have access to the backup they can restore any file to any location > which is generally not a good idea. > > I do have a concept of creating a read only Fuse mounted file system > from a TSM point in time synthetic backup, and then using the shadow > copy feature of Samba to enable restores using the "Previous Versions" > feature of windows file manager. > > I got as far as getting a directory tree you could browse through but > then had an enforced change of jobs and don't have access to a TSM > server any more to continue development. > > Note if anyone from IBM is listening that would be a super cool feature. > > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. 
> > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jtucker at pixitmedia.com Mon Apr 11 16:23:06 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Mon, 11 Apr 2016 16:23:06 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: Hi Having just commissioned three TSM setups and one with HSM, I can say that's not available from the standard APAR updates at present - however it would be rather nice... The current release is 7.1.5 http://www-01.ibm.com/support/docview.wss?uid=swg24041864 Jez On Mon, Apr 11, 2016 at 4:18 PM, Jaime Pinto wrote: > I heard as recently as last Friday from IBM support/vendors/developers of > GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) offers a > GUI interface that is user centric, and will allow for unprivileged users > to restore their own material via a newer WebGUI (one that also works with > Firefox, Chrome and on linux, not only IE on Windows). Users may > authenticate via AD or LDAP, and traverse only what they would be allowed > to via linux permissions and ACLs. > > Jaime > > > Quoting Jonathan Buzzard : > > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: >> >>> Do you want backups or periodic frozen snapshots of the file system? >>> >>> Backups can entail some level of version control, so that you or >>> end-users can get files back on certain points in time, in case of >>> accidental deletions. Besides 1.5PB is a lot of material, so you may >>> not want to take full snapshots that often. In that case, a >>> combination of daily incremental backups using TSM with GPFS's >>> mmbackup can be a good option. TSM also does a very good job at >>> controlling how material is distributed across multiple tapes, and >>> that is something that requires a lot of micro-management if you want >>> a home grown solution of rsync+LTFS. >>> >> >> Is there any other viable option other than TSM for backing up 1.5PB of >> data? All other backup software does not handle this at all well. >> >> On the other hand, you could use gpfs built-in tools such a >>> mmapplypolicy to identify candidates for incremental backup, and send >>> them to LTFS. Just more micro management, and you may have to come up >>> with your own tool to let end-users restore their stuff, or you'll >>> have to act on their behalf. >>> >>> >> I was not aware of a way of letting end users restore their stuff from >> *backup* for any of the major backup software while respecting the file >> system level security of the original file system. If you let the end >> user have access to the backup they can restore any file to any location >> which is generally not a good idea. 
>> >> I do have a concept of creating a read only Fuse mounted file system >> from a TSM point in time synthetic backup, and then using the shadow >> copy feature of Samba to enable restores using the "Previous Versions" >> feature of windows file manager. >> >> I got as far as getting a directory tree you could browse through but >> then had an enforced change of jobs and don't have access to a TSM >> server any more to continue development. >> >> Note if anyone from IBM is listening that would be a super cool feature. >> >> >> JAB. >> >> -- >> Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk >> Fife, United Kingdom. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> > > --- > Jaime Pinto > SciNet HPC Consortium - Compute/Calcul Canada > www.scinet.utoronto.ca - www.computecanada.org > University of Toronto > 256 McCaul Street, Room 235 > Toronto, ON, M5T1W5 > P: 416-978-2755 > C: 416-505-1477 > > ---------------------------------------------------------------- > This message was sent using IMP at SciNet Consortium, University of > Toronto. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominic.mueller at de.ibm.com Mon Apr 11 16:26:45 2016 From: dominic.mueller at de.ibm.com (Dominic Mueller-Wicke01) Date: Mon, 11 Apr 2016 17:26:45 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 51, Issue 9 In-Reply-To: References: Message-ID: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> Spectrum Protect backup (under the hood of mmbackup) and Spectrum Protect for Space Management (HSM) can be combined on the same data. There are some valuable integration topics between the products that can reduce the overall network traffic if using backup and HSM on the same files. With the combination of the products you have the ability to free file system space from cold data and migrate them out to tape and to have several versions of frequently used files in backup in the same file system. Greetings, Dominic. 
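For anyone who has not run the two together before, the Space Management client also gives a per-file view of where things stand once backup and HSM share the file system; the path below is a placeholder and the commands ship with Spectrum Protect for Space Management:

    dsmls /gpfs/ess/projects/old/run42.dat       # resident, premigrated or migrated?
    dsmmigrate /gpfs/ess/projects/old/run42.dat  # push the data out, leave a stub behind
    dsmrecall /gpfs/ess/projects/old/run42.dat   # bring it back (a read would also trigger recall)

Backing up or premigrating before migrating is one of the integration points that avoids the recall traffic Dominic alludes to.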
______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com Vorsitzende des Aufsichtsrats: Martina Koederitz; Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen; Registergericht: Amtsgericht Stuttgart, HRB 243294 From: gpfsug-discuss-request at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Date: 11.04.2016 17:11 Subject: gpfsug-discuss Digest, Vol 51, Issue 9 Sent by: gpfsug-discuss-bounces at spectrumscale.org Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. backup and disaster recovery solutions (Damir Krstic) 2. Re: backup and disaster recovery solutions (Jaime Pinto) 3. Re: backup and disaster recovery solutions (Jonathan Buzzard) 4. Re: backup and disaster recovery solutions (Marc A Kaplan) ----- Message from Damir Krstic on Mon, 11 Apr 2016 13:15:30 +0000 ----- To: gpfsug main discussion list Subject: [gpfsug-discuss] backup and disaster recovery solutions We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir ----- Message from Jaime Pinto on Mon, 11 Apr 2016 10:34:54 -0400 ----- To: gpfsug main discussion list , Damir Krstic Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions Do you want backups or periodic frozen snapshots of the file system? Backups can entail some level of version control, so that you or end-users can get files back on certain points in time, in case of accidental deletions. Besides 1.5PB is a lot of material, so you may not want to take full snapshots that often. In that case, a combination of daily incremental backups using TSM with GPFS's mmbackup can be a good option. TSM also does a very good job at controlling how material is distributed across multiple tapes, and that is something that requires a lot of micro-management if you want a home grown solution of rsync+LTFS. On the other hand, you could use gpfs built-in tools such a mmapplypolicy to identify candidates for incremental backup, and send them to LTFS. Just more micro management, and you may have to come up with your own tool to let end-users restore their stuff, or you'll have to act on their behalf. Jaime Quoting Damir Krstic : > We have implemented 1.5PB ESS solution recently in our HPC environment. > Today we are kicking of backup and disaster recovery discussions so I was > wondering what everyone else is using for their backup? 
> > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life cycle > feature - so if the file is not touched for number of days, it's moved to a > tape (something like LTFS). > > Thanks in advance. > > DAmir > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. ----- Message from Jonathan Buzzard on Mon, 11 Apr 2016 16:02:45 +0100 ----- To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. Is there any other viable option other than TSM for backing up 1.5PB of data? All other backup software does not handle this at all well. > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > I was not aware of a way of letting end users restore their stuff from *backup* for any of the major backup software while respecting the file system level security of the original file system. If you let the end user have access to the backup they can restore any file to any location which is generally not a good idea. I do have a concept of creating a read only Fuse mounted file system from a TSM point in time synthetic backup, and then using the shadow copy feature of Samba to enable restores using the "Previous Versions" feature of windows file manager. I got as far as getting a directory tree you could browse through but then had an enforced change of jobs and don't have access to a TSM server any more to continue development. Note if anyone from IBM is listening that would be a super cool feature. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. ----- Message from "Marc A Kaplan" on Mon, 11 Apr 2016 11:11:24 -0400 ----- To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] backup and disaster recovery solutions Since you write "so if the file is not touched for number of days, it's moved to a tape" - that is what we call the HSM feature. This is additional function beyond backup. IBM has two implementations. 
(1) TSM/HSM now called IBM Spectrum Protect. http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management (2) HPSS http://www.hpss-collaboration.org/ The GPFS (Spectrum Scale File System) policy feature supports both, so that mmapplypolicy and GPFS policy rules can be used to perform accelerated metadata scans to identify which files should be migrated. Also, GPFS supports on-demand recall (on application reads) of data from long term storage (tape) to GPFS storage (disk or SSD). See also DMAPI. Marc A Kaplan From: Damir Krstic To: gpfsug main discussion list Date: 04/11/2016 09:16 AM Subject: [gpfsug-discuss] backup and disaster recovery solutions Sent by: gpfsug-discuss-bounces at spectrumscale.org We have implemented 1.5PB ESS solution recently in our HPC environment. Today we are kicking of backup and disaster recovery discussions so I was wondering what everyone else is using for their backup? In our old storage environment we simply rsync-ed home and software directories and projects were not backed up. With ESS we are looking for more of a GPFS based backup solution - something to tape possibly and also something that will have life cycle feature - so if the file is not touched for number of days, it's moved to a tape (something like LTFS). Thanks in advance. DAmir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0E436792.gif Type: image/gif Size: 21994 bytes Desc: not available URL: From jez.tucker at gpfsug.org Mon Apr 11 16:31:52 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Mon, 11 Apr 2016 16:31:52 +0100 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 51, Issue 9 In-Reply-To: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> References: <201604111529.u3BFT51c027238@d06av02.portsmouth.uk.ibm.com> Message-ID: <570BC368.9090307@gpfsug.org> Dominic, Speculatively, when is TSM converting from DMAPI to Light Weight Events? Is there an up-to-date slide share we can put on the UG website regarding the 7.1.11 / public roadmap? Jez On 11/04/16 16:26, Dominic Mueller-Wicke01 wrote: > > Spectrum Protect backup (under the hood of mmbackup) and Spectrum > Protect for Space Management (HSM) can be combined on the same data. > There are some valuable integration topics between the products that > can reduce the overall network traffic if using backup and HSM on the > same files. With the combination of the products you have the ability > to free file system space from cold data and migrate them out to tape > and to have several versions of frequently used files in backup in the > same file system. > > Greetings, Dominic. 
> > ______________________________________________________________________________________________________________ > Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical > Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com > > Vorsitzende des Aufsichtsrats: Martina Koederitz; Gesch?ftsf?hrung: > Dirk Wittkopp > Sitz der Gesellschaft: B?blingen; Registergericht: Amtsgericht > Stuttgart, HRB 243294 > > Inactive hide details for gpfsug-discuss-request---11.04.2016 > 17:11:55---Send gpfsug-discuss mailing list submissions to > gpfsugpfsug-discuss-request---11.04.2016 17:11:55---Send > gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > From: gpfsug-discuss-request at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Date: 11.04.2016 17:11 > Subject: gpfsug-discuss Digest, Vol 51, Issue 9 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > ------------------------------------------------------------------------ > > > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > Today's Topics: > > 1. backup and disaster recovery solutions (Damir Krstic) > 2. Re: backup and disaster recovery solutions (Jaime Pinto) > 3. Re: backup and disaster recovery solutions (Jonathan Buzzard) > 4. Re: backup and disaster recovery solutions (Marc A Kaplan) > > ----- Message from Damir Krstic on Mon, 11 > Apr 2016 13:15:30 +0000 ----- > *To:* > gpfsug main discussion list > *Subject:* > [gpfsug-discuss] backup and disaster recovery solutions > > We have implemented 1.5PB ESS solution recently in our HPC > environment. Today we are kicking of backup and disaster recovery > discussions so I was wondering what everyone else is using for their > backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life > cycle feature - so if the file is not touched for number of days, it's > moved to a tape (something like LTFS). > > Thanks in advance. > > DAmir > ----- Message from Jaime Pinto on Mon, 11 > Apr 2016 10:34:54 -0400 ----- > *To:* > gpfsug main discussion list , Damir > Krstic > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > Do you want backups or periodic frozen snapshots of the file system? > > Backups can entail some level of version control, so that you or > end-users can get files back on certain points in time, in case of > accidental deletions. Besides 1.5PB is a lot of material, so you may > not want to take full snapshots that often. In that case, a > combination of daily incremental backups using TSM with GPFS's > mmbackup can be a good option. TSM also does a very good job at > controlling how material is distributed across multiple tapes, and > that is something that requires a lot of micro-management if you want > a home grown solution of rsync+LTFS. 
> > On the other hand, you could use gpfs built-in tools such a > mmapplypolicy to identify candidates for incremental backup, and send > them to LTFS. Just more micro management, and you may have to come up > with your own tool to let end-users restore their stuff, or you'll > have to act on their behalf. > > Jaime > > > > > Quoting Damir Krstic : > > > We have implemented 1.5PB ESS solution recently in our HPC environment. > > Today we are kicking of backup and disaster recovery discussions so > I was > > wondering what everyone else is using for their backup? > > > > In our old storage environment we simply rsync-ed home and software > > directories and projects were not backed up. > > > > With ESS we are looking for more of a GPFS based backup solution - > > something to tape possibly and also something that will have life cycle > > feature - so if the file is not touched for number of days, it's > moved to a > > tape (something like LTFS). > > > > Thanks in advance. > > > > DAmir > > > > > > > > > ************************************ > TELL US ABOUT YOUR SUCCESS STORIES > http://www.scinethpc.ca/testimonials > ************************************ > --- > Jaime Pinto > SciNet HPC Consortium - Compute/Calcul Canada > www.scinet.utoronto.ca - www.computecanada.org > University of Toronto > 256 McCaul Street, Room 235 > Toronto, ON, M5T1W5 > P: 416-978-2755 > C: 416-505-1477 > > ---------------------------------------------------------------- > This message was sent using IMP at SciNet Consortium, University of > Toronto. > > > > > ----- Message from Jonathan Buzzard on Mon, > 11 Apr 2016 16:02:45 +0100 ----- > *To:* > gpfsug-discuss at spectrumscale.org > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > On Mon, 2016-04-11 at 10:34 -0400, Jaime Pinto wrote: > > Do you want backups or periodic frozen snapshots of the file system? > > > > Backups can entail some level of version control, so that you or > > end-users can get files back on certain points in time, in case of > > accidental deletions. Besides 1.5PB is a lot of material, so you may > > not want to take full snapshots that often. In that case, a > > combination of daily incremental backups using TSM with GPFS's > > mmbackup can be a good option. TSM also does a very good job at > > controlling how material is distributed across multiple tapes, and > > that is something that requires a lot of micro-management if you want > > a home grown solution of rsync+LTFS. > > Is there any other viable option other than TSM for backing up 1.5PB of > data? All other backup software does not handle this at all well. > > > On the other hand, you could use gpfs built-in tools such a > > mmapplypolicy to identify candidates for incremental backup, and send > > them to LTFS. Just more micro management, and you may have to come up > > with your own tool to let end-users restore their stuff, or you'll > > have to act on their behalf. > > > > I was not aware of a way of letting end users restore their stuff from > *backup* for any of the major backup software while respecting the file > system level security of the original file system. If you let the end > user have access to the backup they can restore any file to any location > which is generally not a good idea. > > I do have a concept of creating a read only Fuse mounted file system > from a TSM point in time synthetic backup, and then using the shadow > copy feature of Samba to enable restores using the "Previous Versions" > feature of windows file manager. 
> > I got as far as getting a directory tree you could browse through but > then had an enforced change of jobs and don't have access to a TSM > server any more to continue development. > > Note if anyone from IBM is listening that would be a super cool feature. > > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. > > > > > ----- Message from "Marc A Kaplan" on Mon, 11 > Apr 2016 11:11:24 -0400 ----- > *To:* > gpfsug main discussion list > *Subject:* > Re: [gpfsug-discuss] backup and disaster recovery solutions > > Since you write "so if the file is not touched for number of days, > it's moved to a tape" - > that is what we call the HSM feature. This is additional function > beyond backup. IBM has two implementations. > > (1) TSM/HSM now called IBM Spectrum Protect. > _http://www-03.ibm.com/software/products/en/spectrum-protect-for-space-management_ > > (2) HPSS _http://www.hpss-collaboration.org/_ > > The GPFS (Spectrum Scale File System) policy feature supports both, so > that mmapplypolicy and GPFS policy rules can be used to perform > accelerated metadata scans to identify which files should be migrated. > > Also, GPFS supports on-demand recall (on application reads) of data > from long term storage (tape) to GPFS storage (disk or SSD). See also > DMAPI. > > > > Marc A Kaplan > > > > From: Damir Krstic > To: gpfsug main discussion list > Date: 04/11/2016 09:16 AM > Subject: [gpfsug-discuss] backup and disaster recovery solutions > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------------------------------------------------ > > > > We have implemented 1.5PB ESS solution recently in our HPC > environment. Today we are kicking of backup and disaster recovery > discussions so I was wondering what everyone else is using for their > backup? > > In our old storage environment we simply rsync-ed home and software > directories and projects were not backed up. > > With ESS we are looking for more of a GPFS based backup solution - > something to tape possibly and also something that will have life > cycle feature - so if the file is not touched for number of days, it's > moved to a tape (something like LTFS). > > Thanks in advance. > > DAmir _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org_ > __http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From makaplan at us.ibm.com Mon Apr 11 16:50:03 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 11 Apr 2016 11:50:03 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca><1460386965.19299.108.camel@buzzard.phy.strath.ac.uk><20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> IBM HSM products have always supported unprivileged, user triggered recall of any file. I am not familiar with any particular GUI, but from the CLI, it's easy enough: dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # pulling the first few blocks will trigger a complete recall if the file happens to be on HSM We also had IBM HSM for mainframe MVS, years and years ago, which is now called DFHSM for z/OS. (I remember using this from TSO...) If the file has been migrated to a tape archive, accessing the file will trigger a tape mount which can take a while, depending on how fast your tape mounting (robot?), operates and what other requests may be queued ahead of yours....! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Mon Apr 11 17:01:19 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 17:01:19 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <1460390479.19299.125.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 11:50 -0400, Marc A Kaplan wrote: > IBM HSM products have always supported unprivileged, user triggered > recall of any file. I am not familiar with any particular GUI, but > from the CLI, it's easy enough: Sure, but HSM != Backup. Right now secure aka with the appropriate level of privilege recall of *BACKUPS* ain't supported to my knowledge. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jez.tucker at gpfsug.org Mon Apr 11 17:01:37 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Mon, 11 Apr 2016 17:01:37 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <570BCA61.4010900@gpfsug.org> Yes, but since the dsmrootd in 6.3.4+ removal be aware that several commands now require sudo: jtucker at tsm-demo-01:~$ dsmls /mmfs1/afile IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 7, Release 1, Level 4.4 Client date/time: 11/04/16 16:58:18 (c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved. ActS ResS ResB FSt FName ANS9505E dsmls: cannot initialize the DMAPI interface. 
Reason: Operation not permitted jtucker at tsm-demo-01:~$ sudo dsmls /mmfs1/afile [sudo] password for jtucker: IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 7, Release 1, Level 4.4 Client date/time: 11/04/16 16:58:25 (c) Copyright by IBM Corporation and other(s) 1990, 2016. All Rights Reserved. ActS ResS ResB FSt FName 8 8 0 p afile Though, yes, a straight cat of the file as an unpriv user works fine. Jez On 11/04/16 16:50, Marc A Kaplan wrote: > IBM HSM products have always supported unprivileged, user triggered > recall of any file. I am not familiar with any particular GUI, but > from the CLI, it's easy enough: > > dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # > pulling the first few blocks will trigger a complete recall if the > file happens to be on HSM > > We also had IBM HSM for mainframe MVS, years and years ago, which is > now called DFHSM for z/OS. (I remember using this from TSO...) > > If the file has been migrated to a tape archive, accessing the file > will trigger a tape mount which can take a while, depending on how > fast your tape mounting (robot?), operates and what other requests may > be queued ahead of yours....! > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 11 17:03:00 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 11 Apr 2016 12:03:00 -0400 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca><1460386965.19299.108.camel@buzzard.phy.strath.ac.uk><20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> <201604111544.u3BFiCcd006767@d01av05.pok.ibm.com> Message-ID: <20160411120300.171861d6i1iu1ltg@support.scinet.utoronto.ca> Hi Mark Personally I'm aware of the HSM features. However I was specifically referring to TSM Backup restore. I was told the new GUI for unprivileged users looks identical to what root would see, but unprivileged users would only be able to see material for which they have read permissions, and restore only to paths they have write permissions. The GUI is supposed to be a difference platform then the java/WebSphere like we have seen in the past to manage TSM. I'm looking forward to it as well. Jaime Quoting Marc A Kaplan : > IBM HSM products have always supported unprivileged, user triggered recall > of any file. I am not familiar with any particular GUI, but from the CLI, > it's easy enough: > > dd if=/pathtothefileyouwantrecalled of=/dev/null bs=1M count=2 & # > pulling the first few blocks will trigger a complete recall if the file > happens to be on HSM > > We also had IBM HSM for mainframe MVS, years and years ago, which is now > called DFHSM for z/OS. (I remember using this from TSO...) > > If the file has been migrated to a tape archive, accessing the file will > trigger a tape mount which can take a while, depending on how fast your > tape mounting (robot?), operates and what other requests may be queued > ahead of yours....! 
> > > > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From jonathan at buzzard.me.uk Mon Apr 11 17:03:04 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 11 Apr 2016 17:03:04 +0100 Subject: [gpfsug-discuss] backup and disaster recovery solutions In-Reply-To: <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> References: <20160411103454.13132ppwu3en258u@support.scinet.utoronto.ca> <1460386965.19299.108.camel@buzzard.phy.strath.ac.uk> <20160411111847.48853e2lgp51nxw7@support.scinet.utoronto.ca> Message-ID: <1460390584.19299.127.camel@buzzard.phy.strath.ac.uk> On Mon, 2016-04-11 at 11:18 -0400, Jaime Pinto wrote: > I heard as recently as last Friday from IBM support/vendors/developers > of GPFS/TSM/HSM that the newest release of Spectrum Protect (7.11) > offers a GUI interface that is user centric, and will allow for > unprivileged users to restore their own material via a newer WebGUI > (one that also works with Firefox, Chrome and on linux, not only IE on > Windows). Users may authenticate via AD or LDAP, and traverse only > what they would be allowed to via linux permissions and ACLs. > Hum, if they are they are not exactly advertising the feature or my Google foo is in extremely short supply today. Do you have a pointer to this on the web anywhere? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From mweil at genome.wustl.edu Mon Apr 11 17:05:17 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Mon, 11 Apr 2016 11:05:17 -0500 Subject: [gpfsug-discuss] GPFS 4.2 SMB with IPA Message-ID: <570BCB3D.1020602@genome.wustl.edu> Hello all, Is there any good documentation out there to integrate IPA with CES? Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From janfrode at tanso.net Mon Apr 11 17:43:21 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 11 Apr 2016 16:43:21 +0000 Subject: [gpfsug-discuss] GPFS 4.2 SMB with IPA In-Reply-To: <570BCB3D.1020602@genome.wustl.edu> References: <570BCB3D.1020602@genome.wustl.edu> Message-ID: As IPA is just an LDAP directory + kerberos, I believe you can follow example 7 in the mmuserauth manual. Another way would be to install your CES nodes into your domain outside of GPFS, and use the userdefined mmuserauth config. That's how I would have preferred to do it in an IPA managed linux environment. 
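For the userdefined route, a minimal outline (domain name and options are placeholders, and this is only a sketch, not something tested here) might be:

   # enrol each protocol node in IPA outside of GPFS
   ipa-client-install --domain=example.com --mkhomedir

   # then tell CES that file authentication is handled externally
   mmuserauth service create --data-access-method file --type userdefined
   mmuserauth service list     # verify FILE access now shows as userdefined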
But, I believe there are still some problems with it overwriting /etc/krb5.keytab and /etc/nsswitch.conf, and stopping "sssd" unnecessarily on mmshutdown. So you might want to make the keytab and nsswitch immutable (chatter +i), and have some logic in f.ex. /var/mmfs/etc/mmfsup that restarts or somehow makes sure sssd is running. Oh.. and you'll need a shared NFS service principal in the krb5.keytab on all nodes to be able to use failover addresses.. and same for samba (which I think hides the ticket in /var/lib/samba/private/netlogon_creds_cli.tdb). -jf man. 11. apr. 2016 kl. 18.05 skrev Matt Weil : > Hello all, > > Is there any good documentation out there to integrate IPA with CES? > > Thanks > > Matt > > ____ > This email message is a private communication. The information > transmitted, including attachments, is intended only for the person or > entity to which it is addressed and may contain confidential, privileged, > and/or proprietary material. Any review, duplication, retransmission, > distribution, or other use of, or taking of any action in reliance upon, > this information by persons or entities other than the intended recipient > is unauthorized by the sender and is prohibited. If you have received this > message in error, please contact the sender immediately by return email and > delete the original message from all computer systems. Thank you. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.roland.pabel at gmail.com Tue Apr 12 09:03:34 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Tue, 12 Apr 2016 10:03:34 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes Message-ID: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> Hi everyone, we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is fairly new, we are still in the testing phase. A few days ago, we had some problems in the cluster which seemed to have started with deadlocks on a small number of nodes. To be better prepared for this scenario, I would like to install a callback for Event deadlockDetected. But this is a local event and the callback is executed on the client nodes, from which I cannot even send an email. Is it possible using mm-commands to instead delegate the callback to the servers (Nodeclass nsdNodes)? I guess it would be possible to use a callback of the form "ssh nsd0 /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 being available. The mm-command style "-N nsdNodes" would more reliable in my opinion, because it would be run on all servers. On the servers, I can then check to actually only execute the script on the cluster manager. Thanks Roland -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Tue Apr 12 12:54:39 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 12 Apr 2016 11:54:39 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> Message-ID: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Some general thoughts on ?deadlocks? and automated deadlock detection. I personally don?t like the term ?deadlock? 
as it implies a condition that won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC waiter? over a certain threshold. RPCs that wait on certain events can and do occur and they can take some time to complete. This is not necessarily a condition that is a problem, but you should be looking into them. GPFS does have automated deadlock detection and collection, but in the early releases it was ? well.. it?s not very ?robust?. With later releases (4.2) it?s MUCH better. I personally don?t rely on it because in larger clusters it can be too aggressive and depending on what?s really going on it can make things worse. This statement is my opinion and it doesn?t mean it?s not a good thing to have. :-) On the point of what commands to execute and what to collect ? be careful about long running callback scripts and executing commands on other nodes. Depending on what the issues is, you could end up causing a deadlock or making it worse. Some basic data collection, local to the node with the long RPC waiter is a good thing. Test them well before deploying them. And make sure that you don?t conflict with the automated collections. (which you might consider turning off) For my larger clusters, I dump the cluster waiters on a regular basis (once a minute: mmlsnode ?N waiters ?L), count the types and dump them into a database for graphing via Grafana. This doesn?t help me with true deadlock alerting, but it does give me insight into overall cluster behavior. If I see large numbers of long waiters I will (usually) go and investigate them on a cases by case basis. If you have large numbers of long RPC waiters on an ongoing basis, it's an indication of a larger problem that should be investigated. A few here and there is not a cause for real alarm in my experience. Last ? if you have a chance to upgrade to 4.1.1 or 4.2, I would encourage you to do so as the deadlock detection has improved quite a bit. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid robert.oesterlin at nuance.com From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Tuesday, April 12, 2016 at 3:03 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Executing Callbacks on other Nodes Hi everyone, we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is fairly new, we are still in the testing phase. A few days ago, we had some problems in the cluster which seemed to have started with deadlocks on a small number of nodes. To be better prepared for this scenario, I would like to install a callback for Event deadlockDetected. But this is a local event and the callback is executed on the client nodes, from which I cannot even send an email. Is it possible using mm-commands to instead delegate the callback to the servers (Nodeclass nsdNodes)? I guess it would be possible to use a callback of the form "ssh nsd0 /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 being available. The mm-command style "-N nsdNodes" would more reliable in my opinion, because it would be run on all servers. On the servers, I can then check to actually only execute the script on the cluster manager. Thanks Roland -- Dr. 
Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=CwIFAw&c=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY&r=LPDewt1Z4o9eKc86MXmhqX-45Cz1yz1ylYELF9olLKU&m=c7jzNm-H6SdZMztP1xkwgySivoe4FlOcI2pS2SCJ8K8&s=AfohxS7tz0ky5C8ImoufbQmQpdwpo4wEO7cSCzHPCD0&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From dr.roland.pabel at gmail.com Tue Apr 12 14:25:33 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Tue, 12 Apr 2016 15:25:33 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> Hi Bob, thanks for your remarks. I already understood that deadlocks are more timeouts than "tangled up balls of code". I was not (yet) planning on changing the whole routine, I'd just like to get a notice when something unexpected happens in the cluster. So, first, I just want to write these notices into a file and email it once it reaches a certain size. >From what you are saying, it sounds like it is worth upgrading to 4.1.1.x . We are planning a maintenance next month, I'll try to get this into the todo- list. Upgrading beyond this is going require a longer preparation, unless the prerequisite of "RHEL 6.4 or later" as stated on the IBM FAQ is irrelevant. Our clients still run RHEL 6.3. Best regards, Roland > Some general thoughts on ?deadlocks? and automated deadlock detection. > > I personally don?t like the term ?deadlock? as it implies a condition that > won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC > waiter? over a certain threshold. RPCs that wait on certain events can and > do occur and they can take some time to complete. This is not necessarily a > condition that is a problem, but you should be looking into them. > GPFS does have automated deadlock detection and collection, but in the early > releases it was ? well.. it?s not very ?robust?. With later releases (4.2) > it?s MUCH better. I personally don?t rely on it because in larger clusters > it can be too aggressive and depending on what?s really going on it can > make things worse. This statement is my opinion and it doesn?t mean it?s > not a good thing to have. :-) > On the point of what commands to execute and what to collect ? be careful > about long running callback scripts and executing commands on other nodes. > Depending on what the issues is, you could end up causing a deadlock or > making it worse. Some basic data collection, local to the node with the > long RPC waiter is a good thing. Test them well before deploying them. And > make sure that you don?t conflict with the automated collections. (which > you might consider turning off) > For my larger clusters, I dump the cluster waiters on a regular basis (once > a minute: mmlsnode ?N waiters ?L), count the types and dump them into a > database for graphing via Grafana. This doesn?t help me with true deadlock > alerting, but it does give me insight into overall cluster behavior. 
If I > see large numbers of long waiters I will (usually) go and investigate them > on a cases by case basis. If you have large numbers of long RPC waiters on > an ongoing basis, it's an indication of a larger problem that should be > investigated. A few here and there is not a cause for real alarm in my > experience. > Last ? if you have a chance to upgrade to 4.1.1 or 4.2, I would encourage > you to do so as the deadlock detection has improved quite a bit. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > robert.oesterlin at nuance.com > > From: > ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > > > Date: Tuesday, April 12, 2016 at 3:03 AM > To: gpfsug main discussion list > > > Subject: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi everyone, > > we are using GPFS 4.1.0.8 with 4 servers and 850 clients. Our GPFS setup is > fairly new, we are still in the testing phase. A few days ago, we had some > problems in the cluster which seemed to have started with deadlocks on a > small number of nodes. To be better prepared for this scenario, I would > like to install a callback for Event deadlockDetected. But this is a local > event and the callback is executed on the client nodes, from which I cannot > even send an email. > > Is it possible using mm-commands to instead delegate the callback to the > servers (Nodeclass nsdNodes)? > > I guess it would be possible to use a callback of the form "ssh nsd0 > /root/bin/deadlock-callback.sh", but then it is contingent upon server nsd0 > being available. The mm-command style "-N nsdNodes" would more reliable in > my opinion, because it would be run on all servers. On the servers, I can > then check to actually only execute the script on the cluster manager. > Thanks > > Roland > -- > Dr. Roland Pabel > Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) > Weyertal 121, Raum 3.07 > D-50931 K?ln > > Tel.: +49 (221) 470-89589 > E-Mail: pabel at uni-koeln.de > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listi > nfo_gpfsug-2Ddiscuss&d=CwIFAw&c=djjh8EKwHtOepW4Bjau0lKhLlu-DxM1dlgP0rrLsOzY& > r=LPDewt1Z4o9eKc86MXmhqX-45Cz1yz1ylYELF9olLKU&m=c7jzNm-H6SdZMztP1xkwgySivoe4 > FlOcI2pS2SCJ8K8&s=AfohxS7tz0ky5C8ImoufbQmQpdwpo4wEO7cSCzHPCD0&e= -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Tue Apr 12 15:09:10 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 12 Apr 2016 14:09:10 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <2149839.vuvB37DuRo@soliton.rrz.uni-koeln.de> Message-ID: <59C81E1E-59CC-40C4-8A7E-73CC88F0741F@nuance.com> Hi Roland I ran into that issue as well ? if you are running 6.3 you need to update to get to the later levels. RH 6.3 is getting a bit dated, so an upgrade might be a good idea ? but I all too well how hard it is to push through those updates! 
Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Tuesday, April 12, 2016 at 8:25 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi Bob, thanks for your remarks. I already understood that deadlocks are more timeouts than "tangled up balls of code". I was not (yet) planning on changing the whole routine, I'd just like to get a notice when something unexpected happens in the cluster. So, first, I just want to write these notices into a file and email it once it reaches a certain size. From what you are saying, it sounds like it is worth upgrading to 4.1.1.x . We are planning a maintenance next month, I'll try to get this into the todo- list. Upgrading beyond this is going require a longer preparation, unless the prerequisite of "RHEL 6.4 or later" as stated on the IBM FAQ is irrelevant. Our clients still run RHEL 6.3. Best regards, Roland -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Apr 12 23:01:40 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 12 Apr 2016 18:01:40 -0400 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <201604122201.u3CM1o7d031628@d01av02.pok.ibm.com> My understanding is (someone will correct me if I'm wrong) ... GPFS does not have true deadlock detection. As you say it has time outs. The argument is: As a practical matter, it makes not much difference to a sysadmin or user -- if things are gummed up "too long" they start to smell like a deadlock, so we may as well intervene as though there were a true technical deadlock. A genuine true deadlock is a situation where things are gummed up, there is no progress, and one can prove that there will be no progress, no matter how long one waits. E.g. Classically, you have locked resource A and I have locked resource B and now I decide I need resource A and I am waiting indefinitely long for that. And you have decided you need resouce B and you are waiting indefinitely for that. We are then deadlocked. Deadlock can occur on a single node or over multiple nodes. Technically it may be possible to execute a deadlock detection protocol that would identify cyclic, deadlocking dependencies, but it was decided that, for GPFS, it would be more practical to detect "very long waiters"... From: "Oesterlin, Robert" Some general thoughts on ?deadlocks? and automated deadlock detection. I personally don?t like the term ?deadlock? as it implies a condition that won?t ever resolve itself. In GPFS terms, a deadlock is really a ?long RPC waiter? over a certain threshold. RPCs that wait on certain events can and do occur and they can take some time to complete. This is not necessarily a condition that is a problem, but you should be looking into them. GPFS does have automated deadlock detection and collection, but in the early releases it was ? well.. it?s not very ?robust?. With later releases (4.2) it?s MUCH better. I personally don?t rely on it because in larger clusters it can be too aggressive and depending on what?s really going on it can make things worse. This statement is my opinion and it doesn?t mean it?s not a good thing to have. :-) ... 
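Coming back to the original question in this thread, a minimal way to hook the deadlockDetected event while keeping the data collection cheap and local to the node (callback identifier, script and log paths are only examples, untested) could look like:

   mmaddcallback deadlockNotify \
       --command /var/mmfs/etc/deadlock-notify.sh \
       --event deadlockDetected --async

   # /var/mmfs/etc/deadlock-notify.sh
   #!/bin/bash
   # append a timestamped snapshot of the local waiters; a cron job or the
   # NSD servers can pick this file up and mail it later
   {
     echo "=== $(date '+%F %T') deadlockDetected on $(hostname -s) ==="
     /usr/lpp/mmfs/bin/mmdiag --waiters
   } >> /var/log/gpfs-deadlock-notify.log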
-------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 14 15:19:58 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 14 Apr 2016 15:19:58 +0100 Subject: [gpfsug-discuss] May user group, call for help! Message-ID: Hi All, For the UK May user group meeting, we are hoping to be able to film the sessions so that we can post as many as talks as possible (permission permitting!) online after the event. In order to do this, we require some kit to film the sessions with ... If you are attending the day and have a video camera that we might be able to borrow, please let me or Claire know! If we don't get support from the community then we won't be able to film and share the talks afterwards! So if you are coming along and have something you'd be happy for us to use for the two days, please do let us know! Thanks Simon (UK Group Chair) From Robert.Oesterlin at nuance.com Thu Apr 14 19:10:20 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 18:10:20 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore Message-ID: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> I?m getting these messages (repeating) in the mmfslog after I restored an NSD node ( relocated to a new physical system) with mmsddrestore - the server seems normal otherwise - what should I do? Thu Apr 14 13:44:48.800 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.1' failed (2) Thu Apr 14 13:44:48.801 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) Thu Apr 14 13:44:48.802 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.2' failed (2) Thu Apr 14 13:44:48.803 2016: [N] Load both paxos local files bad Thu Apr 14 13:44:48.804 2016: [N] Open /var/mmfs/ccr/ccr.paxos.1 failed (2) Thu Apr 14 13:44:48.805 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.1' failed (2) Thu Apr 14 13:44:48.806 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) Thu Apr 14 13:44:48.807 2016: [N] Load from file: '/var/mmfs/ccr/ccr.paxos.2' failed (2) Thu Apr 14 13:44:48.808 2016: [N] Load both paxos local files bad Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Thu Apr 14 19:22:41 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 14 Apr 2016 18:22:41 +0000 Subject: [gpfsug-discuss] GPFS 4.2 and 4.1 in multi-cluster environment Message-ID: <7635681D-31ED-461B-82A0-F17DA19DDFF4@vanderbilt.edu> Hi All, We have a multi-cluster environment consisting of: 1) a ?traditional? HPC cluster running on commodity hardware, and 2) a DDN based cluster which is mounted to the HPC cluster and also exports to researchers around campus using both CNFS and SAMBA / CTDB. Both of these cluster are currently running GPFS 4.1.0.8 efix 21. We are considering doing upgrades in May. I would like to take the HPC cluster to GPFS 4.2.0.x not just because that?s the current version, but to get some of the QoS features introduced in 4.2. However, it may not be possible to take the DDN cluster to GPFS 4.2. I?ve got another inquiry in to them about their plans, but the latest information I have is that they only support up thru GPFS 4.1.1.x. I know that it should be possible to run with the HPC cluster at GPFS 4.2.0.x and the DDN cluster at 4.1.1.x ? my question is - is anyone actually doing that? Any suggestions / warnings? 
I should mention that this question is motivated by the fact that a couple of years ago when both clusters were running GPFS 3.5.0.x, we got them out of sync on the PTF levels (I think the HPC cluster was at PTF 19 and the DDN cluster at PTF 11) and it caused problems. Because of that, we have tried to keep them in sync as much as possible. Thanks in advance, all? ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu Apr 14 20:33:17 2016 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 14 Apr 2016 19:33:17 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> Message-ID: I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. -jf tor. 14. apr. 2016 kl. 20.10 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > I?m getting these messages (repeating) in the mmfslog after I restored an > NSD node ( relocated to a new physical system) with mmsddrestore - the > server seems normal otherwise - what should I do? > > Thu Apr 14 13:44:48.800 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.1' failed (2) > Thu Apr 14 13:44:48.801 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) > Thu Apr 14 13:44:48.802 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.2' failed (2) > Thu Apr 14 13:44:48.803 2016: [N] Load both paxos local files bad > Thu Apr 14 13:44:48.804 2016: [N] Open /var/mmfs/ccr/ccr.paxos.1 failed (2) > Thu Apr 14 13:44:48.805 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.1' failed (2) > Thu Apr 14 13:44:48.806 2016: [N] Open /var/mmfs/ccr/ccr.paxos.2 failed (2) > Thu Apr 14 13:44:48.807 2016: [N] Load from file: > '/var/mmfs/ccr/ccr.paxos.2' failed (2) > Thu Apr 14 13:44:48.808 2016: [N] Load both paxos local files bad > > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 14 20:39:02 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 19:39:02 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> Message-ID: <4668D451-7C58-456C-B160-54642C07C155@nuance.com> Yea ? turning of CCR means shutting down the entire cluster. Not an option. CCR is VERY POORLY documented. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Jan-Frode Myklebust > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:33 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. 
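A rough sketch of that switch (node names are placeholders, and note that both directions appear to need the daemon down cluster-wide, which is the sticking point raised below):

   mmshutdown -a
   mmchcluster --ccr-disable -p nsdserver1 -s nsdserver2   # fall back to primary/secondary config servers
   mmstartup -a
   # ... redo the mmsdrrestore / clean up the stale ccr.paxos.* files here ...
   mmshutdown -a
   mmchcluster --ccr-enable                                # back to CCR
   mmstartup -a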
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 14 21:35:46 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 14 Apr 2016 20:35:46 +0000 Subject: [gpfsug-discuss] CCR error messages after mmsdrrestore In-Reply-To: <4668D451-7C58-456C-B160-54642C07C155@nuance.com> References: <865FB5EC-8C38-49AB-B3DD-743E34B1F0F6@nuance.com> <4668D451-7C58-456C-B160-54642C07C155@nuance.com> Message-ID: <035C8381-5C9E-41A5-9DBC-55AEF25B14CC@nuance.com> Following up to my own problem?. It would appear mmsdrrestore doesn?t work (well) with quorum nodes in a CCR enabled cluster. So: change node to non-quorum mmsdrrestore change back to quorum Hey IBM ? how about we document this! Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Robert Oesterlin > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:39 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore Yea ? turning of CCR means shutting down the entire cluster. Not an option. CCR is VERY POORLY documented. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Jan-Frode Myklebust > Reply-To: gpfsug main discussion list > Date: Thursday, April 14, 2016 at 2:33 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] CCR error messages after mmsdrrestore I would try switching from CCR to primary/secondary config servers, maybe delete the paxos files, and then back to CCR. I believe that's how I got out of a similar situation on a v4.1.1.x installation this january.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chekh at stanford.edu Fri Apr 15 00:30:51 2016 From: chekh at stanford.edu (Alex Chekholko) Date: Thu, 14 Apr 2016 16:30:51 -0700 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> Message-ID: <5710282B.6060603@stanford.edu> ++ On 04/12/2016 04:54 AM, Oesterlin, Robert wrote: > For my larger clusters, I dump the cluster waiters on a regular basis > (once a minute: mmlsnode ?N waiters ?L), count the types and dump them > into a database for graphing via Grafana. -- Alex Chekholko chekh at stanford.edu 347-401-4860 From dr.roland.pabel at gmail.com Fri Apr 15 16:50:21 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Fri, 15 Apr 2016 17:50:21 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <5710282B.6060603@stanford.edu> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> Message-ID: <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> Hi, In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So running it every 30 seconds is a bit close. I'll try running it once a minute and then incorporating this into our graphing. Maybe the command is so slow for me because a few nodes are down? Is there a parameter to mmlsnode to configure the timeout? Thanks, Roland > ++ > > On 04/12/2016 04:54 AM, Oesterlin, Robert wrote: > > For my larger clusters, I dump the cluster waiters on a regular basis > > (once a minute: mmlsnode ?N waiters ?L), count the types and dump them > > into a database for graphing via Grafana. -- Dr. 
Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From Robert.Oesterlin at nuance.com Fri Apr 15 17:02:08 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 15 Apr 2016 16:02:08 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> Message-ID: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> This command is just using ssh to all the nodes and dumping the waiter information and collecting it. That means if the node is down, slow to respond, or there are a large number of nodes, it could take a while to return. In my 400-500 node clusters this command usually take less than 10 seconds. I do prefix the command with a timeout value in case a node is hung up and ssh never returns (which it sometimes does, and that?s not the fault of GPFS) Something like this: timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L This means I get incomplete information, but if you don?t you end up piling up a lot of hung up commands. I would check over your cluster carefully to see if there are other issues that might cause ssh to hang up ? which could impact other GPFS commands that distribute via ssh. Another approach would be to dump the waiters locally on each node, send node specific information to the database, and then sum it up using the graphing software. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Roland Pabel > Organization: RRZK Uni K?ln Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 10:50 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi, In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So running it every 30 seconds is a bit close. I'll try running it once a minute and then incorporating this into our graphing. Maybe the command is so slow for me because a few nodes are down? Is there a parameter to mmlsnode to configure the timeout? -------------- next part -------------- An HTML attachment was scrubbed... URL: From tortay at cc.in2p3.fr Fri Apr 15 17:06:41 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Fri, 15 Apr 2016 18:06:41 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Message-ID: <57111191.4050200@cc.in2p3.fr> Hello, I have a testbed cluster where I have setup AFM for an incremental NFS migration between 2 GPFS filesystems in the same cluster. This is with Spectrum Scale 4.1.1-5 on Linux (CentOS 7). The documentation states: "On a GPFS data source, AFM moves all user extended attributes and ACLs, and file sparseness is maintained." (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) If I'm not mistaken, I have a GPFS data source (since I'm doing a migration from GPFS to GPFS). 
While file sparseness is mostly maintained, user extended attributes and ACLs in the source/home filesystem do not appear to be migrated to the target/cache filesystem (same goes for basic tests with ACLs): % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 getfattr: Removing leading '/' from absolute path names # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 user.mfiles:sha2-256 % While on the target filesystem: % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 % Am I missing something ? Is there another meaning to "user extended attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | From oehmes at gmail.com Fri Apr 15 17:12:26 2016 From: oehmes at gmail.com (Sven Oehme) Date: Fri, 15 Apr 2016 12:12:26 -0400 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: If you can wait a few more month we will have stats for this in Zimon. Sven On Apr 15, 2016 12:02 PM, "Oesterlin, Robert" wrote: > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while to > return. In my 400-500 node clusters this command usually take less than 10 > seconds. I do prefix the command with a timeout value in case a node is > hung up and ssh never returns (which it sometimes does, and that?s not the > fault of GPFS) Something like this: > > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up > piling up a lot of hung up commands. I would check over your cluster > carefully to see if there are other issues that might cause ssh to hang up > ? which could impact other GPFS commands that distribute via ssh. > > Another approach would be to dump the waiters locally on each node, send > node specific information to the database, and then sum it up using the > graphing software. > > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: on behalf of Roland > Pabel > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So > running it every 30 seconds is a bit close. I'll try running it once a > minute > and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? > Is there a parameter to mmlsnode to configure the timeout? > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Fri Apr 15 17:48:14 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 15 Apr 2016 16:48:14 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <76F417E7-35DA-4EF8-A8B0-94EB044453FA@nuance.com> <5710282B.6060603@stanford.edu> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: Excellent! I have Zimon fully deployed and this will make my life much easier. :-) Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Sven Oehme > Reply-To: gpfsug main discussion list > Date: Friday, April 15, 2016 at 11:12 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes If you can wait a few more month we will have stats for this in Zimon. Sven -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Sat Apr 16 10:23:32 2016 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Sat, 16 Apr 2016 14:53:32 +0530 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <57111191.4050200@cc.in2p3.fr> References: <57111191.4050200@cc.in2p3.fr> Message-ID: <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> Hi, Can you check if AFM was enabled at home cluster using "mmafmconfig enable" command? What is the fileset mode are you using ? Regards, Venkat ------------------------------------------------------------------- Venkateswara R Puvvada/India/IBM at IBMIN vpuvvada at in.ibm.com From: Loic Tortay To: gpfsug-discuss at spectrumscale.org Date: 04/15/2016 09:35 PM Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, I have a testbed cluster where I have setup AFM for an incremental NFS migration between 2 GPFS filesystems in the same cluster. This is with Spectrum Scale 4.1.1-5 on Linux (CentOS 7). The documentation states: "On a GPFS data source, AFM moves all user extended attributes and ACLs, and file sparseness is maintained." (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) If I'm not mistaken, I have a GPFS data source (since I'm doing a migration from GPFS to GPFS). While file sparseness is mostly maintained, user extended attributes and ACLs in the source/home filesystem do not appear to be migrated to the target/cache filesystem (same goes for basic tests with ACLs): % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 getfattr: Removing leading '/' from absolute path names # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 user.mfiles:sha2-256 % While on the target filesystem: % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 % Am I missing something ? Is there another meaning to "user extended attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tortay at cc.in2p3.fr Sat Apr 16 10:40:12 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Sat, 16 Apr 2016 11:40:12 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> References: <57111191.4050200@cc.in2p3.fr> <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> Message-ID: <5712087C.9060608@cc.in2p3.fr> On 16/04/2016 11:23, Venkateswara R Puvvada wrote: > Hi, > > Can you check if AFM was enabled at home cluster using "mmafmconfig > enable" command? What is the fileset mode are you using ? > Hello, AFM was enabled for the 2 home filesets/NFS exports with "mmafmconfig enable /fs1/zone1" & "mmafmconfig enable /fs1/zone2". The fileset mode is read-only for botch cache filesets. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | > Regards, > Venkat > ------------------------------------------------------------------- > Venkateswara R Puvvada/India/IBM at IBMIN > vpuvvada at in.ibm.com > > > > > From: Loic Tortay > To: gpfsug-discuss at spectrumscale.org > Date: 04/15/2016 09:35 PM > Subject: [gpfsug-discuss] Extended attributes and ACLs with > AFM-based "NFS migration" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello, > I have a testbed cluster where I have setup AFM for an incremental NFS > migration between 2 GPFS filesystems in the same cluster. This is with > Spectrum Scale 4.1.1-5 on Linux (CentOS 7). > > The documentation states: "On a GPFS data source, AFM moves all user > extended attributes and ACLs, and file sparseness is maintained." > (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) > > If I'm not mistaken, I have a GPFS data source (since I'm doing a > migration from GPFS to GPFS). > > While file sparseness is mostly maintained, user extended attributes and > ACLs in the source/home filesystem do not appear to be migrated to the > target/cache filesystem (same goes for basic tests with ACLs): > % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > getfattr: Removing leading '/' from absolute path names > # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > user.mfiles:sha2-256 > % > While on the target filesystem: > % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > % > > Am I missing something ? Is there another meaning to "user extended > attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2931 bytes Desc: S/MIME Cryptographic Signature URL: From viccornell at gmail.com Mon Apr 18 14:41:36 2016 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 18 Apr 2016 14:41:36 +0100 Subject: [gpfsug-discuss] AFM Question Message-ID: Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to ?failover? 
AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic -------------- next part -------------- An HTML attachment was scrubbed... URL: From pinto at scinet.utoronto.ca Mon Apr 18 14:54:14 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Mon, 18 Apr 2016 09:54:14 -0400 Subject: [gpfsug-discuss] GPFS on ZFS? Message-ID: <20160418095414.10636zytueeqmupy@support.scinet.utoronto.ca> Since we can not get GNR outside ESS/GSS appliances, is anybody using ZFS for software raid on commodity storage? Thanks Jaime --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From dr.roland.pabel at gmail.com Mon Apr 18 16:10:02 2016 From: dr.roland.pabel at gmail.com (Roland Pabel) Date: Mon, 18 Apr 2016 17:10:02 +0200 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> Message-ID: <7692100.SyKvSf6dcU@soliton.rrz.uni-koeln.de> Hi Bob, I'll try the second approach, i.e, collecting "mmfsadm dump waiters" locally and then summing the values up, since it can be done without the overhead of ssh. You mentioned mmlsnode starts all these ssh commands and that made me look into the file itself. I then noticed most of the mm commands are actually scripts. This helps a lot with regards to my original question. mmdsh seems to do what I need. Thanks, Roland > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while to > return. In my 400-500 node clusters this command usually take less than 10 > seconds. I do prefix the command with a timeout value in case a node is > hung up and ssh never returns (which it sometimes does, and that?s not the > fault of GPFS) Something like this: > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up piling > up a lot of hung up commands. I would check over your cluster carefully to > see if there are other issues that might cause ssh to hang up ? which could > impact other GPFS commands that distribute via ssh. > Another approach would be to dump the waiters locally on each node, send > node specific information to the database, and then sum it up using the > graphing software. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: > ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > > > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > > > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. So > running it every 30 seconds is a bit close. 
I'll try running it once a > minute and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? > Is there a parameter to mmlsnode to configure the timeout? > > -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de From JRLang at uwyo.edu Mon Apr 18 17:28:25 2016 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Mon, 18 Apr 2016 16:28:25 +0000 Subject: [gpfsug-discuss] Executing Callbacks on other Nodes In-Reply-To: <7692100.SyKvSf6dcU@soliton.rrz.uni-koeln.de> References: <1864107.xeMDsJKa4h@soliton.rrz.uni-koeln.de> <1633236.3yxlj1T8xB@soliton.rrz.uni-koeln.de> <54324DF6-A380-4449-A74E-3AE76F26F68F@nuance.com> <7692100.SyKvSf6dcU@soliton.rrz.uni-koeln.de> Message-ID: Roland Here's a tool written by NCAR that provides waiter information on a per node bases using a light weight daemon on the monitored node. I have been using it for a while and it has helped me find and figure out long waiter nodes. It might do what you are looking for. https://sourceforge.net/projects/gpfsmonitorsuite/ jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Roland Pabel Sent: Monday, April 18, 2016 9:10 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes Hi Bob, I'll try the second approach, i.e, collecting "mmfsadm dump waiters" locally and then summing the values up, since it can be done without the overhead of ssh. You mentioned mmlsnode starts all these ssh commands and that made me look into the file itself. I then noticed most of the mm commands are actually scripts. This helps a lot with regards to my original question. mmdsh seems to do what I need. Thanks, Roland > This command is just using ssh to all the nodes and dumping the waiter > information and collecting it. That means if the node is down, slow to > respond, or there are a large number of nodes, it could take a while > to return. In my 400-500 node clusters this command usually take less > than 10 seconds. I do prefix the command with a timeout value in case > a node is hung up and ssh never returns (which it sometimes does, and > that?s not the fault of GPFS) Something like this: > timeout 45s /usr/lpp/mmfs/bin/mmlsnode -N waiters ?L > > This means I get incomplete information, but if you don?t you end up > piling up a lot of hung up commands. I would check over your cluster > carefully to see if there are other issues that might cause ssh to > hang up ? which could impact other GPFS commands that distribute via ssh. > Another approach would be to dump the waiters locally on each node, > send node specific information to the database, and then sum it up > using the graphing software. > Bob Oesterlin > Sr Storage Engineer, Nuance HPC Grid > > From: > s at spe ctrumscale.org>> on behalf of Roland Pabel > > > Organization: RRZK Uni K?ln > Reply-To: gpfsug main discussion list > org>> > Date: Friday, April 15, 2016 at 10:50 AM > To: gpfsug main discussion list > org>> > Subject: Re: [gpfsug-discuss] Executing Callbacks on other Nodes > > Hi, > > In our cluster, mmlsnode ?N waiters ?L takes about 25 seconds to run. > So running it every 30 seconds is a bit close. I'll try running it > once a minute and then incorporating this into our graphing. > > Maybe the command is so slow for me because a few nodes are down? 
> Is there a parameter to mmlsnode to configure the timeout? > > -- Dr. Roland Pabel Regionales Rechenzentrum der Universit?t zu K?ln (RRZK) Weyertal 121, Raum 3.07 D-50931 K?ln Tel.: +49 (221) 470-89589 E-Mail: pabel at uni-koeln.de _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From shankbal at in.ibm.com Tue Apr 19 06:47:11 2016 From: shankbal at in.ibm.com (Shankar Balasubramanian) Date: Tue, 19 Apr 2016 11:17:11 +0530 Subject: [gpfsug-discuss] AFM Question In-Reply-To: References: Message-ID: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> SW mode does not support failover. IW does, so this will not work. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell To: gpfsug main discussion list Date: 04/18/2016 07:13 PM Subject: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Tue Apr 19 07:01:07 2016 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Tue, 19 Apr 2016 11:31:07 +0530 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <5712087C.9060608@cc.in2p3.fr> References: <57111191.4050200@cc.in2p3.fr><201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> <5712087C.9060608@cc.in2p3.fr> Message-ID: <201604190602.u3J62bl314745928@d28relay02.in.ibm.com> Hi, AFM usually logs the following message at gateway node if it cannot open control file to read ACLs/EAs. AFM: Cannot find control file for file system fileset in the exported file system at home. ACLs and extended attributes will not be synchronized. Sparse files will have zeros written for holes. If the above message didn't not appear in logs and if AFM failed to bring ACLs, this may be a defect. Please open PMR with supporting traces to debug this issue further. Thanks. 
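A quick way to check for that message is to grep the MMFS log on the gateway nodes, for example (the log path below is the usual default and may differ on your installation):

# on an AFM gateway node
grep -i "cannot find control file" /var/adm/ras/mmfs.log.latest

# or from one node across the whole cluster
/usr/lpp/mmfs/bin/mmdsh -N all 'grep -i "cannot find control file" /var/adm/ras/mmfs.log.latest'
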
Regards, Venkat ------------------------------------------------------------------- Venkateswara R Puvvada/India/IBM at IBMIN vpuvvada at in.ibm.com From: Loic Tortay To: gpfsug main discussion list Date: 04/16/2016 03:10 PM Subject: Re: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Sent by: gpfsug-discuss-bounces at spectrumscale.org On 16/04/2016 11:23, Venkateswara R Puvvada wrote: > Hi, > > Can you check if AFM was enabled at home cluster using "mmafmconfig > enable" command? What is the fileset mode are you using ? > Hello, AFM was enabled for the 2 home filesets/NFS exports with "mmafmconfig enable /fs1/zone1" & "mmafmconfig enable /fs1/zone2". The fileset mode is read-only for botch cache filesets. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | > Regards, > Venkat > ------------------------------------------------------------------- > Venkateswara R Puvvada/India/IBM at IBMIN > vpuvvada at in.ibm.com > > > > > From: Loic Tortay > To: gpfsug-discuss at spectrumscale.org > Date: 04/15/2016 09:35 PM > Subject: [gpfsug-discuss] Extended attributes and ACLs with > AFM-based "NFS migration" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello, > I have a testbed cluster where I have setup AFM for an incremental NFS > migration between 2 GPFS filesystems in the same cluster. This is with > Spectrum Scale 4.1.1-5 on Linux (CentOS 7). > > The documentation states: "On a GPFS data source, AFM moves all user > extended attributes and ACLs, and file sparseness is maintained." > (SpectrumScale 4.1.1 Advanced Administration Guide, page 226) > > If I'm not mistaken, I have a GPFS data source (since I'm doing a > migration from GPFS to GPFS). > > While file sparseness is mostly maintained, user extended attributes and > ACLs in the source/home filesystem do not appear to be migrated to the > target/cache filesystem (same goes for basic tests with ACLs): > % getfattr /fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > getfattr: Removing leading '/' from absolute path names > # file: fs1/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > user.mfiles:sha2-256 > % > While on the target filesystem: > % getfattr /fs2/zone1/s04/1900/3e479a3eb2eb92d419f812ba1287e8c6269 > % > > Am I missing something ? Is there another meaning to "user extended > attributes" than OS level extended attributes (i.e. non-GPFS xattr) ? > [attachment "smime.p7s" deleted by Venkateswara R Puvvada/India/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Tue Apr 19 11:46:00 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Tue, 19 Apr 2016 10:46:00 +0000 Subject: [gpfsug-discuss] AFM Question In-Reply-To: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> Message-ID: Hi Shankar, Vic, Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. 
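For the pre-fetch half of that, a rough sketch would be the following (file system and fileset names are placeholders, and the exact options should be checked against the release in use):

# pull only the metadata from home into the rebuilt cache fileset
/usr/lpp/mmfs/bin/mmafmctl fsA prefetch -j filesetA --metadata-only

# watch the fileset state and queue before attempting the switchover
/usr/lpp/mmfs/bin/mmafmctl fsA getstate -j filesetA

The mode change itself (local-update to single-writer) is the part that would need testing first on a scratch fileset, since converting cache modes is the least well documented part of this.
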
I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? Cheers, Luke. From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Shankar Balasubramanian Sent: 19 April 2016 06:47 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM Question SW mode does not support failover. IW does, so this will not work. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell > To: gpfsug main discussion list > Date: 04/18/2016 07:13 PM Subject: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Tue Apr 19 12:04:31 2016 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 19 Apr 2016 12:04:31 +0100 Subject: [gpfsug-discuss] AFM Question In-Reply-To: References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> Message-ID: Thanks Luke, The whole business of ?promoting? a cache from one type to another isn?t documented very well in the places that I am looking. I would be grateful to anyone with more info to share. I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading. Also Async DR means that I have to create snapshots (and worse delete them) on the ?working? side of a replication pair and this is something I?m not in a tearing hurry to do. Regards, Vic > On 19 Apr 2016, at 11:46, Luke Raimbach wrote: > > Hi Shankar, Vic, > > Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? > > Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. 
> > I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? > > Cheers, > Luke. > ? <> > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org ] On Behalf Of Shankar Balasubramanian > Sent: 19 April 2016 06:47 > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] AFM Question > > SW mode does not support failover. IW does, so this will not work. > > > Best Regards, > Shankar Balasubramanian > AFM & Async DR Development > IBM Systems > Bangalore - Embassy Golf Links > India > > > > > > From: Vic Cornell > > To: gpfsug main discussion list > > Date: 04/18/2016 07:13 PM > Subject: [gpfsug-discuss] AFM Question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi All, > Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? > > If it is not immediately obvious why this might be useful, see the following scenario: > > Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. > > The system hosting A fails and all data on fileset A is lost. > > Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. > > Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. > > So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? > > Cheers, > > Vic > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From shankbal at in.ibm.com Tue Apr 19 12:07:27 2016 From: shankbal at in.ibm.com (Shankar Balasubramanian) Date: Tue, 19 Apr 2016 16:37:27 +0530 Subject: [gpfsug-discuss] AFM Question In-Reply-To: References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> Message-ID: <201604191117.u3JBHYqi27525232@d28relay04.in.ibm.com> You can disable snapshots creation on DR by simply disabling RPO feature on DR. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell To: gpfsug main discussion list Date: 04/19/2016 04:34 PM Subject: Re: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Luke, The whole business of ?promoting? a cache from one type to another isn?t documented very well in the places that I am looking. I would be grateful to anyone with more info to share. I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading. 
Also Async DR means that I have to create snapshots (and worse delete them) on the ?working? side of a replication pair and this is something I?m not in a tearing hurry to do. Regards, Vic On 19 Apr 2016, at 11:46, Luke Raimbach wrote: Hi Shankar, Vic, Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? Cheers, Luke. From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Shankar Balasubramanian Sent: 19 April 2016 06:47 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM Question SW mode does not support failover. IW does, so this will not work. Best Regards, Shankar Balasubramanian AFM & Async DR Development IBM Systems Bangalore - Embassy Golf Links India From: Vic Cornell To: gpfsug main discussion list Date: 04/18/2016 07:13 PM Subject: [gpfsug-discuss] AFM Question Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? If it is not immediately obvious why this might be useful, see the following scenario: Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. The system hosting A fails and all data on fileset A is lost. Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. Admin uses mmafmctl to ?failover? AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? Cheers, Vic _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From viccornell at gmail.com Tue Apr 19 12:20:08 2016 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 19 Apr 2016 12:20:08 +0100 Subject: [gpfsug-discuss] AFM Question In-Reply-To: <201604191117.u3JBHYqi27525232@d28relay04.in.ibm.com> References: <201604190547.u3J5lAei41746534@d28relay04.in.ibm.com> <201604191117.u3JBHYqi27525232@d28relay04.in.ibm.com> Message-ID: <377D783D-27EE-4E40-9F23-047F73FAFDF4@gmail.com> Thanks Shankar - that was the bit I was looking for. Vic > On 19 Apr 2016, at 12:07, Shankar Balasubramanian wrote: > > You can disable snapshots creation on DR by simply disabling RPO feature on DR. > > > Best Regards, > Shankar Balasubramanian > AFM & Async DR Development > IBM Systems > Bangalore - Embassy Golf Links > India > > > > > > From: Vic Cornell > To: gpfsug main discussion list > Date: 04/19/2016 04:34 PM > Subject: Re: [gpfsug-discuss] AFM Question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Luke, > > The whole business of ?promoting? a cache from one type to another isn?t documented very well in the places that I am looking. I would be grateful to anyone with more info to share. > > I am in the process of investigating Async DR for new customers. It would just be useful to see what can be done with existing ones who have no interest in upgrading. > > Also Async DR means that I have to create snapshots (and worse delete them) on the ?working? side of a replication pair and this is something I?m not in a tearing hurry to do. > > > Regards, > > Vic > > On 19 Apr 2016, at 11:46, Luke Raimbach > wrote: > > Hi Shankar, Vic, > > Would it not be possible, once the original cache site is useable, to bring it up in local-update mode so that you can pre-fetch all the metadata from home? > > Once you are ready to do the switchover: stop writing to home, do a final sync of metadata, then ?promote? the local-update cache to a single-writer; continue writing new data in to the original cache. > > I am assuming the only reason you?d want to repopulate the SW cache with metadata is to prevent someone accidentally creating the same file after the disaster and overwriting the original at home without any knowledge? > > Cheers, > Luke. > <> > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org ] On Behalf Of Shankar Balasubramanian > Sent: 19 April 2016 06:47 > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] AFM Question > > SW mode does not support failover. IW does, so this will not work. > > > Best Regards, > Shankar Balasubramanian > AFM & Async DR Development > IBM Systems > Bangalore - Embassy Golf Links > India > > > > > > From: Vic Cornell > > To: gpfsug main discussion list > > Date: 04/18/2016 07:13 PM > Subject: [gpfsug-discuss] AFM Question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > Hi All, > Is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between HOME and CACHE in a single writer AFM relationship? > > If it is not immediately obvious why this might be useful, see the following scenario: > > Fileset A is a GPFS fileset which is acting as CACHE for a single writer HOME on fileset B located on a separate filesystem. > > The system hosting A fails and all data on fileset A is lost. > > Admin uses fileset B as a recovery volume and users read and write data to B until the system hosting A is recovered, albeit without data. > > Admin uses mmafmctl to ?failover? 
AFM relationship to a new fileset on A, all data are copied from B to A over time and users continue to access the data via B. > > So is there a bandwidth efficient way (downtime is allowed) to reverse the relationship between A and B such that the replication flow is as it was to start with? > > Cheers, > > Vic > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tortay at cc.in2p3.fr Tue Apr 19 14:43:53 2016 From: tortay at cc.in2p3.fr (Loic Tortay) Date: Tue, 19 Apr 2016 15:43:53 +0200 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <201604190602.u3J62bl314745928@d28relay02.in.ibm.com> References: <57111191.4050200@cc.in2p3.fr> <201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com> <5712087C.9060608@cc.in2p3.fr> <201604190602.u3J62bl314745928@d28relay02.in.ibm.com> Message-ID: <57163619.6000500@cc.in2p3.fr> On 04/19/2016 08:01 AM, Venkateswara R Puvvada wrote: > Hi, > > AFM usually logs the following message at gateway node if it cannot open > control file to read ACLs/EAs. > > AFM: Cannot find control file for file system fileset > in the exported file system at home. > ACLs and extended attributes will not be synchronized. > Sparse files will have zeros written for holes. > > If the above message didn't not appear in logs and if AFM failed to bring > ACLs, this may be a defect. Please open PMR with supporting traces to > debug this issue further. Thanks. > Hello, There is no such message on any node in the test cluster. I have opened a PMR (50962,650,706), the "gpfs.snap" output is on ecurep.ibm.com in "/toibm/linux/gpfs.snap.50962.650.706.tar". BTW, it would probably be useful if "gpfs.snap" avoided doing a "find /var/mmfs ..." on AFM gateway nodes (or used appropriate find options), since the NFS mountpoints for AFM are in "/var/mmfs/afm" and their content is scanned too. This can be quite time consuming, for instance our test setup has several million files in the home filesystem. The "offending" 'find' is the one at line 3014 in the version of gpfs.snap included with Spectrum Scale 4.1.1-5. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | From SAnderson at convergeone.com Tue Apr 19 18:56:25 2016 From: SAnderson at convergeone.com (Shaun Anderson) Date: Tue, 19 Apr 2016 17:56:25 +0000 Subject: [gpfsug-discuss] Hello from Idaho Message-ID: <12ff9317b22e40ffb7d56e11bab19a58@NACR502.nacr.com> My name is Shaun Anderson and I work for an IBM Business Partner in Boise, ID, USA. Our main vertical is Health-Care but we do other work in other sectors as well. 
My experience with GPFS has been via the storage product line (Sonas, V7kU) and now with ESS/Spectrum Archive. I stumbled upon SpectrumScale.org today and am glad to have found it while I prepare to implement a cNFS/CTDB(SAMBA) cluster. Shaun Anderson Storage Architect M 214.263.7014 o 208.577.2112 [http://info.spanlink.com/hubfs/Email_images/C1-EmailSignature-logo_160px.png] NOTICE: This email message and any attachments hereto may contain confidential information. Any unauthorized review, use, disclosure, or distribution of such information is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy the original message and all copies of it. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2323 bytes Desc: image001.png URL: From bbanister at jumptrading.com Tue Apr 19 19:00:53 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 19 Apr 2016 18:00:53 +0000 Subject: [gpfsug-discuss] Hello from Idaho In-Reply-To: <12ff9317b22e40ffb7d56e11bab19a58@NACR502.nacr.com> References: <12ff9317b22e40ffb7d56e11bab19a58@NACR502.nacr.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB0609E1E6@CHI-EXCHANGEW1.w2k.jumptrading.com> Hello Shaun, welcome to the list. If you haven't already see the new Cluster Export Services (CES) facility in 4.1.1-X and 4.2.X-X releases of Spectrum Scale, which provides cross-protocol support of clustered NFS/SMB/etc, then I would highly suggest looking at that as a fully-supported solution over CTDB w/ SAMBA. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Shaun Anderson Sent: Tuesday, April 19, 2016 12:56 PM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Hello from Idaho My name is Shaun Anderson and I work for an IBM Business Partner in Boise, ID, USA. Our main vertical is Health-Care but we do other work in other sectors as well. My experience with GPFS has been via the storage product line (Sonas, V7kU) and now with ESS/Spectrum Archive. I stumbled upon SpectrumScale.org today and am glad to have found it while I prepare to implement a cNFS/CTDB(SAMBA) cluster. Shaun Anderson Storage Architect M 214.263.7014 o 208.577.2112 [http://info.spanlink.com/hubfs/Email_images/C1-EmailSignature-logo_160px.png] NOTICE: This email message and any attachments hereto may contain confidential information. Any unauthorized review, use, disclosure, or distribution of such information is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy the original message and all copies of it. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2323 bytes Desc: image001.png URL: From vpuvvada at in.ibm.com Wed Apr 20 12:04:42 2016 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 20 Apr 2016 16:34:42 +0530 Subject: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" In-Reply-To: <57163619.6000500@cc.in2p3.fr> References: <57111191.4050200@cc.in2p3.fr><201604160924.u3G9NxHM22085888@d28relay04.in.ibm.com><5712087C.9060608@cc.in2p3.fr><201604190602.u3J62bl314745928@d28relay02.in.ibm.com> <57163619.6000500@cc.in2p3.fr> Message-ID: <201604201114.u3KBEnww50331902@d28relay01.in.ibm.com> Hi, There is an issue with gpfs.snap which scans AFM internal mounts. This is issue got fixed in later releases. To workaround this problem, 1. cp /usr/lpp/mmfs/bin/gpfs.snap /usr/lpp/mmfs/bin/gpfs.snap.orig 2. Change this line : ccrSnapExcludeListRaw=$($find /var/mmfs \ \( -name "proxy-server*" -o -name "keystone*" -o -name "openrc*" \) \ 2>/dev/null) to this: ccrSnapExcludeListRaw=$($find /var/mmfs -xdev \ \( -name "proxy-server*" -o -name "keystone*" -o -name "openrc*" \) \ 2>/dev/null) Regards, Venkat ------------------------------------------------------------------- Venkateswara R Puvvada/India/IBM at IBMIN vpuvvada at in.ibm.com +91-80-41777734 From: Loic Tortay To: gpfsug main discussion list Date: 04/19/2016 07:13 PM Subject: Re: [gpfsug-discuss] Extended attributes and ACLs with AFM-based "NFS migration" Sent by: gpfsug-discuss-bounces at spectrumscale.org On 04/19/2016 08:01 AM, Venkateswara R Puvvada wrote: > Hi, > > AFM usually logs the following message at gateway node if it cannot open > control file to read ACLs/EAs. > > AFM: Cannot find control file for file system fileset > in the exported file system at home. > ACLs and extended attributes will not be synchronized. > Sparse files will have zeros written for holes. > > If the above message didn't not appear in logs and if AFM failed to bring > ACLs, this may be a defect. Please open PMR with supporting traces to > debug this issue further. Thanks. > Hello, There is no such message on any node in the test cluster. I have opened a PMR (50962,650,706), the "gpfs.snap" output is on ecurep.ibm.com in "/toibm/linux/gpfs.snap.50962.650.706.tar". BTW, it would probably be useful if "gpfs.snap" avoided doing a "find /var/mmfs ..." on AFM gateway nodes (or used appropriate find options), since the NFS mountpoints for AFM are in "/var/mmfs/afm" and their content is scanned too. This can be quite time consuming, for instance our test setup has several million files in the home filesystem. The "offending" 'find' is the one at line 3014 in the version of gpfs.snap included with Spectrum Scale 4.1.1-5. Lo?c. -- | Lo?c Tortay - IN2P3 Computing Centre | _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 13:15:07 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 12:15:07 +0000 Subject: [gpfsug-discuss] mmbackup and filenames Message-ID: Hi, We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. >From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work? Thanks Simon From jonathan at buzzard.me.uk Wed Apr 20 13:28:18 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 13:28:18 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: Message-ID: <1461155298.1434.83.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, > on one we run CES/SMB and run a sync and share tool as well. This means we > sometimes end up with filenames containing characters like newline (e.g. > From OSX clients). Mmbackup fails on these filenames, any suggestions on > how we can get it to work? > OMG, it's like seven/eight years since I reported that as a bug in mmbackup and they *STILL* haven't fixed it!!! I bet it still breaks with back ticks and other wacko characters too. I seem to recall it failed with very long path lengths as well; specifically ones longer than MAX_PATH (google it MAX_PATH is not something you can rely on). Back then mmbackup would just fail completely and not back anything up. Is it still the same or is it just failing on the files with wacko characters? I concluded back then that mmbackup was not suitable for production use. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at us.ibm.com Wed Apr 20 13:38:21 2016 From: oehmes at us.ibm.com (Sven Oehme) Date: Wed, 20 Apr 2016 12:38:21 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: Message-ID: <201604201239.u3KCdrAb016643@d01av04.pok.ibm.com> Which version of gpfs are you running on this cluster ? Sent from IBM Verse Simon Thompson (Research Computing - IT Services) --- [gpfsug-discuss] mmbackup and filenames --- From:"Simon Thompson (Research Computing - IT Services)" To:gpfsug-discuss at spectrumscale.orgDate:Wed, Apr 20, 2016 5:15 AMSubject:[gpfsug-discuss] mmbackup and filenames Hi,We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems,on one we run CES/SMB and run a sync and share tool as well. This means wesometimes end up with filenames containing characters like newline (e.g.From OSX clients). Mmbackup fails on these filenames, any suggestions onhow we can get it to work?ThanksSimon_______________________________________________gpfsug-discuss mailing listgpfsug-discuss at spectrumscale.orghttp://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 13:42:16 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 12:42:16 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <201604201239.u3KCdrAb016643@d01av04.pok.ibm.com> References: , <201604201239.u3KCdrAb016643@d01av04.pok.ibm.com> Message-ID: This is a 4.2 cluster with 7.1.3 protect client. 
(Probably 4.2.0.0) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sven Oehme [oehmes at us.ibm.com] Sent: 20 April 2016 13:38 To: gpfsug main discussion list Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] mmbackup and filenames Which version of gpfs are you running on this cluster ? Sent from IBM Verse Simon Thompson (Research Computing - IT Services) --- [gpfsug-discuss] mmbackup and filenames --- From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug-discuss at spectrumscale.org Date: Wed, Apr 20, 2016 5:15 AM Subject: [gpfsug-discuss] mmbackup and filenames ________________________________ Hi, We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. >From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 20 15:42:29 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 20 Apr 2016 10:42:29 -0400 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: Message-ID: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. Each path must be specified on a single line. A line can contain only one path. Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 16:05:16 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 15:05:16 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <0F66BEED-E30F-410A-BE20-2F706A5BAC9B@vanderbilt.edu> All, I would like to see this issue get resolved as it has caused us problems as well. We recently had an issue that necessitated us restoring 9.6 million files (out of 260 million) in a filesystem. We were able to restore a little over 8 million of those files relatively easily, but more than a million have been problematic due to various special characters in the filenames. 
I think there needs to be a recognition that TSM is going to be asked to back up filesystems that are used by Windows and Mac clients via NFS, SAMBA/CTDB, CES, etc., and that the users of those clients cannot be expected to not choose filenames that Unix-savvy users would never in a million years choose. And since I had to write some scripts to generate md5sums of files we restored and therefore had to deal with things in filenames that had me asking ?what in the world were they thinking?!?", I fully recognize that this is not an easy nut to crack. My 2 cents worth? Kevin On Apr 20, 2016, at 9:42 AM, Marc A Kaplan > wrote: The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:15:10 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:15:10 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. 
Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 16:19:38 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 15:19:38 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:27:08 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:27:08 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... 
IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 16:28:47 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 15:28:47 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Well what a lame restriction... I don't understand why all IBM products don't have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... 
The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 20 16:35:04 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 20 Apr 2016 11:35:04 -0400 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <201604201535.u3KFZC28024194@d03av04.boulder.ibm.com> >From a computer science point of view, this is a simple matter of programming. Provide yet-another-option on filelist processing that supports encoding or escaping of special characters. Pick your poison! We and many others have worked through this issue and provided solutions in products apart from TSM. In Spectrum Scale Filesystem, we code filelists with escapes \n and \\. Or if you prefer, use the ESCAPE option. See the Advanced Admin Guide, on or near page 24 in the ILM chapter 2. IBM is a very large organization and sometimes, for some issues, customers have the best, most effective means of communicating requirements to particular product groups within IBM. 
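To illustrate the ESCAPE option Marc mentions, here is a minimal sketch of a policy that asks mmapplypolicy to percent-encode special characters in the file lists it generates. The filesystem name (gpfs0), the output prefix (/tmp/lists) and the exact placement of the ESCAPE clause are assumptions made for the example; check the ILM chapter of the Advanced Administration Guide for the syntax that matches your release.

  /* policy.rules -- put every file into an external list, percent-encoding
     control and other special characters in the generated list file */
  RULE 'ext' EXTERNAL LIST 'allfiles' EXEC '' ESCAPE '%'
  RULE 'all' LIST 'allfiles' WHERE TRUE

  # build the list without invoking any external script, leaving the result
  # under the /tmp/lists prefix (filesystem name and prefix are placeholders)
  mmapplypolicy gpfs0 -P policy.rules -I defer -f /tmp/lists

With something like that in place, a newline inside a path comes out as %0A on a single record, which is exactly the kind of convention mmbackup would need the Protect filelist option to accept on the way back in.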
-------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 20 16:41:00 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 20 Apr 2016 15:41:00 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction? I don?t understand why all IBM products don?t have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. 
* Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Apr 20 16:46:17 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 16:46:17 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> Message-ID: <1461167177.1434.89.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 15:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: [SNIP] > Who should we approach at IBM as a user community to get this on the > TSM fix list? > I personally raised this with IBM seven or eight years ago and was told that they where aware of the problem and it would be fixed. Clearly they have not fixed it or they did and then let it break again and thus have never heard of a unit test. The basic problem back then was that mmbackup used various standard Unix text processing utilities and was doomed to break if you put "special" but perfectly valid characters in your file names. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
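Jonathan's point is easy to demonstrate outside of mmbackup: the breakage is simply what happens when newline is both the record separator and a legal filename character. A small bash sketch (the /gpfs/test path is only a placeholder) shows the difference between line-oriented and NUL-delimited handling:

  # create a perfectly legal, if unpleasant, filename containing a newline
  touch $'/gpfs/test/bad\nname.txt'

  # line-oriented processing sees the single file as two bogus records
  find /gpfs/test -type f | while read -r f; do printf 'got: %q\n' "$f"; done

  # NUL-delimited processing keeps the embedded newline inside the name
  find /gpfs/test -type f -print0 | while IFS= read -r -d '' f; do printf 'got: %q\n' "$f"; done

Any filelist format that insists on one path per line with no control characters has the same problem, which is why some escaping convention, as asked for in the RFE above, is needed before such names can round-trip through a backup.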
From r.horton at imperial.ac.uk Wed Apr 20 16:58:54 2016 From: r.horton at imperial.ac.uk (Robert Horton) Date: Wed, 20 Apr 2016 16:58:54 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: Message-ID: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: > We use mmbackup with Spectrum Protect (TSM!) to backup our > file-systems, > on one we run CES/SMB and run a sync and share tool as well. This > means we > sometimes end up with filenames containing characters like newline > (e.g. > From OSX clients). Mmbackup fails on these filenames, any suggestions > on > how we can get it to work? I've not had to do do anything with TSM for a couple of years but when I did as a workaround to that I had a wrapper that called mmbackup and then parsed the output and for any files it couldn't handle due to non-ascii characters then called the tsm backup command directly on the whole directory. This does mean some stuff is getting backed up more than necessary but if it's only a handful of files it's a reasonable workaround. Rob -- Robert Horton HPC Systems Support Analyst Imperial College London +44 (0) 20 7594 5759 From scottcumbie at dynamixgroup.com Wed Apr 20 17:23:08 2016 From: scottcumbie at dynamixgroup.com (Scott Cumbie) Date: Wed, 20 Apr 2016 16:23:08 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> Message-ID: <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> You should open a PMR. This is not a ?feature? request, this is a failure of the code to work as it should. Scott Cumbie, Dynamix Group scottcumbie at dynamixgroup.com Office: (336) 765-9290 Cell: (336) 782-1590 On Apr 20, 2016, at 11:58 AM, Robert Horton > wrote: On Wed, 2016-04-20 at 12:15 +0000, Simon Thompson (Research Computing - IT Services) wrote: We use mmbackup with Spectrum Protect (TSM!) to backup our file-systems, on one we run CES/SMB and run a sync and share tool as well. This means we sometimes end up with filenames containing characters like newline (e.g. From OSX clients). Mmbackup fails on these filenames, any suggestions on how we can get it to work? I've not had to do do anything with TSM for a couple of years but when I did as a workaround to that I had a wrapper that called mmbackup and then parsed the output and for any files it couldn't handle due to non-ascii characters then called the tsm backup command directly on the whole directory. This does mean some stuff is getting backed up more than necessary but if it's only a handful of files it's a reasonable workaround. Rob -- Robert Horton HPC Systems Support Analyst Imperial College London +44 (0) 20 7594 5759 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan at buzzard.me.uk Wed Apr 20 19:26:27 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 20 Apr 2016 19:26:27 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> Message-ID: <5717C9D3.8050501@buzzard.me.uk> On 20/04/16 17:23, Scott Cumbie wrote: > You should open a PMR. This is not a ?feature? request, this is a > failure of the code to work as it should. > I did at least seven years ago. I shall see if I can find the reference in my old notebooks tomorrow. Unfortunately one has gone missing so I might not have the reference. I do however wonder if the newlines really are newlines and not some UTF multibyte character that looks like a newline when you parse it as ASCII/ISO-8859-1 or some other legacy encoding? In my experience you have to try really really hard to actually get a newline into a file name. Mostly because the GUI will interpret pressing the return/enter key to think you have finished typing the file name rather than inserting a newline into the file name. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From bbanister at jumptrading.com Wed Apr 20 19:28:54 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 18:28:54 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: References: , <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> , <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> I voted for this! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction... I don't understand why all IBM products don't have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go... somebody put it up and I'll vote for it! 
-B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 19:42:10 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 18:42:10 +0000 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <4F3BBBF1-34BF-4FE6-8FB4-D21430C4BFCE@vanderbilt.edu> Me too! And I have to say (and those of you in the U.S. will understand this best) that it was kind of nice to really *want* to cast a vote instead of saying, ?I sure wish ?none of the above? was an option?? ;-) Kevin On Apr 20, 2016, at 1:28 PM, Bryan Banister > wrote: I voted for this! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames OK, I might have managed to create a public RFE for this: https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:28 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Well what a lame restriction? I don?t understand why all IBM products don?t have public RFE options, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hm. I can only log a public RFE against Scale ... 
and this is a change to Protect ;-) Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] Sent: 20 April 2016 16:19 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Wednesday, April 20, 2016 10:15 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames Hi Mark, I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. Who should we approach at IBM as a user community to get this on the TSM fix list? Simon ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] Sent: 20 April 2016 15:42 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] mmbackup and filenames The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html ... The files (entries) listed in the filelist must adhere to the following rules: * Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. * Each path must be specified on a single line. A line can contain only one path. * Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). * By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Wed Apr 20 19:56:42 2016 From: viccornell at gmail.com (viccornell at gmail.com) Date: Wed, 20 Apr 2016 19:56:42 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <201604201444.u3KEiDEv021405@d03av04.boulder.ibm.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1954@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A1A68@CHI-EXCHANGEW1.w2k.jumptrading.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A2737@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <584AAC36-28C1-4138-893E-DFC00760C8B0@gmail.com> Me too. Sent from my iPhone > On 20 Apr 2016, at 19:28, Bryan Banister wrote: > > I voted for this! > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:41 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > OK, I might have managed to create a public RFE for this: > > https://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=87176 > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] > Sent: 20 April 2016 16:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Well what a lame restriction? 
I don?t understand why all IBM products don?t have public RFE options, > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:27 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Hm. I can only log a public RFE against Scale ... and this is a change to Protect ;-) > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bryan Banister [bbanister at jumptrading.com] > Sent: 20 April 2016 16:19 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > The Public RFE process sounds like a good way to go? somebody put it up and I?ll vote for it! > -B > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) > Sent: Wednesday, April 20, 2016 10:15 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > Hi Mark, > > I appreciate its a limitation of the TSM client, but Scale (IBM product) and Protect (IBM product) and preferred (?) method of backing it up ... > > I agree with Kevin that given the push for protocol support, and people will use filenames like this, IBM need to get it fixed. > > Who should we approach at IBM as a user community to get this on the TSM fix list? > > Simon > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com] > Sent: 20 April 2016 15:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] mmbackup and filenames > > The problem is that the Tivoli Storage Manager (Ahem Spectrum Protect) Filelist option has some limitations: > > http://www.ibm.com/support/knowledgecenter/SS8TDQ_7.1.0/com.ibm.itsm.client.doc/r_opt_filelist.html > > ... > The files (entries) listed in the filelist must adhere to the following rules: > Each entry must be a fully-qualified or a relative path to a file or directory. Note that if you include a directory in a filelist entry, the directory is backed up, but the contents of the directory are not. > Each path must be specified on a single line. A line can contain only one path. > Paths must not contain control characters, such as 0x18 (CTRL-X), 0x19 (CTRL-Y) and 0x0A (newline). > By default, paths must not contain wildcard characters. Do not include asterisk (*) or question marks (?) in a path. This restriction can be overridden if you enable the option named ... AND SO ON... > IF TSM would implement some way of encoding or "escaping" special characters in filelists, we would happily fix mmbackup ! > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 20:02:08 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 19:02:08 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 20 20:05:26 2016 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 20 Apr 2016 19:05:26 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: It?s there for sending data to support, primarily. But we do make use of it for report generation. -- Jonathan Fosburgh Principal Application Systems Analyst Storage Team IT Operations jfosburg at mdanderson.org (713) 745-9346 From: > on behalf of Bryan Banister > Reply-To: gpfsug main discussion list > Date: Wednesday, April 20, 2016 at 2:02 PM To: "gpfsug main discussion list (gpfsug-discuss at spectrumscale.org)" > Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dan.Foster at bristol.ac.uk Wed Apr 20 21:23:15 2016 From: Dan.Foster at bristol.ac.uk (Dan Foster) Date: Wed, 20 Apr 2016 21:23:15 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... 
game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: On 20 April 2016 at 20:02, Bryan Banister wrote: > Apparently, though not documented in man pages or any of the GPFS docs that > I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output columns > with your favorite bash/awk/python/magic. This is really useful, thanks for sharing! :) -- Dan Foster | Senior Storage Systems Administrator Advanced Computing Research Centre, University of Bristol From bevans at pixitmedia.com Wed Apr 20 21:38:42 2016 From: bevans at pixitmedia.com (Barry Evans) Date: Wed, 20 Apr 2016 21:38:42 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <5717E8D2.2080107@pixitmedia.com> If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS docs > that I?ve read (at least that I recall), there is a ?-Y? option to > many/most GPFS commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From duersch at us.ibm.com Wed Apr 20 21:43:11 2016 From: duersch at us.ibm.com (Steve Duersch) Date: Wed, 20 Apr 2016 16:43:11 -0400 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: References: Message-ID: We try our hardest to keep those columns static. Rarely are they changed. We are aware that folks are programming against them and we don't rearrange where things are. Steve Duersch Spectrum Scale (GPFS) FVTest IBM Poughkeepsie, New York >If you build a monitoring pipeline using -Y output, make sure you test >between revisions before upgrading. The columns do have a tendency to >change from time to time. > >Cheers, >Barry >On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS docs > that I?ve read (at least that I recall), there is a ?-Y? option to > many/most GPFS commands that provides output in machine readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 21:46:04 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 20:46:04 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717E8D2.2080107@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Wed Apr 20 22:12:10 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Wed, 20 Apr 2016 22:12:10 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <5717F0AA.8050901@pixitmedia.com> Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so that you can > still programmatically determine fields of interest? this is the best! > > I recommend adding ?-Y? option documentation to all supporting GPFS > commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > If you build a monitoring pipeline using -Y output, make sure you test > between revisions before upgrading. 
The columns do have a tendency to > change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any of the GPFS > docs that I?ve read (at least that I recall), there is a ?-Y? > option to many/most GPFS commands that provides output in machine > readable fashion?. > > That?s right kids? no more parsing obscure, often changed output > columns with your favorite bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, confidential or > privileged information. If you are not the intended recipient, you > are hereby notified that any review, dissemination or copying of > this email is strictly prohibited, and to please notify the sender > immediately and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or error-free. The > Company, therefore, does not make any guarantees as to the > completeness or accuracy of this email or any attachments. This > email is for informational purposes only and does not constitute a > recommendation, offer, request or solicitation of any kind to buy, > sell, subscribe, redeem or perform any type of transaction of a > financial product. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. 
If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Wed Apr 20 22:18:28 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Wed, 20 Apr 2016 22:18:28 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717F0AA.8050901@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> Message-ID: <5717F224.2010100@pixitmedia.com> So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since ... er > .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands supported > -Y, I might even FedEX beer. > > Jez > > > On 20/04/16 21:46, Bryan Banister wrote: >> >> What?s nice is that the ?-Y? output provides a HEADER so that you can >> still programmatically determine fields of interest? this is the best! >> >> I recommend adding ?-Y? option documentation to all supporting GPFS >> commands for others to be informed. >> >> -Bryan >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >> *Barry Evans >> *Sent:* Wednesday, April 20, 2016 3:39 PM >> *To:* gpfsug-discuss at spectrumscale.org >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> If you build a monitoring pipeline using -Y output, make sure you >> test between revisions before upgrading. The columns do have a >> tendency to change from time to time. >> >> Cheers, >> Barry >> >> On 20/04/2016 20:02, Bryan Banister wrote: >> >> Apparently, though not documented in man pages or any of the GPFS >> docs that I?ve read (at least that I recall), there is a ?-Y? >> option to many/most GPFS commands that provides output in machine >> readable fashion?. >> >> That?s right kids? no more parsing obscure, often changed output >> columns with your favorite bash/awk/python/magic. >> >> Why IBM would not document this is beyond me, >> >> -B >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, confidential or >> privileged information. 
If you are not the intended recipient, >> you are hereby notified that any review, dissemination or copying >> of this email is strictly prohibited, and to please notify the >> sender immediately and destroy this email and any attachments. >> Email transmission cannot be guaranteed to be secure or >> error-free. The Company, therefore, does not make any guarantees >> as to the completeness or accuracy of this email or any >> attachments. This email is for informational purposes only and >> does not constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, redeem or >> perform any type of transaction of a financial product. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> This email is confidential in that it is intended for the exclusive >> attention of the addressee(s) indicated. If you are not the intended >> recipient, this email should not be read or disclosed to any other >> person. Please notify the sender immediately and delete this email >> from your computer system. Any opinions expressed are not necessarily >> those of the company from which this email was sent and, whilst to >> the best of our knowledge no viruses or defects exist, no >> responsibility can be accepted for any loss or damage arising from >> its receipt or subsequent use of this email. >> >> >> ------------------------------------------------------------------------ >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, confidential or >> privileged information. If you are not the intended recipient, you >> are hereby notified that any review, dissemination or copying of this >> email is strictly prohibited, and to please notify the sender >> immediately and destroy this email and any attachments. Email >> transmission cannot be guaranteed to be secure or error-free. The >> Company, therefore, does not make any guarantees as to the >> completeness or accuracy of this email or any attachments. This email >> is for informational purposes only and does not constitute a >> recommendation, offer, request or solicitation of any kind to buy, >> sell, subscribe, redeem or perform any type of transaction of a >> financial product. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... 
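
Following on from Barry's advice to test between revisions and Jez's note that the header version has stayed at 0:1 for a long while, one way a monitoring pipeline can protect itself is to pin the HEADER records it was written against and flag any drift after an upgrade. A rough, hypothetical sketch follows; the expected string is just the mmlsfs header quoted in this thread, and the table would be populated from your own cluster.

# Sketch of the "test between revisions" idea: keep the HEADER records your
# pipeline was written against and warn when a GPFS upgrade changes them.
import subprocess

EXPECTED_HEADERS = {
    ('mmlsfs', 'all'): 'mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks:',
}

def changed_headers():
    drift = []
    for args, expected in EXPECTED_HEADERS.items():
        argv = ['/usr/lpp/mmfs/bin/' + args[0]] + list(args[1:]) + ['-Y']
        out = subprocess.check_output(argv, universal_newlines=True)
        headers = [line for line in out.splitlines() if ':HEADER:' in line]
        if expected not in headers:
            drift.append(args[0])
    return drift

if __name__ == '__main__':
    for name in changed_headers():
        print('WARNING: -Y header for %s changed; re-check your parsers' % name)
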
URL: From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 20 22:24:01 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 20 Apr 2016 21:24:01 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717F0AA.8050901@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> Message-ID: <3360F57F-BC94-4116-82F6-9E1CDFC2919F@vanderbilt.edu> All, Does the unit of measure for *all* fields default to the same as if you ran the command without ?-Y?? For example: mmlsquota:user:HEADER:version:reserved:reserved:filesystemName:quotaType:id:name:blockUsage:blockQuota:blockLimit:blockInDoubt:blockGrace:filesUsage:filesQuota:filesLimit:filesInDoubt:filesGrace:remarks:fid:filesetname: blockUsage, blockLimit, and blockInDoubt are in KB, which makes sense, since that?s the default. But what about blockGrace if a user is over quota? Will it also contain output in varying units of measure (?6 days? or ?2 hours? or ?expired?) just like without the ?-Y?? I think this points to Bryan being right ?-Y? should be documented somewhere / somehow. Thanks? Kevin On Apr 20, 2016, at 4:12 PM, Jez Tucker > wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What?s nice is that the ?-Y? output provides a HEADER so that you can still programmatically determine fields of interest? this is the best! I recommend adding ?-Y? option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
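
On Kevin's question about units: pending an authoritative answer, a defensive approach is to convert only the fields that are known to be KB counts and to treat the grace fields as opaque display strings. A hedged sketch, using the field names from the mmlsquota header Kevin pasted; the sample values are made up, and the KB assumption is exactly the point Kevin raises.

# Sketch of normalising a parsed mmlsquota -Y row. blockUsage/blockQuota/
# blockLimit/blockInDoubt are assumed to be KB, per the thread; blockGrace
# and filesGrace are left untouched because their rendering ("none",
# "6 days", "expired", ...) is the open question here.
KB_FIELDS = ('blockUsage', 'blockQuota', 'blockLimit', 'blockInDoubt')

def normalise_quota_row(row):
    """Add byte-denominated copies of the KB fields of an mmlsquota -Y row."""
    out = dict(row)
    for field in KB_FIELDS:
        if row.get(field, '').isdigit():
            out[field + 'Bytes'] = int(row[field]) * 1024
    return out

if __name__ == '__main__':
    sample = {'filesystemName': 'gpfs0', 'name': 'kevin',
              'blockUsage': '1048576', 'blockLimit': '2097152',
              'blockGrace': 'none'}   # made-up values for illustration
    print(normalise_quota_row(sample))
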
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at pixitmedia.com Wed Apr 20 22:58:27 2016 From: bevans at pixitmedia.com (Barry Evans) Date: Wed, 20 Apr 2016 22:58:27 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... 
game changer In-Reply-To: <5717F224.2010100@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> Message-ID: <5717FB83.6020805@pixitmedia.com> Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did the > original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: >> Indeed. >> >> jtucker at elmo:~$ mmlsfs all -Y >> mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: >> >> I must say I've not seen any headers increment above 0:1 since ... er >> .. 3.3(?), so they're pretty static. >> >> Now, if only mmlspool supported -Y ... or if _all_ commands supported >> -Y, I might even FedEX beer. >> >> Jez >> >> >> On 20/04/16 21:46, Bryan Banister wrote: >>> >>> What?s nice is that the ?-Y? output provides a HEADER so that you >>> can still programmatically determine fields of interest? this is the >>> best! >>> >>> I recommend adding ?-Y? option documentation to all supporting GPFS >>> commands for others to be informed. >>> >>> -Bryan >>> >>> *From:*gpfsug-discuss-bounces at spectrumscale.org >>> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >>> *Barry Evans >>> *Sent:* Wednesday, April 20, 2016 3:39 PM >>> *To:* gpfsug-discuss at spectrumscale.org >>> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >>> didn't... game changer >>> >>> If you build a monitoring pipeline using -Y output, make sure you >>> test between revisions before upgrading. The columns do have a >>> tendency to change from time to time. >>> >>> Cheers, >>> Barry >>> >>> On 20/04/2016 20:02, Bryan Banister wrote: >>> >>> Apparently, though not documented in man pages or any of the >>> GPFS docs that I?ve read (at least that I recall), there is a >>> ?-Y? option to many/most GPFS commands that provides output in >>> machine readable fashion?. >>> >>> That?s right kids? no more parsing obscure, often changed output >>> columns with your favorite bash/awk/python/magic. >>> >>> Why IBM would not document this is beyond me, >>> >>> -B >>> >>> ------------------------------------------------------------------------ >>> >>> >>> Note: This email is for the confidential use of the named >>> addressee(s) only and may contain proprietary, confidential or >>> privileged information. If you are not the intended recipient, >>> you are hereby notified that any review, dissemination or >>> copying of this email is strictly prohibited, and to please >>> notify the sender immediately and destroy this email and any >>> attachments. Email transmission cannot be guaranteed to be >>> secure or error-free. The Company, therefore, does not make any >>> guarantees as to the completeness or accuracy of this email or >>> any attachments. This email is for informational purposes only >>> and does not constitute a recommendation, offer, request or >>> solicitation of any kind to buy, sell, subscribe, redeem or >>> perform any type of transaction of a financial product. 
>>> >>> >>> >>> _______________________________________________ >>> >>> gpfsug-discuss mailing list >>> >>> gpfsug-discuss at spectrumscale.org >>> >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> This email is confidential in that it is intended for the exclusive >>> attention of the addressee(s) indicated. If you are not the intended >>> recipient, this email should not be read or disclosed to any other >>> person. Please notify the sender immediately and delete this email >>> from your computer system. Any opinions expressed are not >>> necessarily those of the company from which this email was sent and, >>> whilst to the best of our knowledge no viruses or defects exist, no >>> responsibility can be accepted for any loss or damage arising from >>> its receipt or subsequent use of this email. >>> >>> >>> ------------------------------------------------------------------------ >>> >>> Note: This email is for the confidential use of the named >>> addressee(s) only and may contain proprietary, confidential or >>> privileged information. If you are not the intended recipient, you >>> are hereby notified that any review, dissemination or copying of >>> this email is strictly prohibited, and to please notify the sender >>> immediately and destroy this email and any attachments. Email >>> transmission cannot be guaranteed to be secure or error-free. The >>> Company, therefore, does not make any guarantees as to the >>> completeness or accuracy of this email or any attachments. This >>> email is for informational purposes only and does not constitute a >>> recommendation, offer, request or solicitation of any kind to buy, >>> sell, subscribe, redeem or perform any type of transaction of a >>> financial product. >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com > > -- > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. 
Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 23:02:50 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 22:02:50 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717FB83.6020805@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A3684@CHI-EXCHANGEW1.w2k.jumptrading.com> That's a separate topic from having GPFS CLI commands output machine readable format, -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 4:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. 
Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Sanchez at deshaw.com Wed Apr 20 23:06:18 2016 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 20 Apr 2016 22:06:18 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <5717FB83.6020805@pixitmedia.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> Message-ID: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn't have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either -Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. 
Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Apr 20 23:08:39 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 20 Apr 2016 22:08:39 +0000 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Sounds like a candidate for the GPFS UG Git Hub!! 
https://github.com/gpfsug/gpfsug-tools -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Sanchez, Paul Sent: Wednesday, April 20, 2016 5:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn't have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either -Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What's nice is that the "-Y" output provides a HEADER so that you can still programmatically determine fields of interest... this is the best! I recommend adding "-Y" option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I've read (at least that I recall), there is a "-Y" option to many/most GPFS commands that provides output in machine readable fashion.... That's right kids... no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [http://pixitmedia.com/sig/sig-cio.jpg] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. 
Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jtucker at pixitmedia.com Thu Apr 21 01:05:39 2016 From: jtucker at pixitmedia.com (Jez Tucker) Date: Thu, 21 Apr 2016 01:05:39 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <57181953.9090506@pixitmedia.com> I'd suggest you attend the UK UG in May then ... ref Agenda: http://www.gpfsug.org/may-2016-uk-user-group/ On 20/04/16 23:08, Bryan Banister wrote: > > Sounds like a candidate for the GPFS UG Git Hub!! > > https://github.com/gpfsug/gpfsug-tools > > -B > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of > *Sanchez, Paul > *Sent:* Wednesday, April 20, 2016 5:06 PM > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > +1 to a real python API. > > We have written our own, albeit incomplete, library to expose most of > what we need. We would be happy to share some general ideas on what > should be included, but a real IBM implementation wouldn?t have to do > what we did. (Think lots of subprocess.Popen + subprocess.communicate > and shredding the output of mm commands. And yes, we wrote a parser > which could shred the output of either ?Y or tabular format.) > > Thx > > Paul > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 5:58 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... 
game changer > > Someone should just make a python API that just abstracts all of this > > On 20/04/2016 22:18, Jez Tucker wrote: > > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did > the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: > > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since > ... er .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands > supported -Y, I might even FedEX beer. > > Jez > > On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so > that you can still programmatically determine fields of > interest? this is the best! > > I recommend adding ?-Y? option documentation to all > supporting GPFS commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On > Behalf Of *Barry Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? > I sure didn't... game changer > > If you build a monitoring pipeline using -Y output, make > sure you test between revisions before upgrading. The > columns do have a tendency to change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any > of the GPFS docs that I?ve read (at least that I > recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable > fashion?. > > That?s right kids? no more parsing obscure, often > changed output columns with your favorite > bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the > named addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not > the intended recipient, you are hereby notified that > any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender > immediately and destroy this email and any > attachments. Email transmission cannot be guaranteed > to be secure or error-free. The Company, therefore, > does not make any guarantees as to the completeness or > accuracy of this email or any attachments. This email > is for informational purposes only and does not > constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, > redeem or perform any type of transaction of a > financial product. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you > are not the intended recipient, this email should not be > read or disclosed to any other person. 
Please notify the > sender immediately and delete this email from your > computer system. Any opinions expressed are not > necessarily those of the company from which this email was > sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for > any loss or damage arising from its receipt or subsequent > use of this email. > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not the > intended recipient, you are hereby notified that any > review, dissemination or copying of this email is strictly > prohibited, and to please notify the sender immediately > and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or > error-free. The Company, therefore, does not make any > guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational > purposes only and does not constitute a recommendation, > offer, request or solicitation of any kind to buy, sell, > subscribe, redeem or perform any type of transaction of a > financial product. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you are not > the intended recipient, this email should not be read or disclosed > to any other person. Please notify the sender immediately and > delete this email from your computer system. Any opinions > expressed are not necessarily those of the company from which this > email was sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for any loss > or damage arising from its receipt or subsequent use of this email. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Barry Evans > Technical Director & Co-Founder > Pixit Media > > http://www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. 
If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jez.tucker at gpfsug.org Thu Apr 21 01:10:07 2016 From: jez.tucker at gpfsug.org (Jez Tucker) Date: Thu, 21 Apr 2016 01:10:07 +0100 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <57181A5F.4070909@gpfsug.org> Btw. If anyone wants to add anything to the UG github, just send a pull request. Jez On 20/04/16 23:08, Bryan Banister wrote: > > Sounds like a candidate for the GPFS UG Git Hub!! > > https://github.com/gpfsug/gpfsug-tools > > -B > > *From:*gpfsug-discuss-bounces at spectrumscale.org > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of > *Sanchez, Paul > *Sent:* Wednesday, April 20, 2016 5:06 PM > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > +1 to a real python API. > > We have written our own, albeit incomplete, library to expose most of > what we need. We would be happy to share some general ideas on what > should be included, but a real IBM implementation wouldn?t have to do > what we did. (Think lots of subprocess.Popen + subprocess.communicate > and shredding the output of mm commands. And yes, we wrote a parser > which could shred the output of either ?Y or tabular format.) 
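As an illustration of the header-driven parsing Paul describes, a minimal Python sketch (not the library mentioned above; the record layout follows the mmlsfs example earlier in the thread, and the exact fieldName strings and any escaping of ':' inside values are assumptions it does not handle):

    import subprocess

    def parse_y(cmd):
        """Run an mm command with -Y and return one dict per data record,
        keyed by the field names taken from the matching HEADER record."""
        out = subprocess.check_output(cmd, universal_newlines=True)
        headers, records = {}, []
        for line in out.splitlines():
            cols = line.split(':')
            if len(cols) < 3:
                continue
            key = (cols[0], cols[1])          # record type, e.g. ('mmlsfs', '')
            if cols[2] == 'HEADER':
                headers[key] = cols           # remember the column names for this type
            elif key in headers:
                records.append(dict(zip(headers[key], cols)))
        return records

    # e.g. print one attribute of every filesystem; take the fieldName values
    # from whatever your own release actually emits
    for rec in parse_y(['/usr/lpp/mmfs/bin/mmlsfs', 'all', '-Y']):
        if rec.get('fieldName') == 'blockSize':
            print('%s %s' % (rec.get('deviceName'), rec.get('data')))

Because the fields are looked up by name from the HEADER record rather than by position, this style survives columns being added between releases, which is the caveat Barry raises above.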
> > Thx > > Paul > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry > Evans > *Sent:* Wednesday, April 20, 2016 5:58 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure > didn't... game changer > > Someone should just make a python API that just abstracts all of this > > On 20/04/2016 22:18, Jez Tucker wrote: > > So mmlspool does in 4.1.1.3... perhaps my memory fails me. > I'm pretty certain Yuri told me that mmlspool was completely > unsupported and didn't have -Y a couple of years ago when we did > the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. > > Perhaps in light of the mmbackup thread; "Will fix RFEs for > cookies?". Name your price ;-) > > Jez > > On 20/04/16 22:12, Jez Tucker wrote: > > Indeed. > > jtucker at elmo:~$ mmlsfs all -Y > mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: > > I must say I've not seen any headers increment above 0:1 since > ... er .. 3.3(?), so they're pretty static. > > Now, if only mmlspool supported -Y ... or if _all_ commands > supported -Y, I might even FedEX beer. > > Jez > > On 20/04/16 21:46, Bryan Banister wrote: > > What?s nice is that the ?-Y? output provides a HEADER so > that you can still programmatically determine fields of > interest? this is the best! > > I recommend adding ?-Y? option documentation to all > supporting GPFS commands for others to be informed. > > -Bryan > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On > Behalf Of *Barry Evans > *Sent:* Wednesday, April 20, 2016 3:39 PM > *To:* gpfsug-discuss at spectrumscale.org > > *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? > I sure didn't... game changer > > If you build a monitoring pipeline using -Y output, make > sure you test between revisions before upgrading. The > columns do have a tendency to change from time to time. > > Cheers, > Barry > > On 20/04/2016 20:02, Bryan Banister wrote: > > Apparently, though not documented in man pages or any > of the GPFS docs that I?ve read (at least that I > recall), there is a ?-Y? option to many/most GPFS > commands that provides output in machine readable > fashion?. > > That?s right kids? no more parsing obscure, often > changed output columns with your favorite > bash/awk/python/magic. > > Why IBM would not document this is beyond me, > > -B > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the > named addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not > the intended recipient, you are hereby notified that > any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender > immediately and destroy this email and any > attachments. Email transmission cannot be guaranteed > to be secure or error-free. The Company, therefore, > does not make any guarantees as to the completeness or > accuracy of this email or any attachments. This email > is for informational purposes only and does not > constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, > redeem or perform any type of transaction of a > financial product. 
> > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you > are not the intended recipient, this email should not be > read or disclosed to any other person. Please notify the > sender immediately and delete this email from your > computer system. Any opinions expressed are not > necessarily those of the company from which this email was > sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for > any loss or damage arising from its receipt or subsequent > use of this email. > > ------------------------------------------------------------------------ > > > Note: This email is for the confidential use of the named > addressee(s) only and may contain proprietary, > confidential or privileged information. If you are not the > intended recipient, you are hereby notified that any > review, dissemination or copying of this email is strictly > prohibited, and to please notify the sender immediately > and destroy this email and any attachments. Email > transmission cannot be guaranteed to be secure or > error-free. The Company, therefore, does not make any > guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational > purposes only and does not constitute a recommendation, > offer, request or solicitation of any kind to buy, sell, > subscribe, redeem or perform any type of transaction of a > financial product. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > -- > > Jez Tucker > Head of Research & Development > Pixit Media > Mobile: +44 (0) 776 419 3820 > www.pixitmedia.com > > This email is confidential in that it is intended for the > exclusive attention of the addressee(s) indicated. If you are not > the intended recipient, this email should not be read or disclosed > to any other person. Please notify the sender immediately and > delete this email from your computer system. Any opinions > expressed are not necessarily those of the company from which this > email was sent and, whilst to the best of our knowledge no viruses > or defects exist, no responsibility can be accepted for any loss > or damage arising from its receipt or subsequent use of this email. > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > > Barry Evans > Technical Director & Co-Founder > Pixit Media > > http://www.pixitmedia.com > > This email is confidential in that it is intended for the exclusive > attention of the addressee(s) indicated. If you are not the intended > recipient, this email should not be read or disclosed to any other > person. Please notify the sender immediately and delete this email > from your computer system. 
Any opinions expressed are not necessarily > those of the company from which this email was sent and, whilst to the > best of our knowledge no viruses or defects exist, no responsibility > can be accepted for any loss or damage arising from its receipt or > subsequent use of this email. > > > ------------------------------------------------------------------------ > > Note: This email is for the confidential use of the named addressee(s) > only and may contain proprietary, confidential or privileged > information. If you are not the intended recipient, you are hereby > notified that any review, dissemination or copying of this email is > strictly prohibited, and to please notify the sender immediately and > destroy this email and any attachments. Email transmission cannot be > guaranteed to be secure or error-free. The Company, therefore, does > not make any guarantees as to the completeness or accuracy of this > email or any attachments. This email is for informational purposes > only and does not constitute a recommendation, offer, request or > solicitation of any kind to buy, sell, subscribe, redeem or perform > any type of transaction of a financial product. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stijn.deweirdt at ugent.be Thu Apr 21 07:49:03 2016 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 21 Apr 2016 08:49:03 +0200 Subject: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <57181A5F.4070909@gpfsug.org> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com> <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A37BF@CHI-EXCHANGEW1.w2k.jumptrading.com> <57181A5F.4070909@gpfsug.org> Message-ID: <571877DF.6070600@ugent.be> we have a parser, but not an actual API, in case someone is interested. https://github.com/hpcugent/vsc-filesystems/blob/master/lib/vsc/filesystem/gpfs.py anyway, from my experience, the best man page for the mm* commands is reading the bash scripts themself, they often contain other useful but undocumented options ;) stijn On 04/21/2016 02:10 AM, Jez Tucker wrote: > Btw. If anyone wants to add anything to the UG github, just send a pull > request. > > Jez > > On 20/04/16 23:08, Bryan Banister wrote: >> >> Sounds like a candidate for the GPFS UG Git Hub!! >> >> https://github.com/gpfsug/gpfsug-tools >> >> -B >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of >> *Sanchez, Paul >> *Sent:* Wednesday, April 20, 2016 5:06 PM >> *To:* gpfsug main discussion list >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> +1 to a real python API. >> >> We have written our own, albeit incomplete, library to expose most of >> what we need. We would be happy to share some general ideas on what >> should be included, but a real IBM implementation wouldn?t have to do >> what we did. (Think lots of subprocess.Popen + subprocess.communicate >> and shredding the output of mm commands. 
And yes, we wrote a parser >> which could shred the output of either ?Y or tabular format.) >> >> Thx >> >> Paul >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of *Barry >> Evans >> *Sent:* Wednesday, April 20, 2016 5:58 PM >> *To:* gpfsug-discuss at spectrumscale.org >> >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure >> didn't... game changer >> >> Someone should just make a python API that just abstracts all of this >> >> On 20/04/2016 22:18, Jez Tucker wrote: >> >> So mmlspool does in 4.1.1.3... perhaps my memory fails me. >> I'm pretty certain Yuri told me that mmlspool was completely >> unsupported and didn't have -Y a couple of years ago when we did >> the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. >> >> Perhaps in light of the mmbackup thread; "Will fix RFEs for >> cookies?". Name your price ;-) >> >> Jez >> >> On 20/04/16 22:12, Jez Tucker wrote: >> >> Indeed. >> >> jtucker at elmo:~$ mmlsfs all -Y >> >> mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: >> >> >> I must say I've not seen any headers increment above 0:1 since >> ... er .. 3.3(?), so they're pretty static. >> >> Now, if only mmlspool supported -Y ... or if _all_ commands >> supported -Y, I might even FedEX beer. >> >> Jez >> >> On 20/04/16 21:46, Bryan Banister wrote: >> >> What?s nice is that the ?-Y? output provides a HEADER so >> that you can still programmatically determine fields of >> interest? this is the best! >> >> I recommend adding ?-Y? option documentation to all >> supporting GPFS commands for others to be informed. >> >> -Bryan >> >> *From:*gpfsug-discuss-bounces at spectrumscale.org >> >> [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On >> Behalf Of *Barry Evans >> *Sent:* Wednesday, April 20, 2016 3:39 PM >> *To:* gpfsug-discuss at spectrumscale.org >> >> *Subject:* Re: [gpfsug-discuss] Did you know about "-Y" ?? >> I sure didn't... game changer >> >> If you build a monitoring pipeline using -Y output, make >> sure you test between revisions before upgrading. The >> columns do have a tendency to change from time to time. >> >> Cheers, >> Barry >> >> On 20/04/2016 20:02, Bryan Banister wrote: >> >> Apparently, though not documented in man pages or any >> of the GPFS docs that I?ve read (at least that I >> recall), there is a ?-Y? option to many/most GPFS >> commands that provides output in machine readable >> fashion?. >> >> That?s right kids? no more parsing obscure, often >> changed output columns with your favorite >> bash/awk/python/magic. >> >> Why IBM would not document this is beyond me, >> >> -B >> >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the >> named addressee(s) only and may contain proprietary, >> confidential or privileged information. If you are not >> the intended recipient, you are hereby notified that >> any review, dissemination or copying of this email is >> strictly prohibited, and to please notify the sender >> immediately and destroy this email and any >> attachments. Email transmission cannot be guaranteed >> to be secure or error-free. The Company, therefore, >> does not make any guarantees as to the completeness or >> accuracy of this email or any attachments. 
This email >> is for informational purposes only and does not >> constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, >> redeem or perform any type of transaction of a >> financial product. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> This email is confidential in that it is intended for the >> exclusive attention of the addressee(s) indicated. If you >> are not the intended recipient, this email should not be >> read or disclosed to any other person. Please notify the >> sender immediately and delete this email from your >> computer system. Any opinions expressed are not >> necessarily those of the company from which this email was >> sent and, whilst to the best of our knowledge no viruses >> or defects exist, no responsibility can be accepted for >> any loss or damage arising from its receipt or subsequent >> use of this email. >> >> >> ------------------------------------------------------------------------ >> >> >> Note: This email is for the confidential use of the named >> addressee(s) only and may contain proprietary, >> confidential or privileged information. If you are not the >> intended recipient, you are hereby notified that any >> review, dissemination or copying of this email is strictly >> prohibited, and to please notify the sender immediately >> and destroy this email and any attachments. Email >> transmission cannot be guaranteed to be secure or >> error-free. The Company, therefore, does not make any >> guarantees as to the completeness or accuracy of this >> email or any attachments. This email is for informational >> purposes only and does not constitute a recommendation, >> offer, request or solicitation of any kind to buy, sell, >> subscribe, redeem or perform any type of transaction of a >> financial product. >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com >> >> -- >> Jez Tucker >> Head of Research & Development >> Pixit Media >> Mobile: +44 (0) 776 419 3820 >> www.pixitmedia.com >> >> This email is confidential in that it is intended for the >> exclusive attention of the addressee(s) indicated. If you are not >> the intended recipient, this email should not be read or disclosed >> to any other person. Please notify the sender immediately and >> delete this email from your computer system. Any opinions >> expressed are not necessarily those of the company from which this >> email was sent and, whilst to the best of our knowledge no viruses >> or defects exist, no responsibility can be accepted for any loss >> or damage arising from its receipt or subsequent use of this email. >> >> >> >> _______________________________________________ >> >> gpfsug-discuss mailing list >> >> gpfsug-discuss at spectrumscale.org >> >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> -- >> >> Barry Evans >> Technical Director & Co-Founder >> Pixit Media >> >> http://www.pixitmedia.com >> >> This email is confidential in that it is intended for the exclusive >> attention of the addressee(s) indicated. If you are not the intended >> recipient, this email should not be read or disclosed to any other >> person. 
Please notify the sender immediately and delete this email >> from your computer system. Any opinions expressed are not necessarily >> those of the company from which this email was sent and, whilst to the >> best of our knowledge no viruses or defects exist, no responsibility >> can be accepted for any loss or damage arising from its receipt or >> subsequent use of this email. >> >> >> ------------------------------------------------------------------------ >> >> Note: This email is for the confidential use of the named addressee(s) >> only and may contain proprietary, confidential or privileged >> information. If you are not the intended recipient, you are hereby >> notified that any review, dissemination or copying of this email is >> strictly prohibited, and to please notify the sender immediately and >> destroy this email and any attachments. Email transmission cannot be >> guaranteed to be secure or error-free. The Company, therefore, does >> not make any guarantees as to the completeness or accuracy of this >> email or any attachments. This email is for informational purposes >> only and does not constitute a recommendation, offer, request or >> solicitation of any kind to buy, sell, subscribe, redeem or perform >> any type of transaction of a financial product. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From mweil at genome.wustl.edu Thu Apr 21 16:31:03 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Thu, 21 Apr 2016 10:31:03 -0500 Subject: [gpfsug-discuss] PMR 78846,122,000 Message-ID: <5718F237.4040705@genome.wustl.edu> Apr 21 07:41:53 linuscs88 mmfs: Shutting down abnormally due to error in /project/sprelfks1/build/rfks1s007a/src/avs/fs/mmfs/ts/tm/tree.C line 1025 retCode 12, reasonCode 56 any ideas? ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From jonathan at buzzard.me.uk Thu Apr 21 16:51:01 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 21 Apr 2016 16:51:01 +0100 Subject: [gpfsug-discuss] mmbackup and filenames In-Reply-To: <5717C9D3.8050501@buzzard.me.uk> References: <1461167934.4298.44.camel@cc-rhorton.ad.ic.ac.uk> <44DCA871-EA98-4677-93F9-EDB938D0836F@dynamixgroup.com> <5717C9D3.8050501@buzzard.me.uk> Message-ID: <1461253861.1434.110.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-20 at 19:26 +0100, Jonathan Buzzard wrote: > On 20/04/16 17:23, Scott Cumbie wrote: > > You should open a PMR. This is not a ?feature? request, this is a > > failure of the code to work as it should. > > > > I did at least seven years ago. I shall see if I can find the reference > in my old notebooks tomorrow. 
Unfortunately one has gone missing so I > might not have the reference. > PMR 30456 is what I have written in my notebook, with a date of 11th June 2009, all under a title of "mmbackup is busted". Though I guess IBM might claim that not backing up the file is a fix because back then mmbackup would crash out completely and not backup anything at all. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From russell.steffen1 at navy.mil Thu Apr 21 22:25:30 2016 From: russell.steffen1 at navy.mil (Steffen, Russell CIV FNMOC, N63) Date: Thu, 21 Apr 2016 21:25:30 +0000 Subject: [gpfsug-discuss] [Non-DoD Source] Re: Did you know about "-Y" ?? I sure didn't... game changer In-Reply-To: <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> References: <21BC488F0AEA2245B2C3E83FC0B33DBB060A28ED@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717E8D2.2080107@pixitmedia.com> <21BC488F0AEA2245B2C3E83FC0B33DBB060A31FF@CHI-EXCHANGEW1.w2k.jumptrading.com> <5717F0AA.8050901@pixitmedia.com> <5717F224.2010100@pixitmedia.com> <5717FB83.6020805@pixitmedia.com>, <2966141ec43f4f0eb0c2e890d3b99caf@mbxtoa1.winmail.deshaw.com> Message-ID: <366F49EE121F9F488D7EA78AA37C01620DF75583@NAWEMUGUXM01V.nadsuswe.nads.navy.mil> Last year I wrote a python package to plot the I/O volume our clusters were generating. In order to do that I ended up reverse-engineering the mmsdrfs file format so that I could determine which NSDs were in which filesystems and served by which NSD servers - basic cluster topology. Everything I was able to figure out is in this python module: https://bitbucket.org/rrs42/iographer/src/6d410073fc39b448a4742da7bb1a9ecf258d611c/iographer/GPFS.py?at=master&fileviewer=file-view-default And if anyone is interested in the package the repository is hosted here: https://bitbucket.org/rrs42/iographer -- Russell Steffen HPC Systems Analyst/Systems Administrator, N63 Fleet Numerical Meteorology and Oceanograph Center russell.steffen1 at navy.mil, Phone 831-656-4218 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sanchez, Paul [Paul.Sanchez at deshaw.com] Sent: Wednesday, April 20, 2016 3:06 PM To: gpfsug main discussion list Subject: [Non-DoD Source] Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer +1 to a real python API. We have written our own, albeit incomplete, library to expose most of what we need. We would be happy to share some general ideas on what should be included, but a real IBM implementation wouldn?t have to do what we did. (Think lots of subprocess.Popen + subprocess.communicate and shredding the output of mm commands. And yes, we wrote a parser which could shred the output of either ?Y or tabular format.) Thx Paul From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 5:58 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer Someone should just make a python API that just abstracts all of this On 20/04/2016 22:18, Jez Tucker wrote: So mmlspool does in 4.1.1.3... perhaps my memory fails me. I'm pretty certain Yuri told me that mmlspool was completely unsupported and didn't have -Y a couple of years ago when we did the original GPFS UG RFEs prior to 4.x. I figure that earns cookies. Perhaps in light of the mmbackup thread; "Will fix RFEs for cookies?". 
Name your price ;-) Jez On 20/04/16 22:12, Jez Tucker wrote: Indeed. jtucker at elmo:~$ mmlsfs all -Y mmlsfs::HEADER:version:reserved:reserved:deviceName:fieldName:data:remarks: I must say I've not seen any headers increment above 0:1 since ... er .. 3.3(?), so they're pretty static. Now, if only mmlspool supported -Y ... or if _all_ commands supported -Y, I might even FedEX beer. Jez On 20/04/16 21:46, Bryan Banister wrote: What?s nice is that the ?-Y? output provides a HEADER so that you can still programmatically determine fields of interest? this is the best! I recommend adding ?-Y? option documentation to all supporting GPFS commands for others to be informed. -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Barry Evans Sent: Wednesday, April 20, 2016 3:39 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Did you know about "-Y" ?? I sure didn't... game changer If you build a monitoring pipeline using -Y output, make sure you test between revisions before upgrading. The columns do have a tendency to change from time to time. Cheers, Barry On 20/04/2016 20:02, Bryan Banister wrote: Apparently, though not documented in man pages or any of the GPFS docs that I?ve read (at least that I recall), there is a ?-Y? option to many/most GPFS commands that provides output in machine readable fashion?. That?s right kids? no more parsing obscure, often changed output columns with your favorite bash/awk/python/magic. Why IBM would not document this is beyond me, -B ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com -- Jez Tucker Head of Research & Development Pixit Media Mobile: +44 (0) 776 419 3820 www.pixitmedia.com [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media http://www.pixitmedia.com [X] This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. From chair at spectrumscale.org Fri Apr 22 08:38:55 2016 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Fri, 22 Apr 2016 08:38:55 +0100 Subject: [gpfsug-discuss] ISC June Meeting Message-ID: Hi All, IBM are hoping to put together a short agenda for a meeting at ISC in June this year. They have asked if there are any US based people likely to be attending who would be interested in giving a talk at the ISC, Germany meeting. If you are US based and planning to attend, please let me know and I'll put you in touch with the right people. Its likely to be on the Monday at the start of ISC, further details when its all sorted! Thanks Simon From Kevin.Buterbaugh at Vanderbilt.Edu Fri Apr 22 16:43:00 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 22 Apr 2016 15:43:00 +0000 Subject: [gpfsug-discuss] make InstallImages errors Message-ID: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root at testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1 make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' /usr/lpp/mmfs/src root at testnsd3# However, they don?t seem to actually impact anything ? i.e. GPFS starts up just fine on the box and the upgrade is apparently successful: /root root at testnsd3# mmgetstate Node number Node name GPFS state ------------------------------------------ 3 testnsd3 active /root root at testnsd3# mmdiag --version === mmdiag: version === Current GPFS build: "4.2.0.2 ". Built on Mar 7 2016 at 10:28:55 Running 5 minutes 5 secs /root root at testnsd3# So just to satisfy my own curiosity, has anyone else seen this and can anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Apr 22 20:52:35 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 22 Apr 2016 19:52:35 +0000 Subject: [gpfsug-discuss] make InstallImages errors In-Reply-To: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> References: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Message-ID: Did you do a kernel upgrade as well? I've seen similar when you get dangling symlinks in the weak updates kernel module directory. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 22 April 2016 16:43 To: gpfsug main discussion list Subject: [gpfsug-discuss] make InstallImages errors Hi All, We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the following errors: /usr/lpp/mmfs/src root at testnsd3# make InstallImages (cd gpl-linux; /usr/bin/make InstallImages; \ exit $?) || exit 1 make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' /usr/lpp/mmfs/src root at testnsd3# However, they don?t seem to actually impact anything ? i.e. GPFS starts up just fine on the box and the upgrade is apparently successful: /root root at testnsd3# mmgetstate Node number Node name GPFS state ------------------------------------------ 3 testnsd3 active /root root at testnsd3# mmdiag --version === mmdiag: version === Current GPFS build: "4.2.0.2 ". Built on Mar 7 2016 at 10:28:55 Running 5 minutes 5 secs /root root at testnsd3# So just to satisfy my own curiosity, has anyone else seen this and can anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? Kevin ? 
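If it is the leftover weak-updates links, something along these lines (a sketch only; the paths assume the running kernel and GNU find) will list and then clear the strays before re-running make InstallImages:

    # list any broken symlinks left behind by the old GPFS modules
    find /lib/modules/$(uname -r)/weak-updates -xtype l

    # remove them and rebuild the module dependency map
    find /lib/modules/$(uname -r)/weak-updates -xtype l -delete
    depmod -a

After that the depmod ERROR lines for mmfs26.ko, mmfslinux.ko and tracedev.ko should no longer appear.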
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 From ewahl at osc.edu Fri Apr 22 21:12:20 2016 From: ewahl at osc.edu (Edward Wahl) Date: Fri, 22 Apr 2016 16:12:20 -0400 Subject: [gpfsug-discuss] make InstallImages errors In-Reply-To: References: <2EFCAC8D-5725-4EED-BDC3-0223CE5197A1@vanderbilt.edu> Message-ID: <20160422161220.135f209a@osc.edu> On Fri, 22 Apr 2016 19:52:35 +0000 "Simon Thompson (Research Computing - IT Services)" wrote: > > Did you do a kernel upgrade as well? > > I've seen similar when you get dangling symlinks in the weak updates kernel > module directory. > Simon I've had exactly the same experience here. From 4.x going back to early 3.4 with this error. Ed > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org > [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Buterbaugh, Kevin L > [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: 22 April 2016 16:43 To: gpfsug main > discussion list Subject: [gpfsug-discuss] make InstallImages errors > > Hi All, > > We have a small test cluster that I am upgrading from GPFS 4.1.0.8 (efix21) > to GPFS 4.2.0.2. I noticed that on 2 of my 3 NSD servers I received the > following errors: > > /usr/lpp/mmfs/src > root at testnsd3# make InstallImages > (cd gpl-linux; /usr/bin/make InstallImages; \ > exit $?) || exit 1 > make[1]: Entering directory `/usr/lpp/mmfs/src/gpl-linux' > Pre-kbuild step 1... > depmod: ERROR: fstatat(4, mmfs26.ko): No such file or directory > depmod: ERROR: fstatat(4, mmfslinux.ko): No such file or directory > depmod: ERROR: fstatat(4, tracedev.ko): No such file or directory > make[1]: Leaving directory `/usr/lpp/mmfs/src/gpl-linux' > /usr/lpp/mmfs/src > root at testnsd3# > > However, they don?t seem to actually impact anything ? i.e. GPFS starts up > just fine on the box and the upgrade is apparently successful: > > /root > root at testnsd3# mmgetstate > > Node number Node name GPFS state > ------------------------------------------ > 3 testnsd3 active > /root > root at testnsd3# mmdiag --version > > === mmdiag: version === > Current GPFS build: "4.2.0.2 ". > Built on Mar 7 2016 at 10:28:55 > Running 5 minutes 5 secs > /root > root at testnsd3# > > So just to satisfy my own curiosity, has anyone else seen this and can > anybody explain what that?s all about? OS is latest CentOS 7, BTW. Thanks? > > Kevin > > ? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and Education > Kevin.Buterbaugh at vanderbilt.edu - > (615)875-9633 > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Ed Wahl Ohio Supercomputer Center 614-292-9302 From jan.finnerman at load.se Mon Apr 25 21:27:13 2016 From: jan.finnerman at load.se (Jan Finnerman Load) Date: Mon, 25 Apr 2016 20:27:13 +0000 Subject: [gpfsug-discuss] Dell Multipath Message-ID: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Hi, I realize this might not be strictly GPFS related but I?m getting a little desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and struggle on a question of disk multipathing for the intended NSD disks with their direct attached SAS disk systems. If I do a multipath ?ll, after a few seconds I just get the prompt back. 
I expected to see the usual big amount of path info, but nothing there. If I do a multipathd ?k and then a show config, I see all the Dell disk luns with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. devices. I can also add them in PowerKVM:s Kimchi web interface and even deploy a GPFS installation on it. The big question is, though, how do I get multipathing to work ? Do I need any special driver or setting in the multipath.conf file ? I found some of that but more generic e.g. for RedHat 6, but now we are in PowerKVM country. The platform consists of: 4x IBM S812L servers SAS controller PowerKVM 3.1 Red Hat 7.1 2x Dell MD3460 SAS disk systems No switches Jan ///Jan [cid:E11C3C62-0896-4FE2-9DCF-FFA5CF812B75] Jan Finnerman Senior Technical consultant [CertTiv_sm] [cid:621A25E3-E641-4D21-B2C3-0C93AB8B73B6] Kista Science Tower 164 51 Kista Mobil: +46 (0)70 631 66 26 Kontor: +46 (0)8 633 66 00/26 jan.finnerman at load.se -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png Type: image/png Size: 5565 bytes Desc: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png Type: image/png Size: 8584 bytes Desc: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1][5].png Type: image/png Size: 6664 bytes Desc: CertPowerSystems_sm[1][5].png URL: From jenocram at gmail.com Mon Apr 25 21:37:18 2016 From: jenocram at gmail.com (Jeno Cram) Date: Mon, 25 Apr 2016 16:37:18 -0400 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: Is multipathd running? Also make sure you don't have them blacklisted in your multipath.conf. On Apr 25, 2016 4:27 PM, "Jan Finnerman Load" wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a little > desperate here? > I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and > struggle on a question of disk multipathing for the intended NSD disks with > their direct attached SAS disk systems. > If I do a *multipath ?ll*, after a few seconds I just get the prompt > back. I expected to see the usual big amount of path info, but nothing > there. > > If I do a *multipathd ?k* and then a show config, I see all the Dell disk > luns with reasonably right parameters. I can see them as /dev/sdf, > /dev/sdg, etc. devices. > I can also add them in PowerKVM:s Kimchi web interface and even deploy a > GPFS installation on it. The big question is, though, how do I get > multipathing to work ? > Do I need any special driver or setting in the multipath.conf file ? > I found some of that but more generic e.g. for RedHat 6, but now we are in > PowerKVM country. 
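Both of those checks are quick to run (a sketch; the commands assume the RHEL 7.1 hosts described above):

    systemctl status multipathd          # is the daemon actually running?
    systemctl start multipathd           # if not, start it ...
    systemctl enable multipathd          # ... and keep it enabled across reboots
    multipathd -k"show paths"            # should list every SAS path the MD3460s present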
> > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 *SAS* disk systems > No switches > > Jan > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > [image: CertTiv_sm] > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: F1EE9474-7BCC-41E6-8237-D949E9DC35D3[5].png Type: image/png Size: 5565 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: CertPowerSystems_sm[1][5].png Type: image/png Size: 6664 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: E895055E-B11B-47C3-BA29-E12D29D394FA[5].png Type: image/png Size: 8584 bytes Desc: not available URL: From ewahl at osc.edu Mon Apr 25 21:48:07 2016 From: ewahl at osc.edu (Edward Wahl) Date: Mon, 25 Apr 2016 16:48:07 -0400 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: <20160425164807.52f40d7a@osc.edu> Sounds like too wide of a blacklist. Have you specifically added the MD devices to the blacklist_exceptions? What does the overall blacklist and blacklist_exceptions look like? A quick 'lsscsi' should give you the vendor/product to stick into the blacklist_exception. Wildcards work with quotes there, as well if you have multiple similar but not exact enclosures. eg: "IBM 1818 FAStT" can become: device { vendor "IBM" product "1818*" } or Dell MD*, etc. If you have issues with things working in the interactive mode or debug mode (which usually turns out to be a timing problem) run a "multipath -v3" and check the output. It will normally tell you exactly why each disk device is being skipped. Things like "device node name blacklisted" or whitelisted. Ed Wahl OSC On Mon, 25 Apr 2016 20:27:13 +0000 Jan Finnerman Load wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a little > desperate here? I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a > customer and struggle on a question of disk multipathing for the intended NSD > disks with their direct attached SAS disk systems. If I do a multipath ?ll, > after a few seconds I just get the prompt back. I expected to see the usual > big amount of path info, but nothing there. > > If I do a multipathd ?k and then a show config, I see all the Dell disk luns > with reasonably right parameters. I can see them as /dev/sdf, /dev/sdg, etc. > devices. I can also add them in PowerKVM:s Kimchi web interface and even > deploy a GPFS installation on it. The big question is, though, how do I get > multipathing to work ? Do I need any special driver or setting in the > multipath.conf file ? I found some of that but more generic e.g. for RedHat > 6, but now we are in PowerKVM country. 
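For the MD3460s that exception stanza might look roughly like the following (the vendor/product strings here are assumptions; use exactly what lsscsi reports for your enclosures):

    blacklist_exceptions {
        device {
            vendor  "DELL"
            product "MD34*"
        }
    }

    # after editing /etc/multipath.conf, reload the maps and see why each
    # path is accepted or skipped
    multipath -r
    multipath -v3 2>&1 | grep -i blacklist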
> > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 SAS disk systems > No switches > > Jan > ///Jan > > [cid:E11C3C62-0896-4FE2-9DCF-FFA5CF812B75] > Jan Finnerman > Senior Technical consultant > > [CertTiv_sm] > > [cid:621A25E3-E641-4D21-B2C3-0C93AB8B73B6] > Kista Science Tower > 164 51 Kista > Mobil: +46 (0)70 631 66 26 > Kontor: +46 (0)8 633 66 00/26 > jan.finnerman at load.se -- Ed Wahl Ohio Supercomputer Center 614-292-9302 From mweil at genome.wustl.edu Mon Apr 25 21:50:02 2016 From: mweil at genome.wustl.edu (Matt Weil) Date: Mon, 25 Apr 2016 15:50:02 -0500 Subject: [gpfsug-discuss] Dell Multipath In-Reply-To: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se> Message-ID: <571E82FA.2000008@genome.wustl.edu> enable mpathconf --enable --with_multipathd y show config multipathd show config On 4/25/16 3:27 PM, Jan Finnerman Load wrote: > Hi, > > I realize this might not be strictly GPFS related but I?m getting a > little desperate here? > I?m doing an implementation of GPFS/Spectrum Scale 4.2 at a customer > and struggle on a question of disk multipathing for the intended NSD > disks with their direct attached SAS disk systems. > If I do a /*multipath ?ll*/, after a few seconds I just get the > prompt back. I expected to see the usual big amount of path info, but > nothing there. > > If I do a /*multipathd ?k*/ and then a show config, I see all the Dell > disk luns with reasonably right parameters. I can see them as > /dev/sdf, /dev/sdg, etc. devices. > I can also add them in PowerKVM:s Kimchi web interface and even deploy > a GPFS installation on it. The big question is, though, how do I get > multipathing to work ? > Do I need any special driver or setting in the multipath.conf file ? > I found some of that but more generic e.g. for RedHat 6, but now we > are in PowerKVM country. > > The platform consists of: > 4x IBM S812L servers > SAS controller > PowerKVM 3.1 > Red Hat 7.1 > 2x Dell MD3460 *SAS* disk systems > No switches > > Jan > > ///Jan > > > Jan Finnerman > > Senior Technical consultant > > > CertTiv_sm > > > Kista Science Tower > > 164 51 Kista > > Mobil: +46 (0)70 631 66 26 > > Kontor: +46 (0)8 633 66 00/26 > > jan.finnerman at load.se > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 8584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/png Size: 5565 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 6664 bytes Desc: not available URL: From stefan.dietrich at desy.de Tue Apr 26 22:01:52 2016 From: stefan.dietrich at desy.de (Dietrich, Stefan) Date: Tue, 26 Apr 2016 23:01:52 +0200 (CEST) Subject: [gpfsug-discuss] CES behind DNS RR and 16 group limitation? Message-ID: <183207187.6100390.1461704512921.JavaMail.zimbra@desy.de> Hello, we will soon start to deploy CES in our clusters, however two questions popped up. - According to the "CES NFS Support" in the "Implementing Cluster Export Services" documentation, DNS round-robin might lead to corrupted data with NFSv3: If a DNS Round Robin (RR) entry name is used to mount an NFSv3 export, data corruption and data unavailability might occur. The lock manager on the GPFS file system is not clustered-system-aware. The documentation does not state anything about NFSv4, so this restriction does not apply? Has somebody already experience with NFS and SMB mounts/exports behind a DNS RR entry? - For NFSv3 there is the known 16 supplementary group limitation. The CES option MANAGE_GIDS lifts this limitation and group lookup is performed on the protocl node itself. However, the NFS version is not mentioned in the docs. Would this work for NFSv4 with secType=sys as well or is this limited to NFSv3? With NFSv4 and secType=krb the 16 group limit does not apply, but I can think of some use-cases where the ticket handling might be problematic. Regards, Stefan -- ------------------------------------------------------------------------ Stefan Dietrich Deutsches Elektronen-Synchrotron (IT-Systems) Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 phone: +49-40-8998-4696 22607 Hamburg e-mail: stefan.dietrich at desy.de Germany ------------------------------------------------------------------------ From S.J.Thompson at bham.ac.uk Tue Apr 26 22:09:18 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 26 Apr 2016 21:09:18 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon From jonathan at buzzard.me.uk Tue Apr 26 22:27:24 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 26 Apr 2016 22:27:24 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <571FDD3C.3080801@buzzard.me.uk> On 26/04/16 22:09, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We've had some reports from some of our users that out CES SMB > exports are slow to access. > > It appears that this is only when the client is a Linux system and > using SMB to access the file-system. In fact if we dual boot the same > box, we can get sensible speeds out of it (I.e. Not network problems > to the client system). > > They also report that access to real Windows based file-servers works > at sensible speeds. 
Maybe the Win file servers support SMB1, but has > anyone else seen this, or have any suggestions? > In the past I have seen huge difference between opening up a terminal and doing a mount -t cifs ... and mapping the drive in Gnome. The later is a fraction of the performance of the first. I suspect that KDE is similar but I have not used KDE in anger now for 17 years. I would say we need to know what version of Linux you are having issues with and what method of attaching to the server you are using. In general best performance comes from a proper mount. If you have not tried that yet do so first. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at gmail.com Tue Apr 26 23:48:23 2016 From: oehmes at gmail.com (Sven Oehme) Date: Tue, 26 Apr 2016 15:48:23 -0700 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) wrote: > Hi, > > We've had some reports from some of our users that out CES SMB exports are > slow to access. > > It appears that this is only when the client is a Linux system and using > SMB to access the file-system. In fact if we dual boot the same box, we can > get sensible speeds out of it (I.e. Not network problems to the client > system). > > They also report that access to real Windows based file-servers works at > sensible speeds. Maybe the Win file servers support SMB1, but has anyone > else seen this, or have any suggestions? > > Thanks > > Simon > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Wed Apr 27 01:21:09 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Wed, 27 Apr 2016 03:21:09 +0300 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Hi Please run this command: # mmsmb export list export path guest ok smb encrypt cifs /gpfs1/cifs no disabled mixed /gpfs1/mixed no disabled cifs-text /gpfs/gpfs2/cifs-text/ no auto nfs-text /gpfs/gpfs2/nfs-text/ no auto Try to disable "smb encrypt" value, and try again. Example: #mmsmb export change --option "smb encrypt=disabled" cifs-text Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Sven Oehme To: gpfsug main discussion list Date: 04/27/2016 01:48 AM Subject: Re: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) wrote: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. 
Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From A.K.Ghumra at bham.ac.uk Wed Apr 27 09:11:35 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Wed, 27 Apr 2016 08:11:35 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: From secretary at gpfsug.org Wed Apr 27 10:46:18 2016 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Wed, 27 Apr 2016 10:46:18 +0100 Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events Message-ID: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 [1] Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. 
Tentative Agenda: * 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 * Enhancements for CORAL from IBM * Panel discussion with customers, topic TBD * AFM and integration with Spectrum Protect * Best practices for GPFS or Spectrum Scale Tuning. * At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ---- 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ---- We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal Links: ------ [1] https://www.spxxl.org/?q=New-York-City-2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.K.Ghumra at bham.ac.uk Wed Apr 27 12:35:55 2016 From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management)) Date: Wed, 27 Apr 2016 11:35:55 +0000 Subject: [gpfsug-discuss] SMB access speed Message-ID: Apologies, I meant Mbps not Gbps Regards, Aslam Research Computing Team DDI: +44 (121) 414 5877 | Skype: JanitorX | Twitter: @aslamghumra | a.k.ghumra at bham.ac.uk | intranet.birmingham.ac.uk/bear -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of gpfsug-discuss-request at spectrumscale.org Sent: 27 April 2016 12:00 To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 51, Issue 48 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. SMB access speed (Aslam Ghumra (IT Services, Facilities Management)) 2. US GPFS/Spectrum Scale Events (Secretary GPFS UG) ---------------------------------------------------------------------- Message: 1 Date: Wed, 27 Apr 2016 08:11:35 +0000 From: "Aslam Ghumra (IT Services, Facilities Management)" To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] SMB access speed Message-ID: Content-Type: text/plain; charset="iso-8859-1" As Simon has reported, the speed of access on Linux system are slow. 
We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Wed, 27 Apr 2016 10:46:18 +0100 From: Secretary GPFS UG To: gpfsug main discussion list Cc: "usa-principal-gpfsug.org" , usa-co-principal at gpfsug.org, Chair , Gorini Stefano Claudio Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events Message-ID: <21b651c4a310b67c139fccff707dce97 at webmail.gpfsug.org> Content-Type: text/plain; charset="us-ascii" Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We'd like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 [1] Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: * 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 * Enhancements for CORAL from IBM * Panel discussion with customers, topic TBD * AFM and integration with Spectrum Protect * Best practices for GPFS or Spectrum Scale Tuning. * At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ---- 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 
11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ---- We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal Links: ------ [1] https://www.spxxl.org/?q=New-York-City-2016 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 51, Issue 48 ********************************************** From jonathan at buzzard.me.uk Wed Apr 27 12:40:37 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 12:40:37 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <1461757237.1434.178.camel@buzzard.phy.strath.ac.uk> On Wed, 2016-04-27 at 08:11 +0000, Aslam Ghumra (IT Services, Facilities Management) wrote: > As Simon has reported, the speed of access on Linux system are slow. > > > We've just used the mount command as below > > > mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o > noperm //<> /media/mnt1 > Try dialing back on the SMB version would be my first port of call. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From Kevin.Buterbaugh at Vanderbilt.Edu Wed Apr 27 14:10:32 2016 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Wed, 27 Apr 2016 13:10:32 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: Message-ID: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Hi All, Question - why are you SAMBA mounting to Linux clients instead of CNFS mounting? We don?t use CES (yet) here, but our ?rules? are: 1) if you?re a Linux client, you CNFS mount. 2) if you?re a Windows client, you SAMBA mount. 3) if you?re a Mac client, you can do either. (C)NFS seems to be must more stable and less problematic than SAMBA, in our experience. Just trying to understand? Kevin On Apr 27, 2016, at 3:11 AM, Aslam Ghumra (IT Services, Facilities Management) > wrote: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. 
Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 27 14:16:57 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 27 Apr 2016 13:16:57 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: We don't manage the Linux systems, wr have no control over identity or authentication on them, but we do for SMB access. Simon -----Original Message----- From: Buterbaugh, Kevin L [Kevin.Buterbaugh at Vanderbilt.Edu] Sent: Wednesday, April 27, 2016 02:11 PM GMT Standard Time To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB access speed Hi All, Question - why are you SAMBA mounting to Linux clients instead of CNFS mounting? We don?t use CES (yet) here, but our ?rules? are: 1) if you?re a Linux client, you CNFS mount. 2) if you?re a Windows client, you SAMBA mount. 3) if you?re a Mac client, you can do either. (C)NFS seems to be must more stable and less problematic than SAMBA, in our experience. Just trying to understand? Kevin On Apr 27, 2016, at 3:11 AM, Aslam Ghumra (IT Services, Facilities Management) > wrote: As Simon has reported, the speed of access on Linux system are slow. We've just used the mount command as below mount -t cifs -o vers=3.0 -o domain=ADF -o username=USERNAME -o noperm //<> /media/mnt1 However we've seen other users The users we've been talking to use Ubuntu, version 14.04 Smbclient version : Version 4.1.6-Ubuntu Mount.cifs version is : 6.0 version 15.10 Smbclient version : Version 4.3.8-Ubuntu Mount.cifs version is : 6.4 We've run the same tests and have seen similar speeds around 6Gbps, whereas windows gives is around 40Gbps The machine is connected via ethernet cable and we've also had one user add the export as part of the /etc/fstab file, but the issues of speed still persist whether you use the mount command or use the fstab file. I've not tested on centos / redhat but will do that and report back. Aslam Ghumra Research Data Management ____________________________ IT Services Elms Road Data Centre Building G5 Edgbaston Birmingham B15 2TT T: 0121 414 5877 F; 0121 414 3952 Skype : JanitorX Twitter : @aslamghumra http://intranet.bham.ac.uk/bear _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan at buzzard.me.uk Wed Apr 27 19:57:33 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 19:57:33 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: <57210B9D.8080906@buzzard.me.uk> On 27/04/16 14:10, Buterbaugh, Kevin L wrote: > Hi All, > > Question - why are you SAMBA mounting to Linux clients instead of CNFS > mounting? We don?t use CES (yet) here, but our ?rules? are: > > 1) if you?re a Linux client, you CNFS mount. > 2) if you?re a Windows client, you SAMBA mount. > 3) if you?re a Mac client, you can do either. > > (C)NFS seems to be must more stable and less problematic than SAMBA, in > our experience. Just trying to understand? > My rule that trumps all those is that a given share is available via SMB *OR* NFS, but never both. Therein lies the path to great pain in the future. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From bpappas at dstonline.com Wed Apr 27 20:38:06 2016 From: bpappas at dstonline.com (Bill Pappas) Date: Wed, 27 Apr 2016 19:38:06 +0000 Subject: [gpfsug-discuss] GPFS discussions Message-ID: Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Apr 27 20:47:55 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 27 Apr 2016 20:47:55 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: <1A3FECFA-E32E-40D9-8375-EF1A8E93896E@vanderbilt.edu> Message-ID: <5721176B.5020809@buzzard.me.uk> On 27/04/16 14:16, Simon Thompson (Research Computing - IT Services) wrote: > We don't manage the Linux systems, wr have no control over identity or > authentication on them, but we do for SMB access. > Does not the combination of Ganesha and NFSv4 with Kerberos fix that? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From S.J.Thompson at bham.ac.uk Wed Apr 27 20:52:46 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Wed, 27 Apr 2016 19:52:46 +0000 Subject: [gpfsug-discuss] GPFS discussions In-Reply-To: References: Message-ID: Hi Bill, As a user community, we organise events in the UK and USA, we post them on the mailing list and the group website - www.spectrumscale.org. There are a few types of events, meet the devs, which are typically a small group of customers, an integrator or two, and a few developers. We also do @conference events, for example at Super Computing (USA), Computing Insights UK, ibm are also trying to get a meeting running at ISC as well. We then have the larger annual events, for example in the UK we have a meeting in May. These are typically larger meetings with IBM speakers, customer talks and partner talks. Finally there are events organsied/advertised with other groups, for example SPXXL, where in the UK last year we ran with SPXXL's meeting. This is also happening in NYC in a few weeks. In the UK we have a much smaller geographic problem than the USA, we've also been going a lot longer - the USA side chapter only launched September last year, and Kristy and Bob are building the activity over there. I think if there was interest in a an informal (e.g.) 
state meeting that people wanted to coordinate with Kristy/Bob, then we could advertise to the list. Of course all of those involved in organising from the user side of things have real jobs as well and getting big meetings up and running takes quite a lot of work (agendas, speakers, venues, lunches, registration...) Simon (uk group chair) ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bill Pappas [bpappas at dstonline.com] Sent: 27 April 2016 20:38 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] GPFS discussions Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com From Greg.Lehmann at csiro.au Thu Apr 28 00:27:03 2016 From: Greg.Lehmann at csiro.au (Greg.Lehmann at csiro.au) Date: Wed, 27 Apr 2016 23:27:03 +0000 Subject: [gpfsug-discuss] GPFS discussions In-Reply-To: References: Message-ID: Hi Bill, In Australia, I've been lobbying IBM to do something locally, after the great UG meeting at SC15 in Austin. It is looking like they might tack something onto the annual tech symposium they have here - no time frame yet but August has been when it happened for the last couple of years. At that event we should be able to gauge interest on whether we can form a local UG. The advantage of the timing is that a lot of experts will be in the country for the Tech Symposium. They are also talking about another local HPC focused event in the same time frame. My guess is it may well be all bundled together. Here's hoping it comes off. It might give some of you an excuse to come to Australia! Seriously, I am jealous of the events I see happening in the UK. Cheers, Greg Lehmann Senior High Performance Data Specialist Data Services | Scientific Computing Platforms CSIRO Information Management and Technology Phone: +61 7 3327 4137 | Fax: +61 1 3327 4455 Greg.Lehmann at csiro.au | www.csiro.au Address: 1 Technology Court, Pullenvale, QLD 4069 PLEASE NOTE The information contained in this email may be confidential or privileged. Any unauthorised use or disclosure is prohibited. If you have received this email in error, please delete it immediately and notify the sender by return email. Thank you. To the extent permitted by law, CSIRO does not represent, warrant and/or guarantee that the integrity of this communication has been maintained or that the communication is free of errors, virus, interception or interference. Please consider the environment before printing this email. -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Simon Thompson (Research Computing - IT Services) Sent: Thursday, 28 April 2016 5:53 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFS discussions Hi Bill, As a user community, we organise events in the UK and USA, we post them on the mailing list and the group website - www.spectrumscale.org. There are a few types of events, meet the devs, which are typically a small group of customers, an integrator or two, and a few developers. We also do @conference events, for example at Super Computing (USA), Computing Insights UK, ibm are also trying to get a meeting running at ISC as well. We then have the larger annual events, for example in the UK we have a meeting in May. 
These are typically larger meetings with IBM speakers, customer talks and partner talks. Finally there are events organsied/advertised with other groups, for example SPXXL, where in the UK last year we ran with SPXXL's meeting. This is also happening in NYC in a few weeks. In the UK we have a much smaller geographic problem than the USA, we've also been going a lot longer - the USA side chapter only launched September last year, and Kristy and Bob are building the activity over there. I think if there was interest in a an informal (e.g.) state meeting that people wanted to coordinate with Kristy/Bob, then we could advertise to the list. Of course all of those involved in organising from the user side of things have real jobs as well and getting big meetings up and running takes quite a lot of work (agendas, speakers, venues, lunches, registration...) Simon (uk group chair) ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Bill Pappas [bpappas at dstonline.com] Sent: 27 April 2016 20:38 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] GPFS discussions Where do other users in this group meet to discuss GPFS advancements and share experiences/how-tos? How often? I am speaking of conferences, etc. Thank you, Bill Pappas 901-619-0585 bpappas at dstonline.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From usa-principal at gpfsug.org Thu Apr 28 15:19:51 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Thu, 28 Apr 2016 10:19:51 -0400 Subject: [gpfsug-discuss] US GPFS/Spectrum Scale Events In-Reply-To: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Message-ID: Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. -Kristy > On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG wrote: > > Dear All, > > Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. > > Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 > > This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 > > If you wish to register, please do so via the Eventbrite page. > > Kind regards, > > -- > Claire O'Toole > Spectrum Scale/GPFS User Group Secretary > +44 (0)7508 033896 > www.spectrumscaleug.org > > > --- > > Hello all, > > We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. > > 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. > > > Tentative Agenda: > ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 > ? Enhancements for CORAL from IBM > ? Panel discussion with customers, topic TBD > ? AFM and integration with Spectrum Protect > ? Best practices for GPFS or Spectrum Scale Tuning. > ? 
At least one site update > > Location: > New York Academy of Medicine > 1216 Fifth Avenue > New York, NY 10029 > > ?? > > 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! > > Location: Argonne National Lab more details and final agenda will come later. > > Tentative Agenda: > > > 9:00a-12:30p > 9-9:30a - Opening Remarks > 9:30-10a Deep Dive - Update on ESS > 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) > 11-11:30 Break > 11:30a-Noon - Deep Dive - Protect & Scale integration > Noon-12:30p HDFS/Hadoop > > 12:30 - 1:30p Lunch > > 1:30p-5:00p > 1:30 - 2:00p IBM AFM Update > 2:00-2:30p ANL: AFM as a burst buffer > 2:30-3:00p ANL: GHI (GPFS HPSS Integration) > 3:00-3:30p Break > 3:30p - 4:00p LANL: ? or other site preso > 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences > 4:30p -5:00p Closing comments and Open Forum for Questions > > 5:00 - ? > Beer hunting? > > > ?? > > > We hope you can attend one or both of these events. > > Best, > Kristy Kallback-Rose & Bob Oesterlin > GPFS Users Group - USA Chapter - Principal & Co-principal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Mark.Roberts at awe.co.uk Thu Apr 28 15:40:18 2016 From: Mark.Roberts at awe.co.uk (Mark.Roberts at awe.co.uk) Date: Thu, 28 Apr 2016 14:40:18 +0000 Subject: [gpfsug-discuss] EXTERNAL: Re: US GPFS/Spectrum Scale Events In-Reply-To: References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> Message-ID: <201604281438.u3SEckmo029951@msw1.awe.co.uk> Kirsty, Thank you for the heads up. I?m guessing that those people who have already registered for XXL prior to this option should proceed to the Eventbrite page and register the GPFS day ? Regards Mark Roberts AWE From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of GPFS UG USA Principal Sent: 28 April 2016 15:20 To: Secretary GPFS UG Cc: usa-co-principal at gpfsug.org; Chair ; gpfsug main discussion list ; Gorini Stefano Claudio Subject: EXTERNAL: Re: [gpfsug-discuss] US GPFS/Spectrum Scale Events Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. -Kristy On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG > wrote: Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. 
More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Thu Apr 28 15:47:18 2016 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Thu, 28 Apr 2016 14:47:18 +0000 Subject: [gpfsug-discuss] EXTERNAL: Re: US GPFS/Spectrum Scale Events In-Reply-To: <201604281438.u3SEckmo029951@msw1.awe.co.uk> References: <21b651c4a310b67c139fccff707dce97@webmail.gpfsug.org> <201604281438.u3SEckmo029951@msw1.awe.co.uk> Message-ID: Stefano, Can you take this one? Thanks, Kristy On Apr 28, 2016, at 10:40 AM, Mark.Roberts at awe.co.uk wrote: Kirsty, Thank you for the heads up. I?m guessing that those people who have already registered for XXL prior to this option should proceed to the Eventbrite page and register the GPFS day ? Regards Mark Roberts AWE From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of GPFS UG USA Principal Sent: 28 April 2016 15:20 To: Secretary GPFS UG > Cc: usa-co-principal at gpfsug.org; Chair >; gpfsug main discussion list >; Gorini Stefano Claudio > Subject: EXTERNAL: Re: [gpfsug-discuss] US GPFS/Spectrum Scale Events Thank you Claire. All, please note on the SPXXL registration page referenced below there is now a $0 May 26 GPFS Day Registration option. 
-Kristy On Apr 27, 2016, at 5:46 AM, Secretary GPFS UG > wrote: Dear All, Following on from Kristy and Bob's email a few weeks ago, the SPXXL meeting this year has been organised in collaboration with the US section of Spectrum Scale UG. Registration is now open for the GPFS day on Eventbrite: https://www.eventbrite.com/e/spxxlscicomp-2016-summer-meeting-registration-24444020724 This is the page related to the event itself: https://www.spxxl.org/?q=New-York-City-2016 If you wish to register, please do so via the Eventbrite page. Kind regards, -- Claire O'Toole Spectrum Scale/GPFS User Group Secretary +44 (0)7508 033896 www.spectrumscaleug.org --- Hello all, We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. Tentative Agenda: ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 ? Enhancements for CORAL from IBM ? Panel discussion with customers, topic TBD ? AFM and integration with Spectrum Protect ? Best practices for GPFS or Spectrum Scale Tuning. ? At least one site update Location: New York Academy of Medicine 1216 Fifth Avenue New York, NY 10029 ?? 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! Location: Argonne National Lab more details and final agenda will come later. Tentative Agenda: 9:00a-12:30p 9-9:30a - Opening Remarks 9:30-10a Deep Dive - Update on ESS 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) 11-11:30 Break 11:30a-Noon - Deep Dive - Protect & Scale integration Noon-12:30p HDFS/Hadoop 12:30 - 1:30p Lunch 1:30p-5:00p 1:30 - 2:00p IBM AFM Update 2:00-2:30p ANL: AFM as a burst buffer 2:30-3:00p ANL: GHI (GPFS HPSS Integration) 3:00-3:30p Break 3:30p - 4:00p LANL: ? or other site preso 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences 4:30p -5:00p Closing comments and Open Forum for Questions 5:00 - ? Beer hunting? ?? We hope you can attend one or both of these events. Best, Kristy Kallback-Rose & Bob Oesterlin GPFS Users Group - USA Chapter - Principal & Co-principal The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Thu Apr 28 22:04:58 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 28 Apr 2016 21:04:58 +0000 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> References: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Message-ID: Ok, we are going to try this out and see if this makes a difference. The Windows server which is "faster" from Linux is running Server 2008R2, so I guess isn't doing encrypted SMB. Will report back next week once we've run some tests. Simon -----Original Message----- From: Yaron Daniel [YARD at il.ibm.com] Sent: Wednesday, April 27, 2016 01:21 AM GMT Standard Time To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB access speed Hi Please run this command: # mmsmb export list export path guest ok smb encrypt cifs /gpfs1/cifs no disabled mixed /gpfs1/mixed no disabled cifs-text /gpfs/gpfs2/cifs-text/ no auto nfs-text /gpfs/gpfs2/nfs-text/ no auto Try to disable "smb encrypt" value, and try again. Example: #mmsmb export change --option "smb encrypt=disabled" cifs-text Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:_1_0D90DCD00D90D73C0001EFFAC2257FA2] Server, Storage and Data Services- Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Sven Oehme To: gpfsug main discussion list Date: 04/27/2016 01:48 AM Subject: Re: [gpfsug-discuss] SMB access speed Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ can you check what OS level this is ? i have seen reports from issues with RHEL 7 clients and SMB On Tue, Apr 26, 2016 at 2:09 PM, Simon Thompson (Research Computing - IT Services) > wrote: Hi, We've had some reports from some of our users that out CES SMB exports are slow to access. It appears that this is only when the client is a Linux system and using SMB to access the file-system. In fact if we dual boot the same box, we can get sensible speeds out of it (I.e. Not network problems to the client system). They also report that access to real Windows based file-servers works at sensible speeds. Maybe the Win file servers support SMB1, but has anyone else seen this, or have any suggestions? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ATT00001.gif Type: image/gif Size: 1851 bytes Desc: ATT00001.gif URL: From usa-principal at gpfsug.org Thu Apr 28 22:44:32 2016 From: usa-principal at gpfsug.org (GPFS UG USA Principal) Date: Thu, 28 Apr 2016 17:44:32 -0400 Subject: [gpfsug-discuss] GPFS/Spectrum Scale Upcoming US Events - Save the Dates In-Reply-To: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> References: <4192D644-E3AB-4B7B-AF7A-96D3D617FA7B@gpfsug.org> Message-ID: <9489DBA2-1F12-4B05-A968-5D4855FBEA40@gpfsug.org> All, the registration page for the second event listed below at Argonne National Lab on June 10th is now up. An updated agenda is also at this site. 
Please register here: https://www.regonline.com/Spectrumscalemeeting We look forward to seeing some of you at these upcoming events. Feel free to send suggestions for future events in your area. Cheers, -Kristy > On Apr 4, 2016, at 4:52 PM, GPFS UG USA Principal wrote: > > Hello all, > > We?d like to announce two upcoming US GPFS/Spectrum Scale Events. One on the east coast, one in the midwest. > > 1) May 26th (full day event): GPFS/Spectrum Scale Day at the SPXXL conference in NYC https://www.spxxl.org/?q=New-York-City-2016 Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged. More details about the agenda, hotel and other logistics will come later this month. > > Tentative Agenda: > ? 10 reasons for upgrading from GPFS 3.5 to Spectrum Scale 4.2.1 > ? Enhancements for CORAL from IBM > ? Panel discussion with customers, topic TBD > ? AFM and integration with Spectrum Protect > ? Best practices for GPFS or Spectrum Scale Tuning. > ? At least one site update > > Location: > New York Academy of Medicine > 1216 Fifth Avenue > New York, NY 10029 > > ?? > > 2) June 10th (full day event): GPFS/Spectrum Scale Users Group Meeting at Argonne National Lab (ANL). Thanks to Argonne for hosting this event. Developers and Engineers from IBM will be at the meeting to cover topics, and open dialogue between IBM and customers will be encouraged, as usual no marketing pitches! > > Location: Argonne National Lab more details and final agenda will come later. > > Tentative Agenda: > > 9:00a-12:30p > 9-9:30a - Opening Remarks > 9:30-10a Deep Dive - Update on ESS > 10a-11a Deep Dive - Problem Determination (Presentation 30 min/Panel 30 min?) > 11-11:30 Break > 11:30a-Noon - Deep Dive - Protect & Scale integration > Noon-12:30p HDFS/Hadoop > > 12:30 - 1:30p Lunch > > 1:30p-5:00p > 1:30 - 2:00p IBM AFM Update > 2:00-2:30p ANL: AFM as a burst buffer > 2:30-3:00p ANL: GHI (GPFS HPSS Integration) > 3:00-3:30p Break > 3:30p - 4:00p LANL: ? or other site preso > 4:00-4:30p Nuance: GPFS Performance Sensors Deployment Experiences > 4:30p -5:00p Closing comments and Open Forum for Questions > > 5:00 - ? > Beer hunting? > > ?? > > We hope you can attend one or both of these events. > > Best, > Kristy Kallback-Rose & Bob Oesterlin > GPFS Users Group - USA Chapter - Principal & Co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Thu Apr 28 23:57:42 2016 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 28 Apr 2016 23:57:42 +0100 Subject: [gpfsug-discuss] SMB access speed In-Reply-To: References: <201604270021.u3R0LEUB003277@d06av02.portsmouth.uk.ibm.com> Message-ID: <57229566.7060009@buzzard.me.uk> On 28/04/16 22:04, Simon Thompson (Research Computing - IT Services) wrote: > Ok, we are going to try this out and see if this makes a difference. The > Windows server which is "faster" from Linux is running Server 2008R2, so > I guess isn't doing encrypted SMB. > A quick poke in the Linux source code suggests that the CIFS encryption is handled with standard kernel crypto routines, but and here is the big but, whether you get any hardware acceleration is going to depend heavily on the CPU in the machine. Don't have the right CPU and you won't get it being done in hardware and the performance would I expect take a dive. 
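A rough way to check that (assuming an x86 client; this is a generic check, not anything specific to Samba or GPFS) is to look for the AES-NI flag that the kernel crypto code needs before it can do AES in hardware:

    grep -wo aes /proc/cpuinfo | head -1
    # no output = no AES-NI, so any SMB3 encryption is being done in software
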
I imagine it is like scp; making sure all your ducks are lined up and both server and client are doing hardware accelerated encryption is more complicated that it appears at first sight. Lots of lower end CPU's seem to be missing hardware accelerated encryption. Anyway boot into Windows 7 and you get don't get encryption, connect to 2008R2 and you don't get encryption and it all looks better. A quick Google suggests encryption didn't hit till Windows 8 and Server 2012. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From zgiles at gmail.com Fri Apr 29 05:22:03 2016 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 29 Apr 2016 00:22:03 -0400 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? Message-ID: Fellow GPFS Users, I have a silly question about file replicas... I've been playing around with copies=2 (or 3) and hoping that this would protect against data corruption on poor-quality RAID controllers.. but it seems that if I purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't take over, rather GPFS just returns corrupt data. This includes if just "dd" into the disk, or if I break the RAID controller somehow by yanking whole chassis and the controller responds poorly for a few seconds. Originally my thinking was that replicas were for mirroring and GPFS would somehow return whichever is the "good" copy of your data, but now I'm thinking it's just intended for better file placement.. such as having a near replica and a far replica so you dont have to cross buildings for access, etc. That, and / or, disk outages where the outage is not corruption, just simply outage either by failure or for disk-moves, SAN rewiring, etc. In those cases you wouldn't have to "move" all the data since you already have a second copy. I can see how that would makes sense.. Somehow I guess I always knew this.. but it seems many people say they will just turn on copies=2 and be "safe".. but it's not the case.. Which way is the intended? Has anyone else had experience with this realization? Thanks, -Zach -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Fri Apr 29 10:22:10 2016 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Fri, 29 Apr 2016 11:22:10 +0200 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? In-Reply-To: References: Message-ID: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> Zach, GPFS replication does not include automatically a comparison of the replica copies. It protects against one part (i.e. one FG, or two with 3-fold replication) of the storage being down. How should GPFS know what version is the good one if both replica copies are readable? There are tools in 4.x to compare the replicas, but do use them only from 4.2 onward (problems with prior versions). Still then you need to decide what is the "good" copy (there is a consistency check on MD replicas though, but correct/incorrect data blocks cannot be auto-detected for obvious reasons). E2E Check-summing (as in GNR) would of course help here. Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 
7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Frank Hammer, Thorsten Moehring Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: Zachary Giles To: gpfsug main discussion list Date: 04/29/2016 06:22 AM Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? Sent by: gpfsug-discuss-bounces at spectrumscale.org Fellow GPFS Users, I have a silly question about file replicas... I've been playing around with copies=2 (or 3) and hoping that this would protect against data corruption on poor-quality RAID controllers.. but it seems that if I purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't take over, rather GPFS just returns corrupt data. This includes if just "dd" into the disk, or if I break the RAID controller somehow by yanking whole chassis and the controller responds poorly for a few seconds. Originally my thinking was that replicas were for mirroring and GPFS would somehow return whichever is the "good" copy of your data, but now I'm thinking it's just intended for better file placement.. such as having a near replica and a far replica so you dont have to cross buildings for access, etc. That, and / or, disk outages where the outage is not corruption, just simply outage either by failure or for disk-moves, SAN rewiring, etc. In those cases you wouldn't have to "move" all the data since you already have a second copy. I can see how that would makes sense.. Somehow I guess I always knew this.. but it seems many people say they will just turn on copies=2 and be "safe".. but it's not the case.. Which way is the intended? Has anyone else had experience with this realization? Thanks, -Zach -- Zach Giles zgiles at gmail.com_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From zgiles at gmail.com Fri Apr 29 13:18:29 2016 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 29 Apr 2016 08:18:29 -0400 Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? In-Reply-To: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> References: <201604290922.u3T9MGYY000400@d06av01.portsmouth.uk.ibm.com> Message-ID: Hi Uwe, You're right.. how would it know which one is the good one? I had imagined it would at least compare some piece of metadata to the block's metadata on retrieval, maybe generation number, something... However, when I think about that, it doesnt make any sense. The block on-disk is purely the data, no metadata. Thus, there won't be any structural issues when retrieving a bad block. What is the tool in 4.2 that you are referring to for comparing replicas? I'd be interested in trying it out. I didn't happen to pass-by any mmrestripefs options for that.. maybe I missed something. E2E I guess is what I'm looking for, but not on GNR. I'm just trying to investigate failure cases possible on standard-RAID hardware. I'm sure we've all had a RAID controller or two that have failed in interesting ways... -Zach On Fri, Apr 29, 2016 at 5:22 AM, Uwe Falke wrote: > Zach, > GPFS replication does not include automatically a comparison of the > replica copies. > It protects against one part (i.e. 
one FG, or two with 3-fold replication) > of the storage being down. > How should GPFS know what version is the good one if both replica copies > are readable? > > There are tools in 4.x to compare the replicas, but do use them only from > 4.2 onward (problems with prior versions). Still then you need to decide > what is the "good" copy (there is a consistency check on MD replicas > though, but correct/incorrect data blocks cannot be auto-detected for > obvious reasons). E2E Check-summing (as in GNR) would of course help here. > > > Mit freundlichen Gr??en / Kind regards > > > Dr. Uwe Falke > > IT Specialist > High Performance Computing Services / Integrated Technology Services / > Data Center Services > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland > Rathausstr. 7 > 09111 Chemnitz > Phone: +49 371 6978 2165 > Mobile: +49 175 575 2877 > E-Mail: uwefalke at de.ibm.com > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: > Frank Hammer, Thorsten Moehring > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, > HRB 17122 > > > > > From: Zachary Giles > To: gpfsug main discussion list > Date: 04/29/2016 06:22 AM > Subject: [gpfsug-discuss] GPFS and replication.. not a mirror? > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Fellow GPFS Users, > > I have a silly question about file replicas... I've been playing around > with copies=2 (or 3) and hoping that this would protect against data > corruption on poor-quality RAID controllers.. but it seems that if I > purposefully corrupt blocks on a LUN used by GPFS, the "replica" doesn't > take over, rather GPFS just returns corrupt data. This includes if just > "dd" into the disk, or if I break the RAID controller somehow by yanking > whole chassis and the controller responds poorly for a few seconds. > > Originally my thinking was that replicas were for mirroring and GPFS would > somehow return whichever is the "good" copy of your data, but now I'm > thinking it's just intended for better file placement.. such as having a > near replica and a far replica so you dont have to cross buildings for > access, etc. That, and / or, disk outages where the outage is not > corruption, just simply outage either by failure or for disk-moves, SAN > rewiring, etc. In those cases you wouldn't have to "move" all the data > since you already have a second copy. I can see how that would makes > sense.. > > Somehow I guess I always knew this.. but it seems many people say they > will just turn on copies=2 and be "safe".. but it's not the case.. > > Which way is the intended? > Has anyone else had experience with this realization? > > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
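For reference, the 4.x/4.2 replica-comparison function mentioned above appears to be the -c option of mmrestripefs. The lines below are a sketch rather than a recipe ("gpfs0" is a placeholder file system name), so check the mmrestripefs man page at your code level for the exact behaviour before running it in anger:

    # scan the file system and compare the data/metadata replicas
    mmrestripefs gpfs0 -c

    # restore the replication level after a failure group has been down
    mmrestripefs gpfs0 -r

Note that comparing replicas can only show that the two copies differ; with plain RAID LUNs underneath (no end-to-end checksums) GPFS has no way of knowing which copy is the corrupt one, which is exactly the limitation described above.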
From A.K.Ghumra at bham.ac.uk Fri Apr 29 17:07:17 2016
From: A.K.Ghumra at bham.ac.uk (Aslam Ghumra (IT Services, Facilities Management))
Date: Fri, 29 Apr 2016 16:07:17 +0000
Subject: [gpfsug-discuss] SMB access speed
Message-ID:

Many thanks Yaron, after the change to disable encryption we were able to increase the speed of copying files via Ubuntu from the local desktop to our gpfs filestore, with average speeds of 60Mbps.

We also tried changing the mount from vers=3.0 to vers=2.1, which gave similar figures.

However, using the Ubuntu GUI (Unity) the speed drops down to 7Mbps; we're not too concerned, though, as the user will use rsync / cp.

The other issue is copying data from the gpfs filestore to the local HDD, which resulted in 4Mbps.

Aslam Ghumra
Research Data Management
____________________________
IT Services
Elms Road Data Centre
Building G5
Edgbaston
Birmingham B15 2TT
T: 0121 414 5877
F: 0121 414 3952
Skype : JanitorX
Twitter : @aslamghumra
http://intranet.bham.ac.uk/bear

From L.A.Hurst at bham.ac.uk Fri Apr 29 17:22:48 2016
From: L.A.Hurst at bham.ac.uk (Laurence Alexander Hurst (IT Services))
Date: Fri, 29 Apr 2016 16:22:48 +0000
Subject: [gpfsug-discuss] SMB access speed
In-Reply-To:
References:
Message-ID:

On 29/04/2016 17:07, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Aslam Ghumra (IT Services, Facilities Management)" wrote:

>Many thanks Yaron, after the change to disable encryption we were able to
>increase the speed of copying files via Ubuntu from the local desktop to
>our gpfs filestore, with average speeds of 60Mbps.
>
>We also tried changing the mount from vers=3.0 to vers=2.1, which gave
>similar figures.
>
>However, using the Ubuntu GUI (Unity) the speed drops down to 7Mbps;
>we're not too concerned, though, as the user will use rsync / cp.
>
>The other issue is copying data from the gpfs filestore to the local HDD,
>which resulted in 4Mbps.
>
>Aslam Ghumra
>Research Data Management

I wonder if Unity uses what used to be called the "gnome virtual filesystem" to connect. It may be using its own implementation that is not as well-written a samba/cifs client (whichever they are using) as the implementation used when you mount it "properly" with mount.smb/mount.cifs.

Laurence
--
Laurence Hurst
Research Computing, IT Services, University of Birmingham
w: http://www.birmingham.ac.uk/bear (http://servicedesk.bham.ac.uk/ for support)
e: l.a.hurst at bham.ac.uk

From jonathan at buzzard.me.uk Fri Apr 29 21:05:02 2016
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Fri, 29 Apr 2016 21:05:02 +0100
Subject: [gpfsug-discuss] SMB access speed
In-Reply-To:
References:
Message-ID: <5723BE6E.6000403@buzzard.me.uk>

On 29/04/16 17:22, Laurence Alexander Hurst (IT Services) wrote:

[SNIP]

> I wonder if Unity uses what used to be called the "gnome virtual
> filesystem" to connect. It may be using its own implementation that is
> not as well-written a samba/cifs client (whichever they are using) as
> the implementation used when you mount it "properly" with
> mount.smb/mount.cifs.

Probably. As I said previously, these desktop VFS CIFS clients are significantly slower than the kernel client. It's worth remembering that a few years back the Linux kernel CIFS client was extensively optimized for speed, and was at one point at least giving better performance than the NFS client.

JAB.

--
Jonathan A. Buzzard
Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.
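(As a rough sketch of the kind of client-side test being described in this thread, with server name, share, user and paths all placeholders rather than the actual setup: the kernel CIFS client lets you pin the SMB dialect with vers=, and timing the copy with a command-line tool avoids whatever client Unity/GVFS happens to be using:)

   # Mount the export with the kernel CIFS client, pinning the SMB dialect
   sudo mount -t cifs //ces.example.ac.uk/research /mnt/research \
       -o vers=3.0,username=testuser,uid=$(id -u),gid=$(id -g)

   # Repeat with an older dialect for comparison
   sudo umount /mnt/research
   sudo mount -t cifs //ces.example.ac.uk/research /mnt/research -o vers=2.1,username=testuser

   # Time the transfer with a plain tool rather than the desktop GUI
   time rsync -a --progress ~/testdata/ /mnt/research/testdata/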
From p.childs at qmul.ac.uk Fri Apr 29 21:58:53 2016
From: p.childs at qmul.ac.uk (Peter Childs)
Date: Fri, 29 Apr 2016 20:58:53 +0000
Subject: [gpfsug-discuss] Dell Multipath
In-Reply-To: <571E82FA.2000008@genome.wustl.edu>
References: <309ECED7-0A51-414F-A5FA-4710270FB347@load.se>, <571E82FA.2000008@genome.wustl.edu>
Message-ID:

From my experience using a Dell MD3460 with zfs (not gpfs): I've not tried it with gpfs, but it looks very similar to the IBM DCS3700 we run gpfs on.

To get multipath to work correctly, we had to install the storage manager software from the CD that can be downloaded from Dell's website, which made a few modifications to multipath.conf. Broadly speaking, the blacklist comments others have made are correct.

You also need to enable and start multipathd (chkconfig multipathd on).

Peter Childs
ITS Research and Teaching Support
Queen Mary, University of London

---- Matt Weil wrote ----

enable:
  mpathconf --enable --with_multipathd y
show config:
  multipathd show config

On 4/25/16 3:27 PM, Jan Finnerman Load wrote:

Hi,

I realize this might not be strictly GPFS related but I'm getting a little desperate here...
I'm doing an implementation of GPFS/Spectrum Scale 4.2 at a customer and am struggling with a question about disk multipathing for the intended NSD disks on their direct-attached SAS disk systems.

If I do a multipath -ll, after a few seconds I just get the prompt back. I expected to see the usual large amount of path info, but there is nothing there.
If I do a multipathd -k and then a show config, I see all the Dell disk LUNs with reasonably correct parameters. I can see them as /dev/sdf, /dev/sdg, etc. devices. I can also add them in PowerKVM's Kimchi web interface and even deploy a GPFS installation on it.

The big question is, though, how do I get multipathing to work? Do I need any special driver or setting in the multipath.conf file? I found some information on that, but rather generic, e.g. for RedHat 6, and now we are in PowerKVM country.

The platform consists of:
4x IBM S812L servers
SAS controller
PowerKVM 3.1
Red Hat 7.1
2x Dell MD3460 SAS disk systems
No switches

Jan
///Jan

Jan Finnerman
Senior Technical consultant
Kista Science Tower
164 51 Kista
Mobil: +46 (0)70 631 66 26
Kontor: +46 (0)8 633 66 00/26
jan.finnerman at load.se

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
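(For reference, a sketch of the pieces discussed in this thread on a RHEL 7 / PowerKVM host. The MD34xx device stanza below shows typical RDAC-style settings only and is an assumption; the exact vendor/product strings and values should come from the Dell storage manager CD or from "multipathd show config", not from this example:)

   # Enable multipathing and start the daemon
   mpathconf --enable --with_multipathd y
   systemctl enable --now multipathd    # or, on older init systems: chkconfig multipathd on; service multipathd start

   # Illustrative device stanza for a Dell MD34xx SAS array (values are assumptions, verify locally)
   cat >> /etc/multipath.conf <<'EOF'
   devices {
       device {
           vendor "DELL"
           product "MD34xx"
           path_grouping_policy group_by_prio
           prio rdac
           path_checker rdac
           hardware_handler "1 rdac"
           failback immediate
           no_path_retry 30
       }
   }
   EOF

   systemctl restart multipathd
   multipath -ll    # should now list one mpath device per LUN with its active/ghost path groups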
From YARD at il.ibm.com Sat Apr 30 06:17:28 2016
From: YARD at il.ibm.com (Yaron Daniel)
Date: Sat, 30 Apr 2016 08:17:28 +0300
Subject: [gpfsug-discuss] SMB access speed
In-Reply-To:
References:
Message-ID: <201604300517.u3U5HcbY022432@d06av12.portsmouth.uk.ibm.com>

Hi,

It could be that the GUI uses a default command in the background which uses SMB v1.

Regarding copying files from GPFS to the local HDD, it might be related to the local HDD settings. What is the transfer speed of the local HDD? Cache settings and so on..

Regards

Yaron Daniel
94 Em Ha'Moshavot Rd
Server, Storage and Data Services - Team Leader
Petach Tiqva, 49527
Global Technology Services
Israel
Phone: +972-3-916-5672
Fax: +972-3-916-5672
Mobile: +972-52-8395593
e-mail: yard at il.ibm.com
IBM Israel

From: "Aslam Ghumra (IT Services, Facilities Management)"
To: "gpfsug-discuss at spectrumscale.org"
Date: 04/29/2016 07:07 PM
Subject: [gpfsug-discuss] SMB access speed
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Many thanks Yaron, after the change to disable encryption we were able to increase the speed of copying files via Ubuntu from the local desktop to our gpfs filestore, with average speeds of 60Mbps.

We also tried changing the mount from vers=3.0 to vers=2.1, which gave similar figures.

However, using the Ubuntu GUI (Unity) the speed drops down to 7Mbps; we're not too concerned, though, as the user will use rsync / cp.

The other issue is copying data from the gpfs filestore to the local HDD, which resulted in 4Mbps.

Aslam Ghumra
Research Data Management
____________________________
IT Services
Elms Road Data Centre
Building G5
Edgbaston
Birmingham B15 2TT
T: 0121 414 5877
F: 0121 414 3952
Skype : JanitorX
Twitter : @aslamghumra
http://intranet.bham.ac.uk/bear

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
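(Two quick checks that follow from the suggestions above, sketched with placeholder paths and not tied to the actual Birmingham setup: the negotiated SMB dialect can be read on the protocol node, and a raw dd test separates local-disk speed from the SMB path:)

   # On the CES/protocol node: show client connections; recent Samba releases
   # include the negotiated protocol version in the output
   smbstatus

   # On the Ubuntu client: raw local HDD write speed, bypassing the page cache
   dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
   rm /tmp/ddtest

   # Raw read speed from the local disk (file path is a placeholder)
   sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
   dd if=/home/user/largefile of=/dev/null bs=1M

If the dd figures are healthy, the slow GPFS-to-local-HDD copy is more likely on the SMB client side (GVFS vs. kernel mount, dialect, signing/encryption) than on the local disk itself.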