From pavel.pokorny at datera.cz Thu Apr 9 22:00:07 2015
From: pavel.pokorny at datera.cz (Pavel Pokorny)
Date: Thu, 9 Apr 2015 23:00:07 +0200
Subject: [gpfsug-discuss] Share GPFS from Windows?
Message-ID:

Hello to all,

Is there any technical reason why it is not supported to export GPFS from Windows nodes using CIFS? As stated in the GPFS FAQ ( http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html ):

*Exporting GPFS file systems as Server Message Block (SMB) shares (also known as CIFS shares) from GPFS Windows nodes is not supported.*

Or is this limitation more a matter of licensing and the business point of view?

Thank you for your answers,
Pavel
--
Ing. Pavel Pokorný
DATERA s.r.o. | Ovocný trh 580/2 | Praha | Czech Republic
www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz

From BOLIK at de.ibm.com Fri Apr 10 12:24:14 2015
From: BOLIK at de.ibm.com (Christian Bolik)
Date: Fri, 10 Apr 2015 13:24:14 +0200
Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
Message-ID:

Just wanted to let you know that recently GPFS support has been added to TPC, which is IBM's Tivoli Storage Productivity Center (soon to be renamed to IBM Spectrum Control). As of now, TPC allows GPFS administrators to get answers to the following questions, across any number of GPFS clusters which have been added to TPC:

- Which of my clusters are running out of free space?
- Which of my clusters or nodes have a health problem?
- Which file systems and pools are running out of capacity?
- Which file systems are mounted on which nodes?
- How much space is occupied by snapshots? Are there any very old, potentially obsolete ones?
- Which quotas are close to being exceeded or have already been exceeded?
- Which filesets are close to running out of free inodes?
- Which NSDs are at risk of becoming unavailable, or are unavailable?
- Are the volumes backing my NSDs performing OK?
- Are all nodes fulfilling critical roles in the cluster up and running?
- How can I be notified when nodes go offline or file systems fill up beyond a threshold?

There's a short 6-minute video available on YouTube which shows how TPC helps answer these questions:
https://www.youtube.com/watch?v=8Esk5U_cYw8&feature=youtu.be

For more information about TPC, please check out the product wiki on developerWorks: http://ibm.co/1adWNFK

Thanks,
Christian Bolik
IBM Storage Software Development

From zgiles at gmail.com Fri Apr 10 16:27:36 2015
From: zgiles at gmail.com (Zachary Giles)
Date: Fri, 10 Apr 2015 11:27:36 -0400
Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Christian:
Interesting, and thanks for the latest news.

May I ask: Is there an intent moving forward that TPC and/or other Tivoli products will be a required part of GPFS?
The concern I have is that GPFS is pretty straightforward at the moment and has very logical requirements to operate (min servers, quorum, etc), whereas there are many IBM products that require two or three more servers just to manage the servers managing the service.. too much. It would be nice to make sure, going forward, that the core of GPFS can still function without additional web servers, Java, a suite of middleware, and a handful of DB2 instances .. :)

-Zach

On Fri, Apr 10, 2015 at 7:24 AM, Christian Bolik wrote:
>
> Just wanted to let you know that recently GPFS support has been added to
> TPC, which is IBM's Tivoli Storage Productivity Center (soon to be renamed
> to IBM Spectrum Control). As of now, TPC allows GPFS administrators to get
> answers to the following questions, across any number of GPFS clusters
> which have been added to TPC:
>
> - Which of my clusters are running out of free space?
> - Which of my clusters or nodes have a health problem? > - Which file systems and pools are running out of capacity? > - Which file systems are mounted on which nodes? > - How much space is occupied by snapshots? Are there any very old, > potentially obsolete ones? > - Which quotas are close to being exceeded or have already been exceeded? > - Which filesets are close to running out of free inodes? > - Which NSDs are at risk of becoming unavailable, or are unavailable? > - Are the volumes backing my NSDs performing OK? > - Are all nodes fulfilling critical roles in the cluster up and running? > - How can I be notified when nodes go offline or file systems fill up > beyond a threshold? > > There's a short 6-minute video available on YouTube which shows how TPC > helps answering these questions: > https://www.youtube.com/watch?v=8Esk5U_cYw8&feature=youtu.be > > For more information about TPC, please check out the product wiki on > developerWorks: http://ibm.co/1adWNFK > > Thanks, > Christian Bolik > IBM Storage Software Development > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhildeb at us.ibm.com Fri Apr 10 18:45:14 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 10 Apr 2015 10:45:14 -0700 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Hi Zach, The summary is that GPFS is being integrated much more across the portfolio... With GPFS itself, there is a video below demonstrating the ESS/GSS GUI and monitoring feature that is in the product today. Moving forward, as you can probably see, there is a push in IBM to move GPFS to software-defined, which includes features such as the GUI as well... 
https://www.youtube.com/watch?v=Mv9Sn-VYoGU Dean From: Zachary Giles To: gpfsug main discussion list Date: 04/10/2015 08:27 AM Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Sent by: gpfsug-discuss-bounces at gpfsug.org Christian: Interesting and thanks for the latest news. May I ask: Is there an intent moving forward that TPC and / or other Tivoli products will be a required part of GPFS? The concern I have is that GPFS is pretty straightforward at the moment and has very logical requirements to operate (min servers, quorum, etc), whereas there are many IBM products that require two or three more servers just to manage the servers managing the service.. too much. It would be nice to make sure, going forward, that the core of GPFS can still function without additional web servers, Java, a suite of middleware, and a handful of DB2 instance .. :) -Zach On Fri, Apr 10, 2015 at 7:24 AM, Christian Bolik wrote: Just wanted to let you know that recently GPFS support has been added to TPC, which is IBM's Tivoli Storage Productivity Center (soon to be renamed to IBM Spectrum Control). As of now, TPC allows GPFS administrators to get answers to the following questions, across any number of GPFS clusters which have been added to TPC: - Which of my clusters are running out of free space? - Which of my clusters or nodes have a health problem? - Which file systems and pools are running out of capacity? - Which file systems are mounted on which nodes? - How much space is occupied by snapshots? Are there any very old, potentially obsolete ones? - Which quotas are close to being exceeded or have already been exceeded? - Which filesets are close to running out of free inodes? - Which NSDs are at risk of becoming unavailable, or are unavailable? - Are the volumes backing my NSDs performing OK? - Are all nodes fulfilling critical roles in the cluster up and running? 
- How can I be notified when nodes go offline or file systems fill up beyond a threshold? There's a short 6-minute video available on YouTube which shows how TPC helps answering these questions: https://www.youtube.com/watch?v=8Esk5U_cYw8&feature=youtu.be For more information about TPC, please check out the product wiki on developerWorks: http://ibm.co/1adWNFK Thanks, Christian Bolik IBM Storage Software Development _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Zach Giles zgiles at gmail.com_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From BOLIK at de.ibm.com Mon Apr 13 14:11:01 2015 From: BOLIK at de.ibm.com (Christian Bolik) Date: Mon, 13 Apr 2015 15:11:01 +0200 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Message-ID: Hi Zach, I'm not aware of any intent to make TPC or any other Tivoli/IBM product a prereq for GPFS, and I don't think any such plans exist. Rather, as Dean also pointed out, we're investing work to improve integration of GPFS/Spectrum Scale into other products being members of the newly announced IBM Spectrum Storage family, with the goal of improving manageability of the individual components (rather than worsening it...). Cheers, Christian > Christian: > Interesting and thanks for the latest news. > > May I ask: Is there an intent moving forward that TPC and / or other Tivoli > products will be a required part of GPFS? 
> The concern I have is that GPFS is pretty straightforward at the moment and
> has very logical requirements to operate (min servers, quorum, etc),
> whereas there are many IBM products that require two or three more servers
> just to manage the servers managing the service.. too much. It would be
> nice to make sure, going forward, that the core of GPFS can still function
> without additional web servers, Java, a suite of middleware, and a handful
> of DB2 instances .. :)
>
> -Zach

Christian Bolik
Software Defined Storage Development

IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 Mainz, Germany
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From zgiles at gmail.com Mon Apr 13 17:05:11 2015
From: zgiles at gmail.com (Zachary Giles)
Date: Mon, 13 Apr 2015 12:05:11 -0400
Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface.

My fear with the idea of the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and another fileset called "user files", but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So, I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable non-point-and-click way, if possible.
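To make the point concrete, a scriptable check of that kind can stay a few lines of shell. A minimal sketch (the helper name, the threshold, and the two-column input format are made up for illustration; in practice the input would come from parsing the output of commands such as mmdf or mmrepquota, whose exact formats vary by release):

```shell
# check_full: read "name percent-used" pairs on stdin and report any
# that are at or past a limit. Exits non-zero when something crossed
# the threshold, so it can drive a cron job or an alerting hook.
check_full() {
    awk -v limit="$1" '
        $2 + 0 >= limit { printf "%s is %s%% full\n", $1, $2; bad = 1 }
        END             { exit bad }'
}

# Hypothetical usage (input format is an assumption, not real mmdf output):
# some_capacity_report | check_full 90
```

Something like this runs from cron and feeds whatever alerting is already in place; no GUI or middleware required.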
On Mon, Apr 13, 2015 at 9:11 AM, Christian Bolik wrote: > > Hi Zach, > > I'm not aware of any intent to make TPC or any other Tivoli/IBM product a > prereq for GPFS, and I don't think any such plans exist. Rather, as Dean > also pointed out, we're investing work to improve integration of > GPFS/Spectrum Scale into other products being members of the newly > announced IBM Spectrum Storage family, with the goal of improving > manageability of the individual components (rather than worsening it...). > > Cheers, > Christian > > > Christian: > > Interesting and thanks for the latest news. > > > > May I ask: Is there an intent moving forward that TPC and / or other > Tivoli > > products will be a required part of GPFS? > > The concern I have is that GPFS is pretty straightforward at the moment > and > > has very logical requirements to operate (min servers, quorum, etc), > > whereas there are many IBM products that require two or three more > servers > > just to manage the servers managing the service.. too much. It would be > > nice to make sure, going forward, that the core of GPFS can still > function > > without additional web servers, Java, a suite of middleware, and a > handful > > of DB2 instance .. :) > > > > -Zach > > Christian Bolik > > Software Defined Storage Development > > IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 > Mainz, Germany > Vorsitzende des Aufsichtsrats: Martina Koederitz > Gesch?ftsf?hrung: Dirk Wittkopp > Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, > HRB 243294 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From jamiedavis at us.ibm.com Tue Apr 14 13:23:59 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Tue, 14 Apr 2015 08:23:59 -0400
Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Zach,

Not sure if I'm formatting this message properly (need to switch off of digest mode), but I know of no plans to replace the GPFS command line interface with a GUI. With the GPFS GUI, TPC monitoring, etc., we want to enable a wider variety of users to effectively use and manage GPFS, but the command line will of course remain available for power users, script-writers, etc. I certainly don't intend to throw away any of my nice mm-themed test programs :-)

Jamie Davis
GPFS Test

Date: Mon, 13 Apr 2015 12:05:11 -0400
From: Zachary Giles
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
Message-ID:
Content-Type: text/plain; charset="utf-8"

Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface.

My fear with the idea of the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and another fileset called "user files", but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So, I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable non-point-and-click way, if possible.
Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecgarris at iu.edu Thu Apr 16 21:33:59 2015 From: ecgarris at iu.edu (Garrison, E Chris) Date: Thu, 16 Apr 2015 20:33:59 +0000 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Message-ID: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage -------------- next part -------------- An HTML attachment was scrubbed... URL: From zgiles at gmail.com Thu Apr 16 21:39:51 2015 From: zgiles at gmail.com (Zachary Giles) Date: Thu, 16 Apr 2015 16:39:51 -0400 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Thanks Jamie. I appreciate the input. Glad to hear it. On Tue, Apr 14, 2015 at 8:23 AM, James Davis wrote: > Zach, > > Not sure if I'm formatting this message properly (need to switch off of > digest mode), but I know of no plans to replace the GPFS command line > interface with a GUI. 
With the GPFS GUI, TPC monitoring, etc., we want to > enable a wider variety of users to effectively use and manage GPFS, but the > command line will of course remain available for power users, > script-writers, etc. I certainly don't intend to throw away any of my nice > mm-themed test programs :-) > > Jamie Davis > GPFS Test > > Date: Mon, 13 Apr 2015 12:05:11 -0400 > From: Zachary Giles > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Monitoring capacity and health status > for a multitude of GPFS clusters > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Thanks for your replies. I can definitely appreciate the the goal of > improving management of components, and I agree that if GPFS will be using > within other products (which it is and will continue to be), then it would > be great for those products to be able to manage GPFS via an interface. > > My fear with the idea of the above mentioned as a prereq is that the > "improvement" of management might looks like an improvement when you only > have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and > another fileset called "user files", but if you have hundreds or thousands > of filesets and several tiers of storage, with both GSS and non-GSS systems > in the same cluster, then the GUI may actually be more cumbersome than the > original method. So, I just want to voice an opinion that we should > continue to be able to configure / maintain / monitor GPFS in a programatic > / scriptable non-point-and-click way, if possible. > > > Jamie Davis > GPFS Functional Verification Test (FVT) > jamiedavis at us.ibm.com > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fleers at gmail.com Thu Apr 16 23:21:34 2015 From: fleers at gmail.com (Frank Leers) Date: Thu, 16 Apr 2015 15:21:34 -0700 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: > Hello, > > My site is working up to upgrading our paired GridScaler system from > GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and > that's an advertised feature of 4.1. We already have over 100 TB of data, > synchronously replicated between two geographically separated sites, and we > have concerns about how the upgrade, as well as the application of > encryption to all that data, will go. > > I'd like to hear from admins who've been through this upgrade. What > gotchas should we look out for? Can it easily be done in place, or would we > need some extra equipment to "slosh" our data to and from so that it is > written to an encrypted GPFS? > > Thank you for your time, and for any sage advice on this process. 
>
> Chris
> --
> Chris Garrison
> Indiana University
> Research Systems Storage
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

From daniel.kidger at uk.ibm.com Fri Apr 17 13:15:40 2015
From: daniel.kidger at uk.ibm.com (Daniel Kidger)
Date: Fri, 17 Apr 2015 13:15:40 +0100
Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To:
References:
Message-ID:

Hi Chris,

The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition.

The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied at a per-file level if desired, hence a way to migrate from an existing non-encrypted setup. Note that inodes should be 4 KB to make sure there is enough room to store the encryption attributes.

A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. See http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm

I am sure DDN can help you with all of this.

Hope this helps,
Daniel

Dr. Daniel Kidger
Technical Specialist SDI (formerly Platform Computing)
No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom
Mobile: +44-07818 522 266
Landline: +44-02392 564 121 (Internal ITN 3726 9250)
e-mail: daniel.kidger at uk.ibm.com

From: Frank Leers
To: gpfsug main discussion list
Date: 16/04/2015 23:21
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-)

With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html

With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html

-frank

On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote:

Hello,

My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go.

I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for?
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS?

Thank you for your time, and for any sage advice on this process.

Chris
--
Chris Garrison
Indiana University
Research Systems Storage

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From jamiedavis at us.ibm.com Fri Apr 17 15:11:24 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Fri, 17 Apr 2015 10:11:24 -0400
Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To:
References:
Message-ID:

Hi Chris,

Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create-time. In order to encrypt your existing 100 TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like:

cp file file.enc  # now file.enc is encrypted
mv file.enc file  # replace the unencrypted file with the encrypted one

This can be done in parallel using mmapplypolicy if you want.
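Sketched as plain shell, the per-file step over a whole tree might look like this (a serial illustration only; the helper name and directory are hypothetical, and it assumes a SET ENCRYPTION policy rule is already in effect so that newly created files are encrypted at create time):

```shell
# reencrypt_tree: re-create every regular file under a directory so the
# new copy is written under the file system's encryption policy.
# A sketch only: no handling for files that change while being copied.
reencrypt_tree() {
    find "$1" -type f ! -name '*.enc' | while IFS= read -r f; do
        cp -p "$f" "$f.enc" &&   # the copy is encrypted at create time
        mv "$f.enc" "$f"         # replace the clear-text original
    done
}

# Hypothetical usage:
# reencrypt_tree /gpfs/fs0/projects
```

mmapplypolicy would drive the same copy-and-rename step in parallel across many nodes; the loop above just shows the per-file operation.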
In 4.1.1 (forthcoming) I plan to provide an improved version of the /usr/lpp/mmfs/samples/ilm/mmfind tool (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job.

Cheers,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com

From: Daniel Kidger
To: gpfsug main discussion list
Date: 17-04-15 08:16 AM
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition.

The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied at a per-file level if desired, hence a way to migrate from an existing non-encrypted setup. Note that inodes should be 4 KB to make sure there is enough room to store the encryption attributes.

A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. See http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm

I am sure DDN can help you with all of this.

Hope this helps,
Daniel

Dr. Daniel Kidger
Technical Specialist SDI (formerly Platform Computing)
No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom
Mobile: +44-07818 522 266
Landline: +44-02392 564 121 (Internal ITN 3726 9250)
e-mail: daniel.kidger at uk.ibm.com

From: Frank Leers
To: gpfsug main discussion list
Date: 16/04/2015 23:21
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-)

With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html

With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html

-frank

On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote:

Hello,

My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go.

I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for?
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS?

Thank you for your time, and for any sage advice on this process.

Chris
--
Chris Garrison
Indiana University
Research Systems Storage

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From makaplan at us.ibm.com Fri Apr 17 15:37:55 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Fri, 17 Apr 2015 10:37:55 -0400
Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To:
References:
Message-ID:

An "Encrypt-in-place" feature would not survive a serious, paranoid security audit. If sensitive data ever was written to a disk, then the paranoid would say you can never be 100% sure that you have erased it.
That said, you decide how worried or paranoid you'd like to be, and you can do an almost-in-place encryption as James Davis suggests, by simply copying the unencrypted file to a new file that will be subject to a GPFS encryption policy (SET ENCRYPTION) rule.

The safest way would be to copy-encrypt all the data to a new file system and then crush all the equipment that was used to store the clear-text files. If crushing is too extreme, you might settle for a multipass, sector-by-sector soft scrubbing, writing both carefully chosen and random data patterns. Even then, unless you trust the manufacturer AND the manufacturer has provided you the means to "scrub" all the tracks/sectors of the disk, you won't be sure... Suppose that some sensitive data was written to a sector number that was later declared (partially) defective and remapped by the disk micro-code, and the original sector is put on a bad-sector list that you can no longer address with standard disk driver software... Or it could be on some NVRAM or a disk that a service technician swapped out... Ooops... it went out the door... and is now in the hands of the bad guys...

From: James Davis/Poughkeepsie/IBM at IBMUS
To: gpfsug main discussion list
Date: 04/17/2015 10:12 AM
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create time. In order to encrypt your existing 100 TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like:

cp file file.enc # now file.enc is encrypted
mv file.enc file # replace the unencrypted file with the encrypted one

This can be done in parallel using mmapplypolicy if you want.
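The two commands above can be wrapped in a small helper for scripting. This is a hypothetical sketch: the reencrypt function name is made up, and it is demonstrated here on an ordinary file, since on GPFS the new copy only comes out encrypted if a SET ENCRYPTION policy rule actually matches it at create time.

```shell
# Hypothetical helper for the copy-then-replace pattern described above.
# On GPFS the copy is created encrypted only if a SET ENCRYPTION policy
# rule matches it at create time; plain files are used here as a stand-in.
reencrypt() {
  f="$1"
  cp -p -- "$f" "$f.enc" || return 1  # new file picks up the policy at create time
  mv -- "$f.enc" "$f"                 # replace the clear-text original
}

printf 'sensitive data' > /tmp/gpfs_demo.txt
reencrypt /tmp/gpfs_demo.txt
cat /tmp/gpfs_demo.txt                # -> sensitive data
```

One caveat of this pattern: cp -p preserves mode and timestamps, but the replacement is a new inode, so hard links and extended attributes would need extra handling.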
In 4.1.1 (forthcoming) I plan to provide an improved version of /usr/lpp/mmfs/samples/ilm/mmfind (a find-like interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job.

Cheers,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com

From: Daniel Kidger
To: gpfsug main discussion list
Date: 17-04-15 08:16 AM
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition.

The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied on a per-file level if desired, hence a way to migrate from an existing non-encrypted setup. Note that inodes should be 4 KB to make sure there is enough room to store the encryption attributes.

A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. See http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm

I am sure DDN can help you with all of this.

Hope this helps,
Daniel

Dr. Daniel Kidger, Technical Specialist, SDI (formerly Platform Computing)
No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom
Mobile: +44-07818 522 266 Landline: +44-02392 564 121 (Internal ITN 3726 9250)
e-mail: daniel.kidger at uk.ibm.com

From: Frank Leers
To: gpfsug main discussion list
Date: 16/04/2015 23:21
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt, although it mostly still applies ;-)

With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html

With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html

-frank

On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote:

Hello,

My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go.

I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for?
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS?

Thank you for your time, and for any sage advice on this process.

Chris
--
Chris Garrison
Indiana University
Research Systems Storage

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From secretary at gpfsug.org Fri Apr 24 15:05:25 2015
From: secretary at gpfsug.org (Secretary GPFS UG)
Date: Fri, 24 Apr 2015 15:05:25 +0100
Subject: [gpfsug-discuss] GPFS User Group Meeting Agenda
Message-ID:

Dear Members,

The agenda for the next GPFS User Group Meeting is now available:

10:00 Arrival, for a 10:30 start
10:30 Introductions - Jez Tucker, Group Chair & Claire Robson, Group Secretary
10:35 Keynote - Doris Conti
10:50 4.1.1 Roadmap / High-level futures - Scott Fadden
11:40 Failure Events, Recovery & Problem determination - Scott Fadden
12:00 Monitoring IBM Spectrum Scale using IBM Spectrum Control (VSC/TPC) - Christian Bolik
12:30 Lunch
13:00 User Experience from University of Birmingham & CLIMB - Simon Thompson
13:20 User Experience from NERSC - Jason Hick
13:40 AFM & Async DR Use Cases - Shankar Balasubramanian
14:25 mmbackup + TSM Integration - Stefan Bender
15:10 Break
15:25 IBM Spectrum Scale (formerly GPFS) performance update, protocol performance and sizing - Sven Oehme
16:50 Closing summary - Jez Tucker, Group Chair & Claire Robson, Group Secretary
17:00 Ends

All attendees are invited to attend:
19:00 Buffet and drinks at York National Railway Museum

There are still places available for the event. If you have not registered and would like to attend, please email me, secretary at gpfsug.org, with your name, job title, organisation, telephone and any dietary requirements. We hope to see you in May!
Kind regards,
Claire
--
Claire Robson
GPFS User Group Secretary

From Dan.Foster at bristol.ac.uk Wed Apr 29 08:51:03 2015
From: Dan.Foster at bristol.ac.uk (Dan Foster)
Date: Wed, 29 Apr 2015 08:51:03 +0100
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
Message-ID:

Hi All,

I'm trying to determine which versions of the GPFS file system format are compatible with certain versions of the GPFS software. Specifically, in this instance I'm interested in whether file system format v11.05 (from GPFS 3.3.0.2) is compatible with GPFS 3.5. The "File system format changes between versions of GPFS" [1] chapter mentions format levels as old as v6, which implies that they are compatible. But it would be useful to know for certain.

Thanks, Dan.

[1] http://www-01.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs100.doc/bl1adm_fsmigissues.htm?lang=en

--
Dan Foster | Senior Storage Systems Administrator | IT Services
e: dan.foster at bristol.ac.uk | t: 0117 3941170 [x41170]
m: Advanced Computing Research Centre, University of Bristol, 8-10 Berkeley Square, Bristol BS8 1HH

From jamiedavis at us.ibm.com Wed Apr 29 13:43:08 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Wed, 29 Apr 2015 08:43:08 -0400
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
In-Reply-To: References: Message-ID:

Dan,

A "new" GPFS should be able to mount and interact with file systems with "old" versions. Specifically, I do not believe you will have trouble getting GPFS 3.5 to talk to an FS created on GPFS 3.3.0.2. The "Concepts, planning, and install guide" provides information about migrating to GPFS 4.1 from GPFS 3.2 or earlier; this implies that file systems created at GPFS 3.2 are supported on GPFS 4.1, which is even more compatibility than you are asking about. The section to which I refer is titled "Migrating to GPFS 4.1 from GPFS 3.2 or earlier releases of GPFS".
Note that without running mmchfs -V {full|compat} you will not have access to some of the newer GPFS features. See the section titled "Completing the migration to a new level of GPFS", also in the concepts guide.

Hope this helps,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com

From: Dan Foster
To: gpfsug main discussion list
Date: 29-04-15 03:51 AM
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi All,

I'm trying to determine which versions of the GPFS file system format are compatible with certain versions of the GPFS software. Specifically, in this instance I'm interested in whether file system format v11.05 (from GPFS 3.3.0.2) is compatible with GPFS 3.5. The "File system format changes between versions of GPFS" [1] chapter mentions format levels as old as v6, which implies that they are compatible. But it would be useful to know for certain.

Thanks, Dan.

[1] http://www-01.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs100.doc/bl1adm_fsmigissues.htm?lang=en

--
Dan Foster | Senior Storage Systems Administrator | IT Services
e: dan.foster at bristol.ac.uk | t: 0117 3941170 [x41170]
m: Advanced Computing Research Centre, University of Bristol, 8-10 Berkeley Square, Bristol BS8 1HH

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
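A hedged sketch of the commands involved in checking and raising the format version (the device name gpfs0 is a placeholder; mmchfs -V is not reversible, so check the migration section for your release before running it):

```
mmlsfs gpfs0 -V          # show the current file system format version
mmchfs gpfs0 -V compat   # enable only backward-compatible format changes
mmchfs gpfs0 -V full     # enable all new features; older releases can no longer mount
```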
From dhildeb at us.ibm.com Fri Apr 10 18:45:14 2015
From: dhildeb at us.ibm.com (Dean Hildebrand)
Date: Fri, 10 Apr 2015 10:45:14 -0700
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To: References: Message-ID:

Hi Zach,

The summary is that GPFS is being integrated much more across the portfolio... With GPFS itself, there is a video below demonstrating the ESS/GSS GUI and monitoring feature that is in the product today. Moving forward, as you can probably see, there is a push in IBM to move GPFS to software-defined, which includes features such as the GUI as well...
https://www.youtube.com/watch?v=Mv9Sn-VYoGU Dean From: Zachary Giles To: gpfsug main discussion list Date: 04/10/2015 08:27 AM Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Sent by: gpfsug-discuss-bounces at gpfsug.org Christian: Interesting and thanks for the latest news. May I ask: Is there an intent moving forward that TPC and / or other Tivoli products will be a required part of GPFS? The concern I have is that GPFS is pretty straightforward at the moment and has very logical requirements to operate (min servers, quorum, etc), whereas there are many IBM products that require two or three more servers just to manage the servers managing the service.. too much. It would be nice to make sure, going forward, that the core of GPFS can still function without additional web servers, Java, a suite of middleware, and a handful of DB2 instance .. :) -Zach On Fri, Apr 10, 2015 at 7:24 AM, Christian Bolik wrote: Just wanted to let you know that recently GPFS support has been added to TPC, which is IBM's Tivoli Storage Productivity Center (soon to be renamed to IBM Spectrum Control). As of now, TPC allows GPFS administrators to get answers to the following questions, across any number of GPFS clusters which have been added to TPC: - Which of my clusters are running out of free space? - Which of my clusters or nodes have a health problem? - Which file systems and pools are running out of capacity? - Which file systems are mounted on which nodes? - How much space is occupied by snapshots? Are there any very old, potentially obsolete ones? - Which quotas are close to being exceeded or have already been exceeded? - Which filesets are close to running out of free inodes? - Which NSDs are at risk of becoming unavailable, or are unavailable? - Are the volumes backing my NSDs performing OK? - Are all nodes fulfilling critical roles in the cluster up and running? 
- How can I be notified when nodes go offline or file systems fill up beyond a threshold? There's a short 6-minute video available on YouTube which shows how TPC helps answering these questions: https://www.youtube.com/watch?v=8Esk5U_cYw8&feature=youtu.be For more information about TPC, please check out the product wiki on developerWorks: http://ibm.co/1adWNFK Thanks, Christian Bolik IBM Storage Software Development _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Zach Giles zgiles at gmail.com_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From BOLIK at de.ibm.com Mon Apr 13 14:11:01 2015 From: BOLIK at de.ibm.com (Christian Bolik) Date: Mon, 13 Apr 2015 15:11:01 +0200 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Message-ID: Hi Zach, I'm not aware of any intent to make TPC or any other Tivoli/IBM product a prereq for GPFS, and I don't think any such plans exist. Rather, as Dean also pointed out, we're investing work to improve integration of GPFS/Spectrum Scale into other products being members of the newly announced IBM Spectrum Storage family, with the goal of improving manageability of the individual components (rather than worsening it...). Cheers, Christian > Christian: > Interesting and thanks for the latest news. > > May I ask: Is there an intent moving forward that TPC and / or other Tivoli > products will be a required part of GPFS? 
> The concern I have is that GPFS is pretty straightforward at the moment and
> has very logical requirements to operate (min servers, quorum, etc),
> whereas there are many IBM products that require two or three more servers
> just to manage the servers managing the service.. too much. It would be
> nice to make sure, going forward, that the core of GPFS can still function
> without additional web servers, Java, a suite of middleware, and a handful
> of DB2 instances .. :)
>
> -Zach

Christian Bolik
Software Defined Storage Development
IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 Mainz, Germany
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From zgiles at gmail.com Mon Apr 13 17:05:11 2015
From: zgiles at gmail.com (Zachary Giles)
Date: Mon, 13 Apr 2015 12:05:11 -0400
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To: References: Message-ID:

Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface.

My fear with the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and another fileset called "user files", but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So, I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable, non-point-and-click way, if possible.
On Mon, Apr 13, 2015 at 9:11 AM, Christian Bolik wrote: > > Hi Zach, > > I'm not aware of any intent to make TPC or any other Tivoli/IBM product a > prereq for GPFS, and I don't think any such plans exist. Rather, as Dean > also pointed out, we're investing work to improve integration of > GPFS/Spectrum Scale into other products being members of the newly > announced IBM Spectrum Storage family, with the goal of improving > manageability of the individual components (rather than worsening it...). > > Cheers, > Christian > > > Christian: > > Interesting and thanks for the latest news. > > > > May I ask: Is there an intent moving forward that TPC and / or other > Tivoli > > products will be a required part of GPFS? > > The concern I have is that GPFS is pretty straightforward at the moment > and > > has very logical requirements to operate (min servers, quorum, etc), > > whereas there are many IBM products that require two or three more > servers > > just to manage the servers managing the service.. too much. It would be > > nice to make sure, going forward, that the core of GPFS can still > function > > without additional web servers, Java, a suite of middleware, and a > handful > > of DB2 instance .. :) > > > > -Zach > > Christian Bolik > > Software Defined Storage Development > > IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 > Mainz, Germany > Vorsitzende des Aufsichtsrats: Martina Koederitz > Gesch?ftsf?hrung: Dirk Wittkopp > Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, > HRB 243294 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From jamiedavis at us.ibm.com Tue Apr 14 13:23:59 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Tue, 14 Apr 2015 08:23:59 -0400
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To: References: Message-ID:

Zach,

Not sure if I'm formatting this message properly (I need to switch off digest mode), but I know of no plans to replace the GPFS command line interface with a GUI. With the GPFS GUI, TPC monitoring, etc., we want to enable a wider variety of users to effectively use and manage GPFS, but the command line will of course remain available for power users, script-writers, etc. I certainly don't intend to throw away any of my nice mm-themed test programs :-)

Jamie Davis
GPFS Test

Date: Mon, 13 Apr 2015 12:05:11 -0400
From: Zachary Giles
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
Message-ID:
Content-Type: text/plain; charset="utf-8"

Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface.

My fear with the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and another fileset called "user files", but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So, I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable, non-point-and-click way, if possible.
Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecgarris at iu.edu Thu Apr 16 21:33:59 2015 From: ecgarris at iu.edu (Garrison, E Chris) Date: Thu, 16 Apr 2015 20:33:59 +0000 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Message-ID: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage -------------- next part -------------- An HTML attachment was scrubbed... URL: From zgiles at gmail.com Thu Apr 16 21:39:51 2015 From: zgiles at gmail.com (Zachary Giles) Date: Thu, 16 Apr 2015 16:39:51 -0400 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Thanks Jamie. I appreciate the input. Glad to hear it. On Tue, Apr 14, 2015 at 8:23 AM, James Davis wrote: > Zach, > > Not sure if I'm formatting this message properly (need to switch off of > digest mode), but I know of no plans to replace the GPFS command line > interface with a GUI. 
With the GPFS GUI, TPC monitoring, etc., we want to
> enable a wider variety of users to effectively use and manage GPFS, but the
> command line will of course remain available for power users,
> script-writers, etc. I certainly don't intend to throw away any of my nice
> mm-themed test programs :-)
>
> Jamie Davis
> GPFS Test
>
> Date: Mon, 13 Apr 2015 12:05:11 -0400
> From: Zachary Giles
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] Monitoring capacity and health status
> for a multitude of GPFS clusters
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> Thanks for your replies. I can definitely appreciate the goal of
> improving management of components, and I agree that if GPFS will be used
> within other products (which it is and will continue to be), then it would
> be great for those products to be able to manage GPFS via an interface.
>
> My fear with the above mentioned as a prereq is that the
> "improvement" of management might look like an improvement when you only
> have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and
> another fileset called "user files", but if you have hundreds or thousands
> of filesets and several tiers of storage, with both GSS and non-GSS systems
> in the same cluster, then the GUI may actually be more cumbersome than the
> original method. So, I just want to voice an opinion that we should
> continue to be able to configure / maintain / monitor GPFS in a programmatic
> / scriptable, non-point-and-click way, if possible.
>
> Jamie Davis
> GPFS Functional Verification Test (FVT)
> jamiedavis at us.ibm.com
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

--
Zach Giles
zgiles at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From fleers at gmail.com Thu Apr 16 23:21:34 2015 From: fleers at gmail.com (Frank Leers) Date: Thu, 16 Apr 2015 15:21:34 -0700 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: > Hello, > > My site is working up to upgrading our paired GridScaler system from > GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and > that's an advertised feature of 4.1. We already have over 100 TB of data, > synchronously replicated between two geographically separated sites, and we > have concerns about how the upgrade, as well as the application of > encryption to all that data, will go. > > I'd like to hear from admins who've been through this upgrade. What > gotchas should we look out for? Can it easily be done in place, or would we > need some extra equipment to "slosh" our data to and from so that it is > written to an encrypted GPFS? > > Thank you for your time, and for any sage advice on this process. 
> > Chris > -- > Chris Garrison > Indiana University > Research Systems Storage > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From daniel.kidger at uk.ibm.com Fri Apr 17 13:15:40 2015 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 17 Apr 2015 13:15:40 +0100 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition. The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied on a per-file level if desired, which gives you a way to migrate from an existing non-encrypted setup. Note that inodes should be 4 KB to make sure there is enough room to store the encryption attributes. A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software; see http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm I am sure DDN can help you with all of this. Hope this helps, Daniel
Dr. Daniel Kidger, Technical Specialist SDI (formerly Platform Computing), No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom. Mobile: +44-07818 522 266. Landline: +44-02392 564 121 (Internal ITN 3726 9250). e-mail: daniel.kidger at uk.ibm.com
From: Frank Leers To: gpfsug main discussion list Date: 16/04/2015 23:21 Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for?
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL:
From jamiedavis at us.ibm.com Fri Apr 17 15:11:24 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Fri, 17 Apr 2015 10:11:24 -0400 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create-time. In order to encrypt your existing 100TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like: cp file file.enc #now file.enc is encrypted mv file.enc file #replace the unencrypted file with the encrypted file This can be done in parallel using mmapplypolicy if you want.
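The copy-and-replace mechanics can be sketched for a single file. This is an illustrative sketch only, and the path is a placeholder: on a real system the target file system would already have an encryption policy in effect, so the new copy is written encrypted; outside GPFS these commands only demonstrate the replacement step itself.

```shell
#!/bin/sh
# Illustrative sketch only: the cp+mv re-encryption step for one file.
# On GPFS with an encryption policy applied, the copy gets a new inode
# and is therefore written encrypted; the mv then replaces the
# clear-text original. The path below is a placeholder, not from the thread.
set -e
f=/tmp/gpfs_reencrypt_demo
printf 'existing clear-text data\n' > "$f"

cp -p "$f" "$f.enc"   # new file, so the encryption policy applies to it
mv "$f.enc" "$f"      # swap the encrypted copy into place

cat "$f"              # contents are unchanged from the user's point of view
```

Driven by mmapplypolicy, the same two commands can be applied to many files in parallel.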
In 4.1.1 (forthcoming) I plan to provide an improved version of the /usr/lpp/mmfs/samples/ilm/mmfind (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job. Cheers, Jamie Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com From: Daniel Kidger To: gpfsug main discussion list Date: 17-04-15 08:16 AM Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition. The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. A result is end-to-end encryption as well as encryption of data at rest, Encryption can be on a per file level if desired. Hence a way to migrate from an existing non-encrypted. setup. note inodes should be 4kB to make sure there is enough room to store the encryption attributes. A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. see http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm I am sure DDN can help you with all of this. Hope this helps, Daniel Dr.Daniel No. 
1 The Square, Kidger Technical Temple Quay, Specialist Bristol BS1 6DG SDI (formerly Platform Computing) Mobile: +44-07818 522 266 United Kingdom Landline: +44-02392 564 121 (Internal ITN 3726 9250) e-mail: daniel.kidger at uk.ibm.com From: Frank Leers To: gpfsug main discussion list Date: 16/04/2015 23:21 Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? 
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 20386063.gif Type: image/gif Size: 360 bytes Desc: not available URL: From makaplan at us.ibm.com Fri Apr 17 15:37:55 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 17 Apr 2015 10:37:55 -0400 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: An "Encrypt-in-place" feature would not survive a serious, paranoid security audit. If sensitive data ever was written to a disk, then the paranoid would say you can never be 100% sure that you have erased it. 
That said, you decide how worried or paranoid you'd like to be and you can do an almost in place encryption as James Davis suggests by simply copying the unencrypted file to a new file that will be subject to a GPFS encryption policy (SET ENCRYPTION) rule. The safest way would be to copy-encrypt all the data to a new file system and then crush all the equipment that was used to store the clear-text files. If crushing is too extreme, you might settle for a multipass sector by sector soft scrubbing by writing both carefully chosen and random data patterns. Even then, unless you trust the manufacturer AND the manufacturer has provided you the means to "scrub" all the tracks/sectors of the disk you won't be sure... Suppose that some sensitive data was written to a sector number that was later declared (partially) defective and remapped by the disk micro-code, and the original sector is put in a bad sector list that you can no longer address with standard disk driver software... Or it could be on some NVRAM or a disk that a service technician swapped out... Ooops... it went out the door... and is now in the hands of the bad guys... From: James Davis/Poughkeepsie/IBM at IBMUS To: gpfsug main discussion list Date: 04/17/2015 10:12 AM Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create-time. In order to encrypt your existing 100TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like: cp file file.enc #now file.new is encrypted mv file.enc file #replace the unencrypted file with the encrypted file This can be done in parallel using mmapplypolicy if you want. 
In 4.1.1 (forthcoming) I plan to provide an improved version of the /usr/lpp/mmfs/samples/ilm/mmfind (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job. Cheers, Jamie Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com Daniel Kidger ---17-04-2015 08:16:54 AM---Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra From: Daniel Kidger To: gpfsug main discussion list Date: 17-04-15 08:16 AM Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition. The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. A result is end-to-end encryption as well as encryption of data at rest, Encryption can be on a per file level if desired. Hence a way to migrate from an existing non-encrypted. setup. note inodes should be 4kB to make sure there is enough room to store the encryption attributes. A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. see http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm I am sure DDN can help you with all of this. Hope this helps, Daniel Dr.Daniel Kidger No. 
1 The Square, Technical Specialist SDI (formerly Platform Computing) Temple Quay, Bristol BS1 6DG Mobile: +44-07818 522 266 United Kingdom Landline: +44-02392 564 121 (Internal ITN 3726 9250) e-mail: daniel.kidger at uk.ibm.com From: Frank Leers To: gpfsug main discussion list Date: 16/04/2015 23:21 Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? 
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL:
From secretary at gpfsug.org Fri Apr 24 15:05:25 2015 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Fri, 24 Apr 2015 15:05:25 +0100 Subject: [gpfsug-discuss] GPFS User Group Meeting Agenda Message-ID: Dear Members, The agenda for the next GPFS User Group Meeting is now available:

10:00 arrival for a 10:30 start
10:30 Introductions - Jez Tucker, Group Chair & Claire Robson, Group Secretary
10:35 Keynote - Doris Conti
10:50 4.1.1 Roadmap / High-level futures - Scott Fadden
11:40 Failure Events, Recovery & Problem determination - Scott Fadden
12:00 Monitoring IBM Spectrum Scale using IBM Spectrum Control (VSC/TPC) - Christian Bolik
12:30 Lunch
13:00 User Experience from University of Birmingham & CLIMB - Simon Thompson
13:20 User Experience from NERSC - Jason Hick
13:40 AFM & Async DR Use Cases - Shankar Balasubramanian
14:25 mmbackup + TSM Integration - Stefan Bender
15:10 Break
15:25 IBM Spectrum Scale (formerly GPFS) performance update, protocol performance and sizing - Sven Oehme
16:50 Closing summary - Jez Tucker, Group Chair & Claire Robson, Group Secretary
17:00 Ends

All attendees are invited to attend: 19:00 Buffet and drinks at York National Railway Museum

There are still places available for the event. If you have not registered and would like to attend, please email me, secretary at gpfsug.org with your name, job title, organisation, telephone and any dietary requirements. We hope to see you in May!
Kind regards, Claire -- Claire Robson GPFS User Group Secretary
From Dan.Foster at bristol.ac.uk Wed Apr 29 08:51:03 2015 From: Dan.Foster at bristol.ac.uk (Dan Foster) Date: Wed, 29 Apr 2015 08:51:03 +0100 Subject: [gpfsug-discuss] file system format version / software version compatibility matrix Message-ID: Hi All, I'm trying to determine which versions of the GPFS file system format are compatible with certain versions of the GPFS software. Specifically in this instance I'm interested in whether file system format v11.05 (from GPFS 3.3.0.2) is compatible with GPFS 3.5. The "File system format changes between versions of GPFS" [1] chapter mentions format levels as old as v6, which implies that they are compatible. But it would be useful to know for certain. Thanks, Dan. [1] http://www-01.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs100.doc/bl1adm_fsmigissues.htm?lang=en -- Dan Foster | Senior Storage Systems Administrator | IT Services e: dan.foster at bristol.ac.uk | t: 0117 3941170 [x41170] m: Advanced Computing Research Centre, University of Bristol, 8-10 Berkeley Square, Bristol BS8 1HH
From jamiedavis at us.ibm.com Wed Apr 29 13:43:08 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Wed, 29 Apr 2015 08:43:08 -0400 Subject: [gpfsug-discuss] file system format version / software version compatibility matrix In-Reply-To: References: Message-ID: Dan, A "new" GPFS should be able to mount and interact with file systems with "old" versions. Specifically I do not believe you will have trouble getting GPFS 3.5 to talk to an FS created on GPFS 3.3.0.2. The "Concepts, planning, and install guide" provides information about migrating to GPFS 4.1 from GPFS 3.2 or earlier; this implies that file systems created at GPFS 3.2 are supported on GPFS 4.1, which is even more compatibility than you are asking about. The section to which I refer is titled "Migrating to GPFS 4.1 from GPFS 3.2 or earlier releases of GPFS".
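As a quick cross-check, the on-disk format level of an existing file system can be read with mmlsfs. A rough sketch follows; the device name is illustrative and the output shape is approximate, not captured from a real cluster:

```
# Report the file system format version (device name 'gpfs0' is illustrative):
mmlsfs gpfs0 -V

# Approximate output for a file system created at GPFS 3.3.0.2:
# flag  value            description
# ----  ---------------  --------------------
#  -V   11.05 (3.3.0.2)  File system version
```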
Note that without running mmchfs -V {full|compat} you will not have access to some of the newer GPFS features. See the section titled "Completing the migration to a new level of GPFS", also in the concepts guide. Hope this helps, Jamie Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com From: Dan Foster To: gpfsug main discussion list Date: 29-04-15 03:51 AM Subject: [gpfsug-discuss] file system format version / software version compatibility matrix Sent by: gpfsug-discuss-bounces at gpfsug.org Hi All, I'm trying to determine which versions of the GPFS file system format are compatible with certain versions of the GPFS software. Specifically in this instance I'm interested if file system format v11.05 (from GPFS 3.3.0.2) is compatible with GPFS 3.5 . The "File system format changes between versions of GPFS" [1] chapter mentions format levels as old a v6, which infers that they are compatible. But it would be useful to know for certain. Thanks, Dan. [1] http://www-01.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs100.doc/bl1adm_fsmigissues.htm?lang=en -- Dan Foster | Senior Storage Systems Administrator | IT Services e: dan.foster at bristol.ac.uk | t: 0117 3941170 [x41170] m: Advanced Computing Research Centre, University of Bristol, 8-10 Berkeley Square, Bristol BS8 1HH _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:
From dhildeb at us.ibm.com Fri Apr 10 18:45:14 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 10 Apr 2015 10:45:14 -0700 Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Hi Zach, The summary is that GPFS is being integrated much more across the portfolio... With GPFS itself, there is a video below demonstrating the ESS/GSS GUI and monitoring feature that is in the product today. Moving forward, as you can probably see, there is a push in IBM to move GPFS to software-defined, which includes features such as the GUI as well...
https://www.youtube.com/watch?v=Mv9Sn-VYoGU Dean From: Zachary Giles To: gpfsug main discussion list Date: 04/10/2015 08:27 AM Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Sent by: gpfsug-discuss-bounces at gpfsug.org Christian: Interesting and thanks for the latest news. May I ask: Is there an intent moving forward that TPC and / or other Tivoli products will be a required part of GPFS? The concern I have is that GPFS is pretty straightforward at the moment and has very logical requirements to operate (min servers, quorum, etc), whereas there are many IBM products that require two or three more servers just to manage the servers managing the service.. too much. It would be nice to make sure, going forward, that the core of GPFS can still function without additional web servers, Java, a suite of middleware, and a handful of DB2 instance .. :) -Zach On Fri, Apr 10, 2015 at 7:24 AM, Christian Bolik wrote: Just wanted to let you know that recently GPFS support has been added to TPC, which is IBM's Tivoli Storage Productivity Center (soon to be renamed to IBM Spectrum Control). As of now, TPC allows GPFS administrators to get answers to the following questions, across any number of GPFS clusters which have been added to TPC: - Which of my clusters are running out of free space? - Which of my clusters or nodes have a health problem? - Which file systems and pools are running out of capacity? - Which file systems are mounted on which nodes? - How much space is occupied by snapshots? Are there any very old, potentially obsolete ones? - Which quotas are close to being exceeded or have already been exceeded? - Which filesets are close to running out of free inodes? - Which NSDs are at risk of becoming unavailable, or are unavailable? - Are the volumes backing my NSDs performing OK? - Are all nodes fulfilling critical roles in the cluster up and running? 
- How can I be notified when nodes go offline or file systems fill up beyond a threshold? There's a short 6-minute video available on YouTube which shows how TPC helps answering these questions: https://www.youtube.com/watch?v=8Esk5U_cYw8&feature=youtu.be For more information about TPC, please check out the product wiki on developerWorks: http://ibm.co/1adWNFK Thanks, Christian Bolik IBM Storage Software Development _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Zach Giles zgiles at gmail.com_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From BOLIK at de.ibm.com Mon Apr 13 14:11:01 2015 From: BOLIK at de.ibm.com (Christian Bolik) Date: Mon, 13 Apr 2015 15:11:01 +0200 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Message-ID: Hi Zach, I'm not aware of any intent to make TPC or any other Tivoli/IBM product a prereq for GPFS, and I don't think any such plans exist. Rather, as Dean also pointed out, we're investing work to improve integration of GPFS/Spectrum Scale into other products being members of the newly announced IBM Spectrum Storage family, with the goal of improving manageability of the individual components (rather than worsening it...). Cheers, Christian > Christian: > Interesting and thanks for the latest news. > > May I ask: Is there an intent moving forward that TPC and / or other Tivoli > products will be a required part of GPFS? 
> The concern I have is that GPFS is pretty straightforward at the moment and > has very logical requirements to operate (min servers, quorum, etc), > whereas there are many IBM products that require two or three more servers > just to manage the servers managing the service.. too much. It would be > nice to make sure, going forward, that the core of GPFS can still function > without additional web servers, Java, a suite of middleware, and a handful > of DB2 instance .. :) > > -Zach Christian Bolik Software Defined Storage Development IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 Mainz, Germany Vorsitzende des Aufsichtsrats: Martina Koederitz Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From zgiles at gmail.com Mon Apr 13 17:05:11 2015 From: zgiles at gmail.com (Zachary Giles) Date: Mon, 13 Apr 2015 12:05:11 -0400 Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface. My fear with the idea of the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and another fileset called "user files", but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So, I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable non-point-and-click way, if possible.
On Mon, Apr 13, 2015 at 9:11 AM, Christian Bolik wrote: > > Hi Zach, > > I'm not aware of any intent to make TPC or any other Tivoli/IBM product a > prereq for GPFS, and I don't think any such plans exist. Rather, as Dean > also pointed out, we're investing work to improve integration of > GPFS/Spectrum Scale into other products being members of the newly > announced IBM Spectrum Storage family, with the goal of improving > manageability of the individual components (rather than worsening it...). > > Cheers, > Christian > > > Christian: > > Interesting and thanks for the latest news. > > > > May I ask: Is there an intent moving forward that TPC and / or other > Tivoli > > products will be a required part of GPFS? > > The concern I have is that GPFS is pretty straightforward at the moment > and > > has very logical requirements to operate (min servers, quorum, etc), > > whereas there are many IBM products that require two or three more > servers > > just to manage the servers managing the service.. too much. It would be > > nice to make sure, going forward, that the core of GPFS can still > function > > without additional web servers, Java, a suite of middleware, and a > handful > > of DB2 instance .. :) > > > > -Zach > > Christian Bolik > > Software Defined Storage Development > > IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 > Mainz, Germany > Vorsitzende des Aufsichtsrats: Martina Koederitz > Gesch?ftsf?hrung: Dirk Wittkopp > Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, > HRB 243294 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jamiedavis at us.ibm.com Tue Apr 14 13:23:59 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Tue, 14 Apr 2015 08:23:59 -0400 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Zach, Not sure if I'm formatting this message properly (need to switch off of digest mode), but I know of no plans to replace the GPFS command line interface with a GUI. With the GPFS GUI, TPC monitoring, etc., we want to enable a wider variety of users to effectively use and manage GPFS, but the command line will of course remain available for power users, script-writers, etc. I certainly don't intend to throw away any of my nice mm-themed test programs :-) Jamie Davis GPFS Test Date: Mon, 13 Apr 2015 12:05:11 -0400 From: Zachary Giles To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters Message-ID: Content-Type: text/plain; charset="utf-8" Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface. My fear with the idea of the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and another fileset called "user files", but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So, I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable non-point-and-click way, if possible.
Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ecgarris at iu.edu Thu Apr 16 21:33:59 2015 From: ecgarris at iu.edu (Garrison, E Chris) Date: Thu, 16 Apr 2015 20:33:59 +0000 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Message-ID: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage -------------- next part -------------- An HTML attachment was scrubbed... URL: From zgiles at gmail.com Thu Apr 16 21:39:51 2015 From: zgiles at gmail.com (Zachary Giles) Date: Thu, 16 Apr 2015 16:39:51 -0400 Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters In-Reply-To: References: Message-ID: Thanks Jamie. I appreciate the input. Glad to hear it. On Tue, Apr 14, 2015 at 8:23 AM, James Davis wrote: > Zach, > > Not sure if I'm formatting this message properly (need to switch off of > digest mode), but I know of no plans to replace the GPFS command line > interface with a GUI. 
With the GPFS GUI, TPC monitoring, etc., we want to > enable a wider variety of users to effectively use and manage GPFS, but the > command line will of course remain available for power users, > script-writers, etc. I certainly don't intend to throw away any of my nice > mm-themed test programs :-) > > Jamie Davis > GPFS Test > > Date: Mon, 13 Apr 2015 12:05:11 -0400 > From: Zachary Giles > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Monitoring capacity and health status > for a multitude of GPFS clusters > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Thanks for your replies. I can definitely appreciate the goal of > improving management of components, and I agree that if GPFS will be used > within other products (which it is and will continue to be), then it would > be great for those products to be able to manage GPFS via an interface. > > My fear with the idea of the above mentioned as a prereq is that the > "improvement" of management might look like an improvement when you only > have 1 or 2 tiers of storage and 1 or 2 filesets called "database" and > another fileset called "user files", but if you have hundreds or thousands > of filesets and several tiers of storage, with both GSS and non-GSS systems > in the same cluster, then the GUI may actually be more cumbersome than the > original method. So, I just want to voice an opinion that we should > continue to be able to configure / maintain / monitor GPFS in a programmatic > / scriptable non-point-and-click way, if possible. > > > Jamie Davis > GPFS Functional Verification Test (FVT) > jamiedavis at us.ibm.com > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- Zach Giles zgiles at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fleers at gmail.com Thu Apr 16 23:21:34 2015 From: fleers at gmail.com (Frank Leers) Date: Thu, 16 Apr 2015 15:21:34 -0700 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: > Hello, > > My site is working up to upgrading our paired GridScaler system from > GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and > that's an advertised feature of 4.1. We already have over 100 TB of data, > synchronously replicated between two geographically separated sites, and we > have concerns about how the upgrade, as well as the application of > encryption to all that data, will go. > > I'd like to hear from admins who've been through this upgrade. What > gotchas should we look out for? Can it easily be done in place, or would we > need some extra equipment to "slosh" our data to and from so that it is > written to an encrypted GPFS? > > Thank you for your time, and for any sage advice on this process. 
> > Chris > -- > Chris Garrison > Indiana University > Research Systems Storage > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri Apr 17 13:15:40 2015 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 17 Apr 2015 13:15:40 +0100 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition. The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied on a per-file level if desired, hence there is a way to migrate from an existing non-encrypted setup. Note that inodes should be 4kB to make sure there is enough room to store the encryption attributes. A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. See http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm I am sure DDN can help you with all of this. Hope this helps, Daniel Dr. Daniel Kidger, Technical Specialist SDI (formerly Platform Computing) No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom Mobile: +44-07818 522 266 | Landline: +44-02392 564 121 (Internal ITN 3726 9250) | e-mail: daniel.kidger at uk.ibm.com From: Frank Leers To: gpfsug main discussion list Date: 16/04/2015 23:21 Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? 
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL: From jamiedavis at us.ibm.com Fri Apr 17 15:11:24 2015 From: jamiedavis at us.ibm.com (James Davis) Date: Fri, 17 Apr 2015 10:11:24 -0400 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: Hi Chris, Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create-time. In order to encrypt your existing 100TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like: cp file file.enc #now file.enc is encrypted mv file.enc file #replace the unencrypted file with the encrypted file This can be done in parallel using mmapplypolicy if you want. 
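A rough shell sketch of that copy-and-swap pass (the `reencrypt` helper, the `.enc.$$` suffix, and the driver loop are illustrative inventions, not GPFS tooling; it assumes a SET ENCRYPTION policy is already active, so any newly created file is written encrypted):

```shell
#!/bin/sh
# Re-create each regular file so it is written under the active
# SET ENCRYPTION policy: a file's encryption state is fixed at create
# time, so the fresh copy is encrypted while the original was not.
reencrypt() {
    f=$1
    tmp="$f.enc.$$"        # temporary name, illustrative only
    cp -p "$f" "$tmp" &&   # new file -> created encrypted by the policy
    mv "$tmp" "$f"         # swap it over the cleartext original
}

# Driver: snapshot the file list first, then rewrite each file.
# No locking is done, so files modified concurrently need a second pass.
if [ $# -gt 0 ]; then
    find "$1" -type f > "/tmp/filelist.$$"
    while IFS= read -r f; do
        reencrypt "$f"
    done < "/tmp/filelist.$$"
    rm -f "/tmp/filelist.$$"
fi
```

In practice one would drive the same copy-and-swap from an mmapplypolicy rule to get the parallel scan Jamie mentions, rather than a serial `find`.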
In 4.1.1 (forthcoming) I plan to provide an improved version of the /usr/lpp/mmfs/samples/ilm/mmfind (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job. Cheers, Jamie Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com From: Daniel Kidger To: gpfsug main discussion list Date: 17-04-15 08:16 AM Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition. The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied on a per-file level if desired, hence there is a way to migrate from an existing non-encrypted setup. Note that inodes should be 4kB to make sure there is enough room to store the encryption attributes. A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. See http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm I am sure DDN can help you with all of this. Hope this helps, Daniel Dr. Daniel Kidger, Technical Specialist SDI (formerly Platform Computing) No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom Mobile: +44-07818 522 266 | Landline: +44-02392 564 121 (Internal ITN 3726 9250) | e-mail: daniel.kidger at uk.ibm.com From: Frank Leers To: gpfsug main discussion list Date: 16/04/2015 23:21 Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? 
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 20386063.gif Type: image/gif Size: 360 bytes Desc: not available URL: From makaplan at us.ibm.com Fri Apr 17 15:37:55 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 17 Apr 2015 10:37:55 -0400 Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? In-Reply-To: References: Message-ID: An "Encrypt-in-place" feature would not survive a serious, paranoid security audit. If sensitive data ever was written to a disk, then the paranoid would say you can never be 100% sure that you have erased it. 
That said, you decide how worried or paranoid you'd like to be, and you can do an almost-in-place encryption as James Davis suggests by simply copying the unencrypted file to a new file that will be subject to a GPFS encryption policy (SET ENCRYPTION) rule. The safest way would be to copy-encrypt all the data to a new file system and then crush all the equipment that was used to store the clear-text files. If crushing is too extreme, you might settle for a multipass sector-by-sector soft scrubbing by writing both carefully chosen and random data patterns. Even then, unless you trust the manufacturer AND the manufacturer has provided you the means to "scrub" all the tracks/sectors of the disk, you won't be sure... Suppose that some sensitive data was written to a sector number that was later declared (partially) defective and remapped by the disk micro-code, and the original sector is put in a bad sector list that you can no longer address with standard disk driver software... Or it could be on some NVRAM or a disk that a service technician swapped out... Ooops... it went out the door... and is now in the hands of the bad guys... From: James Davis/Poughkeepsie/IBM at IBMUS To: gpfsug main discussion list Date: 04/17/2015 10:12 AM Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create-time. In order to encrypt your existing 100TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like: cp file file.enc #now file.enc is encrypted mv file.enc file #replace the unencrypted file with the encrypted file This can be done in parallel using mmapplypolicy if you want. 
In 4.1.1 (forthcoming) I plan to provide an improved version of the /usr/lpp/mmfs/samples/ilm/mmfind (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job. Cheers, Jamie Jamie Davis GPFS Functional Verification Test (FVT) jamiedavis at us.ibm.com Daniel Kidger ---17-04-2015 08:16:54 AM---Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra From: Daniel Kidger To: gpfsug main discussion list Date: 17-04-15 08:16 AM Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the standard edition. Other RPMs remain the same. It also requires licenses for "Advanced Edition". These are of the order of 30% more expensive than the standard edition. The model for GPFS encryption at rest is that the client node fetches the encrypted file from the fileserver. The file remains encrypted in transfer and is only decrypted by the client node using a key it holds. A result is end-to-end encryption as well as encryption of data at rest, Encryption can be on a per file level if desired. Hence a way to migrate from an existing non-encrypted. setup. note inodes should be 4kB to make sure there is enough room to store the encryption attributes. A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the key management. These need licensed software. see http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm I am sure DDN can help you with all of this. Hope this helps, Daniel Dr.Daniel Kidger No. 
1 The Square, Technical Specialist SDI (formerly Platform Computing) Temple Quay, Bristol BS1 6DG Mobile: +44-07818 522 266 United Kingdom Landline: +44-02392 564 121 (Internal ITN 3726 9250) e-mail: daniel.kidger at uk.ibm.com From: Frank Leers To: gpfsug main discussion list Date: 16/04/2015 23:21 Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1? Sent by: gpfsug-discuss-bounces at gpfsug.org Hi Chris, Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, then feel free to take the advice that follows with a measure of salt although it mostly still applies ;-) With 4.1, there are now 'Editions' of GPFS, which break out various feature sets into a tiered arrangement, with each tier being licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes - http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Admin Guide for 4.1 as a primer ... http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html -frank On Thu, Apr 16, 2015 at 1:33 PM, Garrison, E Chris wrote: Hello, My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go. I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? 
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS? Thank you for your time, and for any sage advice on this process. Chris -- Chris Garrison Indiana University Research Systems Storage _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 360 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From secretary at gpfsug.org Fri Apr 24 15:05:25 2015 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Fri, 24 Apr 2015 15:05:25 +0100 Subject: [gpfsug-discuss] GPFS User Group Meeting Agenda Message-ID: Dear Members, The agenda for the next GPFS User Group Meeting is now available:
10:00 arrival for a 10:30 start
10:30 Introductions - Jez Tucker, Group Chair & Claire Robson, Group Secretary
10:35 Keynote - Doris Conti
10:50 4.1.1 Roadmap / High-level futures - Scott Fadden
11:40 Failure Events, Recovery & Problem determination - Scott Fadden
12:00 Monitoring IBM Spectrum Scale using IBM Spectrum Control (VSC/TPC) - Christian Bolik
12:30 Lunch
13:00 User Experience from University of Birmingham & CLIMB - Simon Thompson
13:20 User Experience from NERSC - Jason Hick
13:40 AFM & Async DR Use Cases - Shankar Balasubramanian
14:25 mmbackup + TSM Integration - Stefan Bender
15:10 Break
15:25 IBM Spectrum Scale (formerly GPFS) performance update, protocol performance and sizing - Sven Oehme
16:50 Closing summary - Jez Tucker, Group Chair & Claire Robson, Group Secretary
17:00 Ends
All attendees are invited to attend: 19:00 Buffet and drinks at York National Railway Museum
There are still places available for the event. If you have not registered and would like to attend, please email me, secretary at gpfsug.org with your name, job title, organisation, telephone and any dietary requirements. We hope to see you in May! 
Kind regards,
Claire
--
Claire Robson
GPFS User Group Secretary

From Dan.Foster at bristol.ac.uk Wed Apr 29 08:51:03 2015
From: Dan.Foster at bristol.ac.uk (Dan Foster)
Date: Wed, 29 Apr 2015 08:51:03 +0100
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
Message-ID:

Hi All,

I'm trying to determine which versions of the GPFS file system format are compatible with which versions of the GPFS software. Specifically, in this instance I'm interested in whether file system format v11.05 (from GPFS 3.3.0.2) is compatible with GPFS 3.5. The "File system format changes between versions of GPFS" chapter [1] mentions format levels as old as v6, which implies that they are compatible, but it would be useful to know for certain.

Thanks, Dan.

[1] http://www-01.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs100.doc/bl1adm_fsmigissues.htm?lang=en

--
Dan Foster | Senior Storage Systems Administrator | IT Services
e: dan.foster at bristol.ac.uk | t: 0117 3941170 [x41170]
m: Advanced Computing Research Centre, University of Bristol, 8-10 Berkeley Square, Bristol BS8 1HH

From jamiedavis at us.ibm.com Wed Apr 29 13:43:08 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Wed, 29 Apr 2015 08:43:08 -0400
Subject: Re: [gpfsug-discuss] file system format version / software version compatibility matrix
In-Reply-To:
References:
Message-ID:

Dan,

A "new" GPFS should be able to mount and interact with file systems with "old" versions. Specifically, I do not believe you will have trouble getting GPFS 3.5 to talk to an FS created on GPFS 3.3.0.2. The "Concepts, planning, and install guide" provides information about migrating to GPFS 4.1 from GPFS 3.2 or earlier; this implies that file systems created at GPFS 3.2 are supported on GPFS 4.1, which is even more compatibility than you are asking about. The section to which I refer is titled "Migrating to GPFS 4.1 from GPFS 3.2 or earlier releases of GPFS".
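[Editor's note: as a quick sanity check, the current format level of a file system can be queried, and later raised, from the command line. A sketch only — the device name gpfs0 is a placeholder, and these cluster commands should be read against the mmlsfs/mmchfs documentation for your release:]

```
# Show file system attributes, including the current format version
mmlsfs gpfs0 -V

# After upgrading the GPFS software, raise the on-disk format to enable
# newer features. 'compat' enables only changes that back-level nodes
# can still mount; 'full' enables everything and is not reversible.
mmchfs gpfs0 -V compat
```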
Note that without running mmchfs -V {full|compat} you will not have access to some of the newer GPFS features. See the section titled "Completing the migration to a new level of GPFS", also in the concepts guide.

Hope this helps,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com
From dhildeb at us.ibm.com Fri Apr 10 18:45:14 2015
From: dhildeb at us.ibm.com (Dean Hildebrand)
Date: Fri, 10 Apr 2015 10:45:14 -0700
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Hi Zach,

The summary is that GPFS is being integrated much more across the portfolio... With GPFS itself, there is a video below demonstrating the ESS/GSS GUI and monitoring feature that is in the product today. Moving forward, as you can probably see, there is a push in IBM to move GPFS to software-defined, which includes features such as the GUI as well...
https://www.youtube.com/watch?v=Mv9Sn-VYoGU

Dean
From BOLIK at de.ibm.com Mon Apr 13 14:11:01 2015
From: BOLIK at de.ibm.com (Christian Bolik)
Date: Mon, 13 Apr 2015 15:11:01 +0200
Subject: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
Message-ID:

Hi Zach,

I'm not aware of any intent to make TPC or any other Tivoli/IBM product a prereq for GPFS, and I don't think any such plans exist. Rather, as Dean also pointed out, we're investing work to improve the integration of GPFS/Spectrum Scale into other products that are members of the newly announced IBM Spectrum Storage family, with the goal of improving manageability of the individual components (rather than worsening it...).

Cheers,
Christian
Christian Bolik
Software Defined Storage Development
IBM Deutschland Research & Development GmbH, Hechtsheimer Str. 2, 55131 Mainz, Germany
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From zgiles at gmail.com Mon Apr 13 17:05:11 2015
From: zgiles at gmail.com (Zachary Giles)
Date: Mon, 13 Apr 2015 12:05:11 -0400
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Thanks for your replies. I can definitely appreciate the goal of improving management of components, and I agree that if GPFS will be used within other products (which it is and will continue to be), then it would be great for those products to be able to manage GPFS via an interface.

My fear with the idea of the above mentioned as a prereq is that the "improvement" of management might look like an improvement when you only have 1 or 2 tiers of storage, 1 or 2 filesets called "database" and another fileset called "user files"; but if you have hundreds or thousands of filesets and several tiers of storage, with both GSS and non-GSS systems in the same cluster, then the GUI may actually be more cumbersome than the original method. So I just want to voice an opinion that we should continue to be able to configure / maintain / monitor GPFS in a programmatic / scriptable, non-point-and-click way, if possible.
--
Zach Giles
zgiles at gmail.com
From jamiedavis at us.ibm.com Tue Apr 14 13:23:59 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Tue, 14 Apr 2015 08:23:59 -0400
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Zach,

Not sure if I'm formatting this message properly (I need to switch off digest mode), but I know of no plans to replace the GPFS command line interface with a GUI. With the GPFS GUI, TPC monitoring, etc., we want to enable a wider variety of users to effectively use and manage GPFS, but the command line will of course remain available for power users, script-writers, etc. I certainly don't intend to throw away any of my nice mm-themed test programs :-)

Jamie Davis
GPFS Test
Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com

From ecgarris at iu.edu Thu Apr 16 21:33:59 2015
From: ecgarris at iu.edu (Garrison, E Chris)
Date: Thu, 16 Apr 2015 20:33:59 +0000
Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Message-ID:

Hello,

My site is working up to upgrading our paired GridScaler system from GPFS 3.5 to 4.1. There is a mandate to provide encryption at rest, and that's an advertised feature of 4.1. We already have over 100 TB of data, synchronously replicated between two geographically separated sites, and we have concerns about how the upgrade, as well as the application of encryption to all that data, will go.

I'd like to hear from admins who've been through this upgrade. What gotchas should we look out for? Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS?

Thank you for your time, and for any sage advice on this process.

Chris
--
Chris Garrison
Indiana University
Research Systems Storage

From zgiles at gmail.com Thu Apr 16 21:39:51 2015
From: zgiles at gmail.com (Zachary Giles)
Date: Thu, 16 Apr 2015 16:39:51 -0400
Subject: Re: [gpfsug-discuss] Monitoring capacity and health status for a multitude of GPFS clusters
In-Reply-To:
References:
Message-ID:

Thanks Jamie. I appreciate the input. Glad to hear it.
--
Zach Giles
zgiles at gmail.com
From fleers at gmail.com Thu Apr 16 23:21:34 2015
From: fleers at gmail.com (Frank Leers)
Date: Thu, 16 Apr 2015 15:21:34 -0700
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To:
References:
Message-ID:

Hi Chris,

Since you mention GRIDScaler, I assume that you are a DDN customer and that you have a support contract with them. If not, feel free to take the advice that follows with a grain of salt, although it mostly still applies ;-)

With 4.1 there are now 'Editions' of GPFS, which break the various feature sets into a tiered arrangement, with each tier licensed (and possibly priced) differently. Have a look here (Q 1.3) for the 4.1 licensing notes:
http://www-01.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html

With GPFS 3.5, you are most likely running the equivalent of the 'Standard Edition' today. The crypto feature set comes with the 'Advanced Edition', which is licensed differently. Have a look at Chapter 15 of the Advanced Administration Guide for 4.1 as a primer:
http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/gpfs4104_content.html

-frank
From daniel.kidger at uk.ibm.com Fri Apr 17 13:15:40 2015
From: daniel.kidger at uk.ibm.com (Daniel Kidger)
Date: Fri, 17 Apr 2015 13:15:40 +0100
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To:
References:
Message-ID:

Hi Chris,

The crypto feature of GPFS 4.1 requires the addition of just one extra RPM over the Standard Edition; the other RPMs remain the same. It also requires licenses for the "Advanced Edition", which are of the order of 30% more expensive than the Standard Edition.

The model for GPFS encryption at rest is that the client node fetches the encrypted file from the file server. The file remains encrypted in transit and is only decrypted by the client node, using a key it holds. The result is end-to-end encryption as well as encryption of data at rest. Encryption can be applied at a per-file level if desired, which provides a way to migrate from an existing non-encrypted setup. Note that inodes should be 4 KB to make sure there is enough room to store the encryption attributes.

A side effect of adding encryption is that you also need additional remote key management (RKM) servers to handle the keys. These need licensed software; see
http://www-01.ibm.com/support/knowledgecenter/SSFKCN_4.1.0.4/com.ibm.cluster.gpfs.v4r104.gpfs200.doc/bl1adv_encryptionsetupreqs.htm

I am sure DDN can help you with all of this.

Hope this helps,
Daniel

Dr. Daniel Kidger
Technical Specialist SDI (formerly Platform Computing)
No. 1 The Square, Temple Quay, Bristol BS1 6DG, United Kingdom
Mobile: +44-07818 522 266
Landline: +44-02392 564 121 (Internal ITN 3726 9250)
e-mail: daniel.kidger at uk.ibm.com
From jamiedavis at us.ibm.com Fri Apr 17 15:11:24 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Fri, 17 Apr 2015 10:11:24 -0400
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To:
References:
Message-ID:

Hi Chris,

Based on your question I think you are aware of this already, but just in case: there is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create time. In order to encrypt your existing 100 TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system and then do something like:

cp file file.enc  # now file.enc is encrypted
mv file.enc file  # replace the unencrypted file with the encrypted one

This can be done in parallel using mmapplypolicy if you want.
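[Editor's note: a minimal sketch of that copy-and-replace pass over a whole directory tree. The encryption itself is done by the file system when the new copy is created under a SET ENCRYPTION policy; this script only performs the rewrite, and the directory name is a placeholder:]

```shell
#!/bin/sh
# Re-create every regular file under a directory so each one is written
# anew (and hence encrypted, if an encryption placement rule now applies
# to newly created files).
# Illustrative only: no handling of snapshots or hard links, no
# protection against files changing mid-copy, and filenames containing
# newlines will break the read loop.
reencrypt_tree() {
    root="$1"
    find "$root" -type f ! -name '*.enc' | while IFS= read -r f; do
        cp -p "$f" "$f.enc"   # the new file picks up the encryption policy
        mv "$f.enc" "$f"      # replace the cleartext copy in place
    done
}
```

For anything at the 100 TB scale, driving the same pass through mmapplypolicy, as James suggests, is the more practical route.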
In 4.1.1 (forthcoming) I plan to provide an improved version of /usr/lpp/mmfs/samples/ilm/mmfind (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job.

Cheers,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com
Can it easily be done in place, or would we need some extra equipment to "slosh" our data to and from so that it is written to an encrypted GPFS?

Thank you for your time, and for any sage advice on this process.

Chris
--
Chris Garrison
Indiana University Research Systems Storage

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From makaplan at us.ibm.com Fri Apr 17 15:37:55 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Fri, 17 Apr 2015 10:37:55 -0400
Subject: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
In-Reply-To: References: Message-ID:

An "Encrypt-in-place" feature would not survive a serious, paranoid security audit. If sensitive data ever was written to a disk, then the paranoid would say you can never be 100% sure that you have erased it.
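For those settling for best effort rather than certainty, the usual tool for a multipass overwrite is coreutils shred. A minimal sketch follows; it is demonstrated on a scratch file, since pointing it at a real device is destructive, and it carries shred's own caveat that it cannot touch sectors the drive firmware has silently remapped:

```shell
# Best-effort multipass overwrite; NOT a guarantee: shred cannot reach
# sectors that the drive firmware has remapped out of the address space.
scrub() {
    shred -n 3 -z "$1"   # 3 random passes, then a final pass of zeros
}

# demo on a scratch file -- point this at real media only after careful thought
demo=$(mktemp)
printf 'sensitive data' > "$demo"
scrub "$demo"
grep -q 'sensitive' "$demo" && echo "still readable" || echo "overwritten"
# -> overwritten
rm -f "$demo"
```

The same shred binary can be aimed at a whole block device (e.g. shred -n 3 -z /dev/sdX, where sdX is a placeholder), but as the discussion here makes clear, overwriting is weaker than physical destruction.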
That said, you decide how worried or paranoid you'd like to be, and you can do an almost-in-place encryption as James Davis suggests, by simply copying the unencrypted file to a new file that will be subject to a GPFS encryption policy (SET ENCRYPTION) rule.

The safest way would be to copy-encrypt all the data to a new file system and then crush all the equipment that was used to store the clear-text files. If crushing is too extreme, you might settle for a multipass, sector-by-sector soft scrubbing that writes both carefully chosen and random data patterns. Even then, unless you trust the manufacturer AND the manufacturer has provided you the means to "scrub" all the tracks/sectors of the disk, you won't be sure... Suppose that some sensitive data was written to a sector number that was later declared (partially) defective and remapped by the disk micro-code, and the original sector is put on a bad-sector list that you can no longer address with standard disk driver software... Or it could be on some NVRAM or a disk that a service technician swapped out... Ooops... it went out the door... and is now in the hands of the bad guys...

From: James Davis/Poughkeepsie/IBM at IBMUS
To: gpfsug main discussion list
Date: 04/17/2015 10:12 AM
Subject: Re: [gpfsug-discuss] Experiences upgrading in place to GPFS 4.1?
Sent by: gpfsug-discuss-bounces at gpfsug.org

Hi Chris,

Based on your question I think you are aware of this already, but just in case... There is not currently an encrypt-in-place solution for GPFS encryption. A file's encryption state is determined at create time. In order to encrypt your existing 100 TB of data, you will need to apply an encryption policy to the current (or a new) GPFS file system(s) and then do something like:

cp file file.enc # now file.enc is encrypted
mv file.enc file # replace the unencrypted file with the encrypted file

This can be done in parallel using mmapplypolicy if you want.
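Looped over a directory tree, the copy-then-rename recipe above might look like the sketch below. It assumes a SET ENCRYPTION policy rule is already installed on the file system (so newly created files come out encrypted), and the GPFS path shown is hypothetical:

```shell
# Hedged sketch: re-create every file so it picks up the file system's
# SET ENCRYPTION policy (encryption state is fixed at create time).
# Assumes the policy is already in effect; the path below is hypothetical.
reencrypt_tree() {
    # skip *.enc so the temporary copies are never themselves re-processed
    find "$1" -type f ! -name '*.enc' -print0 |
    while IFS= read -r -d '' f; do
        cp -p "$f" "$f.enc" &&   # the copy is created encrypted by policy
        mv "$f.enc" "$f"         # rename within the same FS; no second rewrite
    done
}

# reencrypt_tree /gpfs/fs0/project   # hypothetical GPFS mount point
```

mmapplypolicy could drive the same cp/mv pair in parallel, and per the caveat above, the clear-text blocks of the original files are not erased by this.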
In 4.1.1 (forthcoming) I plan to provide an improved version of the /usr/lpp/mmfs/samples/ilm/mmfind tool (a find-esque interface to mmapplypolicy) that shipped in the last release; this should be an effective tool for the job.

Cheers,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com

Daniel Kidger ---17-04-2015 08:16:54 AM---Hi Chris, The Crypto feature of GPFS 4.1 requires the addition of just one extra

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From secretary at gpfsug.org Fri Apr 24 15:05:25 2015
From: secretary at gpfsug.org (Secretary GPFS UG)
Date: Fri, 24 Apr 2015 15:05:25 +0100
Subject: [gpfsug-discuss] GPFS User Group Meeting Agenda
Message-ID:

Dear Members,

The agenda for the next GPFS User Group Meeting is now available:

10:00 Arrival, for a 10:30 start
10:30 Introductions - Jez Tucker, Group Chair & Claire Robson, Group Secretary
10:35 Keynote - Doris Conti
10:50 4.1.1 Roadmap / High-level futures - Scott Fadden
11:40 Failure Events, Recovery & Problem determination - Scott Fadden
12:00 Monitoring IBM Spectrum Scale using IBM Spectrum Control (VSC/TPC) - Christian Bolik
12:30 Lunch
13:00 User Experience from University of Birmingham & CLIMB - Simon Thompson
13:20 User Experience from NERSC - Jason Hick
13:40 AFM & Async DR Use Cases - Shankar Balasubramanian
14:25 mmbackup + TSM Integration - Stefan Bender
15:10 Break
15:25 IBM Spectrum Scale (formerly GPFS) performance update, protocol performance and sizing - Sven Oehme
16:50 Closing summary - Jez Tucker, Group Chair & Claire Robson, Group Secretary
17:00 Ends

All attendees are invited to attend:
19:00 Buffet and drinks at the York National Railway Museum

There are still places available for the event. If you have not registered and would like to attend, please email me, secretary at gpfsug.org, with your name, job title, organisation, telephone number and any dietary requirements.

We hope to see you in May!
Kind regards,
Claire
--
Claire Robson
GPFS User Group Secretary

From Dan.Foster at bristol.ac.uk Wed Apr 29 08:51:03 2015
From: Dan.Foster at bristol.ac.uk (Dan Foster)
Date: Wed, 29 Apr 2015 08:51:03 +0100
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
Message-ID:

Hi All,

I'm trying to determine which versions of the GPFS file system format are compatible with which versions of the GPFS software. Specifically, in this instance I'm interested in whether file system format v11.05 (from GPFS 3.3.0.2) is compatible with GPFS 3.5. The "File system format changes between versions of GPFS" [1] chapter mentions format levels as old as v6, which implies that they are compatible, but it would be useful to know for certain.

Thanks, Dan.

[1] http://www-01.ibm.com/support/knowledgecenter/SSFKCN_3.5.0/com.ibm.cluster.gpfs.v3r5.gpfs100.doc/bl1adm_fsmigissues.htm?lang=en

--
Dan Foster | Senior Storage Systems Administrator | IT Services
e: dan.foster at bristol.ac.uk | t: 0117 3941170 [x41170]
m: Advanced Computing Research Centre, University of Bristol, 8-10 Berkeley Square, Bristol BS8 1HH

From jamiedavis at us.ibm.com Wed Apr 29 13:43:08 2015
From: jamiedavis at us.ibm.com (James Davis)
Date: Wed, 29 Apr 2015 08:43:08 -0400
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
In-Reply-To: References: Message-ID:

Dan,

A "new" GPFS should be able to mount and interact with file systems created by an "old" GPFS. Specifically, I do not believe you will have trouble getting GPFS 3.5 to talk to a file system created on GPFS 3.3.0.2. The "Concepts, planning, and install guide" provides information about migrating to GPFS 4.1 from GPFS 3.2 or earlier; this implies that file systems created at GPFS 3.2 are supported on GPFS 4.1, which is even more compatibility than you are asking about. The section to which I refer is titled "Migrating to GPFS 4.1 from GPFS 3.2 or earlier releases of GPFS".
Note that without running mmchfs -V {full|compat} you will not have access to some of the newer GPFS features. See the section titled "Completing the migration to a new level of GPFS", also in the concepts guide.

Hope this helps,
Jamie

Jamie Davis
GPFS Functional Verification Test (FVT)
jamiedavis at us.ibm.com

From: Dan Foster
To: gpfsug main discussion list
Date: 29-04-15 03:51 AM
Subject: [gpfsug-discuss] file system format version / software version compatibility matrix
Sent by: gpfsug-discuss-bounces at gpfsug.org

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
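A hedged sketch of the sequence Jamie describes: mmlsfs and mmchfs are the real GPFS administration commands, but "gpfs0" is a placeholder device name and the wrapper only echoes each command, so the steps can be reviewed before being run for real:

```shell
# Sketch: inspect the on-disk format version, then raise it once all nodes
# are upgraded. "gpfs0" is a placeholder; run() echoes instead of executing
# so the sequence can be eyeballed first.
FS=gpfs0
run() { echo "+ $*"; }     # change the body to "$@" to actually execute

run mmlsfs "$FS" -V        # show the current file system format version
run mmchfs "$FS" -V compat # enable only backward-compatible new features
# run mmchfs "$FS" -V full # or: enable all features (older nodes lose access)
```

The compat/full distinction matters in mixed clusters: -V full commits the file system to the new format, after which nodes running the older release can no longer mount it.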