From alvise.dorigo at psi.ch Tue Dec 3 14:35:22 2019
From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI))
Date: Tue, 3 Dec 2019 14:35:22 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
Message-ID: <5f54e13651cc45ef999ebf2417792b38@psi.ch>

Hello everyone,
I have:
- A NetApp system with hardware RAID
- SpectrumScale 4.2.3-13 running on top of the NetApp
- A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

What I need to do is to merge the GL2 into the other GPFS cluster (running on the NetApp) without losing, of course, the RecoveryGroup configuration, etc.

I'd like to ask the experts
1. whether it is feasible, considering the differences in GPFS versions and architectures (x86_64 vs. POWER)
2. if yes, whether anyone already did something like this and what is the best strategy suggested
3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure?

Thank you very much,

Alvise Dorigo

From anobre at br.ibm.com Tue Dec 3 14:44:21 2019
From: anobre at br.ibm.com (Anderson Ferreira Nobre)
Date: Tue, 3 Dec 2019 14:44:21 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: <5f54e13651cc45ef999ebf2417792b38@psi.ch>
References: <5f54e13651cc45ef999ebf2417792b38@psi.ch>
Message-ID: An HTML attachment was scrubbed...

From olaf.weiser at de.ibm.com Tue Dec 3 14:54:31 2019
From: olaf.weiser at de.ibm.com (Olaf Weiser)
Date: Tue, 3 Dec 2019 09:54:31 -0500
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: <5f54e13651cc45ef999ebf2417792b38@psi.ch>
References: <5f54e13651cc45ef999ebf2417792b38@psi.ch>
Message-ID: An HTML attachment was scrubbed...

From TOMP at il.ibm.com Tue Dec 3 15:02:36 2019
From: TOMP at il.ibm.com (Tomer Perry)
Date: Tue, 3 Dec 2019 17:02:36 +0200
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: References: <5f54e13651cc45ef999ebf2417792b38@psi.ch>
Message-ID:

Hi,

Actually, I believe that GNR is not a limiting factor here.
mmexportfs and mmimportfs ( man mm??portfs) will export/import the GNR configuration as well:
"If the specified file system device is an IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration."

OTOH, I suspect that due to the version mismatch it wouldn't work - since I would assume that the cluster config version is too high for the NetApp-based cluster. I would also suspect that the filesystem version on the ESS will be different.

Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625

From: "Olaf Weiser"
To: gpfsug main discussion list
Date: 03/12/2019 16:54
Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hallo
"merging" 2 different GPFS clusters into one .. is not possible ..
for sure you can do "nested" mounts .. but that's most likely not what you want to do ..

if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions...
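A minimal sketch of the version checks and of the export/import path Tomer describes above, assuming a filesystem device name of "essfs" (all names and paths here are placeholders, and whether a 4.2.3-level cluster will accept an import coming from a 5.0.2-level ESS is exactly the open question, so treat this as illustrative only):

  # compare the committed configuration level and the filesystem format on both clusters
  mmlsconfig minReleaseLevel
  mmlsfs essfs -V

  # on the GL2 cluster: export the filesystem together with its GNR objects
  # (recovery groups, declustered arrays and vdisks)
  mmexportfs essfs -o /tmp/essfs.export

  # on the destination cluster, once the GL2 I/O nodes have been added to it:
  mmimportfs essfs -i /tmp/essfs.export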
you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=5Ji4Rrk0dQhYpwfSkj-6RPXwgYhhiqqImlaHmuHrOsk&s=Z0aCyK22UfYZ2VIREnwtIirpmS2fM6a7IrkEUnuWyB8&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue Dec 3 15:03:41 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 3 Dec 2019 15:03:41 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> On 03/12/2019 14:54, Olaf Weiser wrote: > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - ?you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster > .. ?.. add the nodes to your existing cluster.. and then start > configuring the RGs > I was under the impression (from post by IBM employees on this list) that you are not allowed to mix GNR, ESS, DSS, classical GPFS, DDN GPFS etc. in the same cluster. Not a technical limitation but a licensing one. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From makaplan at us.ibm.com Tue Dec 3 19:14:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 3 Dec 2019 14:14:52 -0500 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> Message-ID: IF you have everything properly licensed and then you reconfigure... 
It may work okay... But then you may come up short if you ask for IBM support or service... So depending how much support you need or desire... Or take the easier and supported path... And probably accomplish most of what you need -- let each cluster be and remote mount onto clients which could be on any connected cluster. From: Jonathan Buzzard To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 10:04 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org On 03/12/2019 14:54, Olaf Weiser wrote: > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - ?you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster > .. ?.. add the nodes to your existing cluster.. and then start > configuring the RGs > I was under the impression (from post by IBM employees on this list) that you are not allowed to mix GNR, ESS, DSS, classical GPFS, DDN GPFS etc. in the same cluster. Not a technical limitation but a licensing one. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIF-g&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=lEWw7H2AdQxSCu_vbgGHhztL0y7voTATCG_KfbRgHJw&s=wg5NvwO5OAw-jLCsL-BtSRGisghnRu5F39r_G_gKNKk&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From lgayne at us.ibm.com Tue Dec 3 19:20:55 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Tue, 3 Dec 2019 19:20:55 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch><02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0E56DFFAD6E28f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0E56DFFAD6E28f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15754003609670.gif Type: image/gif Size: 105 bytes Desc: not available URL: From lgayne at us.ibm.com Tue Dec 3 19:30:31 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Tue, 3 Dec 2019 19:30:31 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: An HTML attachment was scrubbed... 
From alvise.dorigo at psi.ch Wed Dec 4 09:29:32 2019
From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI))
Date: Wed, 4 Dec 2019 09:29:32 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: References: <5f54e13651cc45ef999ebf2417792b38@psi.ch>
Message-ID: <62721c5c4c3640848e1513d03965fefe@psi.ch>

Thank you all for the answers. I try to recap my answers to your questions:

1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSDs would go to a special pool; but in the end I need a unique namespace for files.
2. I do not want to do the opposite (merging GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes
3. All systems have proper licensing of course; what does it mean that I could lose IBM support? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand
4. NSDs (in the NetApp system) are in their roles: what do you mean exactly? there are RAIDsets attached to servers that are actually the NSD servers, together with their attached LUNs

Alvise

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne
Sent: Tuesday, December 3, 2019 8:30:31 PM
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

For:

- A NetApp system with hardware RAID
- SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top")?
- A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

What I need to do is to merge the GL2 into the other GPFS cluster (running on the NetApp) without losing, of course, the RecoveryGroup configuration, etc.

I'd like to ask the experts
1. whether it is feasible, considering the differences in GPFS versions and architectures (x86_64 vs. POWER)
2. if yes, whether anyone already did something like this and what is the best strategy suggested
3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure?

......
Some observations:

1) Why do you want to MERGE the GL2 into a single cluster with the rest of the cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes?

2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules.

3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing them as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details.

Lyle

----- Original message -----
From: "Tomer Perry"
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug main discussion list
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
Date: Tue, Dec 3, 2019 10:03 AM

Hi,

Actually, I believe that GNR is not a limiting factor here.
mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janfrode at tanso.net Wed Dec 4 11:21:54 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 4 Dec 2019 12:21:54 +0100 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <62721c5c4c3640848e1513d03965fefe@psi.ch> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> <62721c5c4c3640848e1513d03965fefe@psi.ch> Message-ID: Adding the GL2 into your existing cluster shouldn?t be any problem. You would just delete the existing cluster on the GL2, then on the EMS run something like: gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license and then afterwards create the RGs: gssgenclusterrgs -G gss_ppc64 --suffix=-hs Then create the vdisks/nsds and add to your existing filesystem. Beware that last time I did this, gssgenclusterrgs triggered a "mmshutdown -a" on the whole cluster, because it wanted to change some config settings... Caught me a bit by surprise.. -jf ons. 4. des. 2019 kl. 10:44 skrev Dorigo Alvise (PSI) : > Thank you all for the answer. I try to recap my answers to your questions: > > > > 1. the purpose is not to merge clusters "per se"; it is adding the > GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp > (which is running out of free space); of course I know well the > heterogeneity of this hypothetical system, so the GL2's NSD would go to a > special pool; but in the end I need a unique namespace for files. > 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 > cluster) because the former is in production and I cannot schedule long > downtimes > 3. All system have proper licensing of course; what does it means that > I could loose IBM support ? if the support is for a failing disk drive I do > not think so; if the support is for a "strange" behaviour of GPFS I can > probably understand > 4. NSD (in the NetApp system) are in their roles: what do you mean > exactly ? there's RAIDset attached to servers that are actually NSD > together with their attached LUN > > > Alvise > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne < > lgayne at us.ibm.com> > *Sent:* Tuesday, December 3, 2019 8:30:31 PM > *To:* gpfsug-discuss at spectrumscale.org > *Cc:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp *< --- Are these > NSD servers in their GPFS roles (where Scale "runs on top"*? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest > cluster, rather than simply allowing remote mount of the ESS servers by the > other GPFS (NSD client) nodes? 
> > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our > coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no > issues. Having them as separate file systems (NetApp vs. ESS) means no > concerns regarding varying architectures within the same fs serving or > failover scheme. Mixing such as compute nodes would mean some performance > differences across the different clients, but you haven't described your > compute (NSD client) details. > > Lyle > > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR > configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based > file system, then all affected IBM Spectrum Scale RAID objects will be > exported as well. This includes recovery groups, declustered arrays, > vdisks, and any other file systems that are based on these objects. For > more information about IBM Spectrum Scale RAID, see *IBM Spectrum > Scale RAID: Administration*. " > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since > I would assume that the cluster config version is to high for the NetApp > based cluster. > I would also suspect that the filesystem version on the ESS will be > different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to > a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. > .. add the nodes to your existing cluster.. and then start configuring the > RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" < > gpfsug-discuss at spectrumscale.org> > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. 
if yes, whether anyone already did something like this and what is the best strategy suggested
> 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure?
>
> Thank you very much,
>
> Alvise Dorigo
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From anobre at br.ibm.com Wed Dec 4 14:07:18 2019
From: anobre at br.ibm.com (Anderson Ferreira Nobre)
Date: Wed, 4 Dec 2019 14:07:18 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: <62721c5c4c3640848e1513d03965fefe@psi.ch>
Message-ID: An HTML attachment was scrubbed...

From alvise.dorigo at psi.ch Thu Dec 5 09:15:13 2019
From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI))
Date: Thu, 5 Dec 2019 09:15:13 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: References: <62721c5c4c3640848e1513d03965fefe@psi.ch>
Message-ID:

Thanks Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSDs in a dedicated pool and then merge them into the filesystem, which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster.

A

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre
Sent: Wednesday, December 4, 2019 3:07:18 PM
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

Hi Dorigo,

From the point of view of cluster administration I don't think it's a good idea to have a heterogeneous cluster. There are too many differences between V4 and V5, and most probably you won't take advantage of many of the enhancements of V5. One example is the new filesystem layout in V5, and at this moment the way to migrate the filesystem is to create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all those enhancements, but the truth is that when you face a performance issue that is only addressable with the new features, someone will raise the question of why we didn't consider that in the beginning.

Use this time to review whether it would be better to change the block size of your filesystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of the files in your current filesystem. Here's the link with additional information:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata

Different RAID configurations also bring unexpected performance behaviors, unless you are planning to create different pools and use ILM to manage the files in different pools.
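To make the dedicated-pool idea concrete, here is a rough sketch of how the GL2 capacity could be added to the existing filesystem as its own storage pool, assuming the cluster and version questions discussed above are resolved (the filesystem name "gpfs0", the pool name "essdata" and the NSD/server names are all placeholders):

  # ess_disks.stanza - put the new ESS NSDs into their own pool
  %nsd: nsd=ess_nsd_001 servers=gssio1-hs,gssio2-hs usage=dataOnly failureGroup=30 pool=essdata
  %nsd: nsd=ess_nsd_002 servers=gssio1-hs,gssio2-hs usage=dataOnly failureGroup=30 pool=essdata

  # add the NSDs (and with them the new pool) to the existing filesystem
  mmadddisk gpfs0 -F ess_disks.stanza

  # placement.pol - nothing is written to a non-system pool without a placement rule;
  # the simplest policy sends all newly created files to the new pool:
  #   RULE 'default' SET POOL 'essdata'
  mmchpolicy gpfs0 placement.pol

  mmdf gpfs0   # verify the new pool and its free space

More selective SET POOL rules (by fileset, file name or size) are possible if only part of the workload should land on the ESS pool.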
One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, Anderson Nobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone: 55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. 
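For completeness, the remote-mount alternative raised in point 1) above (and by Marc earlier in the thread) is the standard multi-cluster setup; a rough sketch, with the cluster names "esscluster" and "netappcluster", the device "essfs" and the mount point as placeholders, and with the public key files exchanged out of band:

  # on the owning (ESS) cluster
  mmauth genkey new
  mmauth update . -l AUTHONLY
  mmauth add netappcluster -k netappcluster_id_rsa.pub
  mmauth grant netappcluster -f essfs -a rw

  # on the accessing (NetApp/compute) cluster
  mmauth genkey new
  mmauth update . -l AUTHONLY
  mmremotecluster add esscluster -n gssio1-hs,gssio2-hs -k esscluster_id_rsa.pub
  mmremotefs add essfs -f essfs -C esscluster -T /gpfs/essfs
  mmmount essfs -a

This keeps the 4.2.3 and 5.0.2 clusters (and their support positions) independent, at the price of a second namespace rather than the single one Alvise is after.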
Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? 
Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Thu Dec 5 10:24:08 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Thu, 5 Dec 2019 10:24:08 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: Message-ID: One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: > > ? > Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. > > > > A > > From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre > Sent: Wednesday, December 4, 2019 3:07:18 PM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > Hi Dorigo, > > From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. > > Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata > > Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. 
> > One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: > https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning > > Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. > > Abra?os / Regards / Saludos, > > > Anderson Nobre > Power and Storage Consultant > IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services > > > > Phone: 55-19-2132-4317 > E-mail: anobre at br.ibm.com > > > ----- Original message ----- > From: "Dorigo Alvise (PSI)" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: "gpfsug-discuss at spectrumscale.org" > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Date: Wed, Dec 4, 2019 06:44 > > Thank you all for the answer. I try to recap my answers to your questions: > > > > the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. > I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes > All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand > NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN > > Alvise > From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne > Sent: Tuesday, December 3, 2019 8:30:31 PM > To: gpfsug-discuss at spectrumscale.org > Cc: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. 
Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. > > Lyle > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. > I would also suspect that the filesystem version on the ESS will be different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? 
> > Thank you very much,
> >
> > Alvise Dorigo
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From alvise.dorigo at psi.ch Thu Dec 5 14:50:01 2019
From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI))
Date: Thu, 5 Dec 2019 14:50:01 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
Message-ID: <15d9b14554534be7a7adca204ca3febd@psi.ch>

This is quite critical storage for data taking. It is not easy to update to GPFS 5 because in that facility we have very short shutdown periods. Thank you for pointing that out about 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while.

This merge seems quite tricky to implement, and I haven't seen consistent opinions among the people who kindly answered. According to Jan-Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Others suggest not to do it at all...

A

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger
Sent: Thursday, December 5, 2019 11:24:08 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

One additional question to ask is: what are your long term plans for the 4.2.3 Spectrum Scale cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)?
Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together?

Daniel

_________________________________________________________
Daniel Kidger
IBM Technical Sales Specialist
Spectrum Scale, Spectrum Discover and IBM Cloud Object Store
+44-(0)7818 522 266
daniel.kidger at uk.ibm.com

On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote:

Thanks Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSDs in a dedicated pool and then merge them into the filesystem, which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster.
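If the merge route is taken, the step after Jan-Frode's gssaddnode/gssgenclusterrgs sequence from earlier in the thread would be creating vdisks in the new recovery groups and turning them into NSDs for the dedicated pool. A rough sketch follows; the recovery group, vdisk, pool and size values are placeholders, and the stanza keywords should be checked against the Spectrum Scale RAID documentation for the installed level:

  # ess_vdisks.stanza - one data vdisk per recovery group, assigned to the new pool;
  # the blocksize shown is only an example and must match the existing filesystem's data block size
  %vdisk: vdiskName=rg1_data1 rg=rg_gssio1-hs da=DA1 blocksize=8m size=300t raidCode=8+2p diskUsage=dataOnly pool=essdata
  %vdisk: vdiskName=rg2_data1 rg=rg_gssio2-hs da=DA1 blocksize=8m size=300t raidCode=8+2p diskUsage=dataOnly pool=essdata

  mmcrvdisk -F ess_vdisks.stanza         # create the vdisks in the recovery groups
  mmcrnsd -F ess_vdisks.stanza           # define them as NSDs
  mmadddisk gpfs0 -F ess_vdisks.stanza   # add them to the existing filesystem

Note Jan-Frode's warning that gssgenclusterrgs was seen to restart GPFS on the whole cluster while changing configuration settings, so this is not a zero-downtime operation.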
A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? 
there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... 
- you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From cblack at nygenome.org Thu Dec 5 15:17:49 2019 From: cblack at nygenome.org (Christopher Black) Date: Thu, 5 Dec 2019 15:17:49 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> If you have two clusters that are hard to merge, but you are facing the need to provide capacity for more writes, another option to consider would be to set up a filesystem on GL2 with an AFM relationship to the filesystem on the netapp gpfs cluster for accessing older data and point clients to the new GL2 filesystem. Some downsides to that approach include introducing a dependency on afm (and potential performance reduction) to get to older data. There may also be complications depending on how your filesets are laid out. At some point when you have more capacity in 5.x cluster and/or are ready to move off netapp, you could use afm to prefetch all data into new filesystem. In theory, you could then (re)build nsd servers connected to netapp on 5.x and add them to new cluster and use them for a separate pool or keep them as a separate 5.x cluster. 
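As a rough sketch of the AFM arrangement Chris describes (assuming the new GL2 filesystem "newfs" is the cache and the existing NetApp filesystem is the home, reached over a remote mount at /gpfs/oldfs; the fileset name, paths and AFM mode are placeholders, and the attribute syntax should be checked against the AFM documentation for your level):

  # on the GL2 cluster, with the old filesystem remote-mounted at /gpfs/oldfs
  mmcrfileset newfs olddata --inode-space new -p afmTarget=gpfs:///gpfs/oldfs -p afmMode=lu
  mmlinkfileset newfs olddata -J /gpfs/newfs/olddata

  # later, before retiring the NetApp, pull everything into the cache,
  # e.g. from a file list produced by a policy scan
  mmafmctl newfs prefetch -j olddata --list-file /tmp/olddata_files.list

Clients would then mount /gpfs/newfs only, with older files fetched from the NetApp filesystem on first access.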
Best, Chris From: on behalf of "Dorigo Alvise (PSI)" Reply-To: gpfsug main discussion list Date: Thursday, December 5, 2019 at 9:50 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [https://images.youracclaim.com/images/c49300ae-d13e-4071-90f5-15f59d199c9e/IBM%2BVolunteers%2BGold%2Bv6.png] [https://images.youracclaim.com/images/f2539224-f951-46b4-b376-b88f21c2be98/IBM-Selling-Certification---Level-1.png] [https://images.youracclaim.com/images/ea52b12f-97ac-4e72-8d24-b0ced8054e7d/Storage%2BTechnical%2BV1.png] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. 
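(As a hedged aside, since the exact filehist invocation isn't shown here: a policy LIST rule can collect the same per-file size data; the device and file names below are placeholders.)

# sizes.pol -- list every file together with its size
cat > /tmp/sizes.pol <<'EOF'
RULE EXTERNAL LIST 'sizes' EXEC ''
RULE 'allfiles' LIST 'sizes' SHOW(VARCHAR(FILE_SIZE))
EOF

# -I defer keeps the generated file lists instead of acting on them.
mmapplypolicy fs1 -P /tmp/sizes.pol -I defer -f /tmp/fs1sizes

# /tmp/fs1sizes.list.sizes now holds one record per file, including the
# SHOW() value, which can be bucketed into a size histogram with awk/sort.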
Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 
3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? 
Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu Dec 5 15:59:07 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 5 Dec 2019 16:59:07 +0100 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: There?s still being maintained the ESS v5.2 release stream with gpfs v4.2.3.x for customer that are stuck on v4. You should probably install that on your ESS if you want to add it to your existing cluster. BTW: I think Tomer misunderstood the task a bit. It sounded like you needed to keep the existing recoverygroups from the ESS in the merge. That would probably be complicated.. Adding an empty ESS to an existing cluster should not be complicated ?- it?s just not properly documented anywhere I?m aware of. -jf tor. 5. des. 2019 kl. 15:50 skrev Dorigo Alvise (PSI) : > This is a quite critical storage for data taking. It is not easy to update > to GPFS5 because in that facility we have very short shutdown period. Thank > you for pointing out that 4.2.3. But the entire storage will be replaced in > the future; at the moment we just need to expand it to survive for a while. > > > This merge seems quite tricky to implement and I haven't seen consistent > opinions among the people that kindly answered. According to Jan Frode, > Kaplan and T. Perry it should be possible, in principle, to do the merge... > Other people suggest a remote mount, which is not a solution for my use > case. Other suggest not to do that... > > > A > > > > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Daniel Kidger < > daniel.kidger at uk.ibm.com> > *Sent:* Thursday, December 5, 2019 11:24:08 AM > > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > One additional question to ask is : what are your long term plans for the > 4.2.3 Spectrum Scake cluster? 
Do you expect to upgrade it to version 5.x > (hopefully before 4.2.3 goes out of support)? > > Also I assume your Netapp hardware is the standard Netapp block storage, > perhaps based on their standard 4U60 shelves daisy-chained together? > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum Discover and IBM Cloud Object Store > > + <+44-7818%20522%20266>44-(0)7818 522 266 <+44-7818%20522%20266> > daniel.kidger at uk.ibm.com > > > > > > > > On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: > > ? > > Thank Anderson for the material. In principle our idea was to scratch the > filesystem in the GL2, put its NSD on a dedicated pool and then merge it > into the Filesystem which would remain on V4. I do not want to create a FS > in the GL2 but use its space to expand the space of the other cluster. > > > A > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Anderson Ferreira > Nobre > *Sent:* Wednesday, December 4, 2019 3:07:18 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > Hi Dorigo, > > From point of view of cluster administration I don't think it's a good > idea to have hererogeneous cluster. There are too many diferences between > V4 and V5. And much probably many of enhancements of V5 you won't take > advantage. One example is the new filesystem layout in V5. And at this > moment the way to migrate the filesystem is create a new filesystem in V5 > with the new layout and migrate the data. That is inevitable. I have seen > clients saying that they don't need all that enhancements, but the true is > when you face performance issue that is only addressable with the new > features someone will raise the question why we didn't consider that in the > beginning. > > Use this time to review if it would be better to change the block size of > your fileystem. There's a script called filehist > in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your > current filesystem. Here's the link with additional information: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata > > Different RAID configurations also brings unexpected performance > behaviors. Unless you are planning create different pools and use ILM to > manage the files in different pools. > > One last thing, it's a good idea to follow the recommended levels for > Spectrum Scale: > > https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning > > Anyway, you are the system administrator, you know better than anyone how > complex is to manage this cluster. > > Abra?os / Regards / Saludos, > > > *AndersonNobre* > Power and Storage Consultant > IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services > > [image: community_general_lab_services] > > ------------------------------ > Phone:55-19-2132-4317 > E-mail: anobre at br.ibm.com [image: IBM] > > > > ----- Original message ----- > From: "Dorigo Alvise (PSI)" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: "gpfsug-discuss at spectrumscale.org" > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Wed, Dec 4, 2019 06:44 > > > Thank you all for the answer. 
I try to recap my answers to your questions: > > > > 1. the purpose is not to merge clusters "per se"; it is adding the > GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp > (which is running out of free space); of course I know well the > heterogeneity of this hypothetical system, so the GL2's NSD would go to a > special pool; but in the end I need a unique namespace for files. > 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 > cluster) because the former is in production and I cannot schedule long > downtimes > 3. All system have proper licensing of course; what does it means that > I could loose IBM support ? if the support is for a failing disk drive I do > not think so; if the support is for a "strange" behaviour of GPFS I can > probably understand > 4. NSD (in the NetApp system) are in their roles: what do you mean > exactly ? there's RAIDset attached to servers that are actually NSD > together with their attached LUN > > > Alvise > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne < > lgayne at us.ibm.com> > *Sent:* Tuesday, December 3, 2019 8:30:31 PM > *To:* gpfsug-discuss at spectrumscale.org > *Cc:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp *< --- Are these > NSD servers in their GPFS roles (where Scale "runs on top"*? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest > cluster, rather than simply allowing remote mount of the ESS servers by the > other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our > coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no > issues. Having them as separate file systems (NetApp vs. ESS) means no > concerns regarding varying architectures within the same fs serving or > failover scheme. Mixing such as compute nodes would mean some performance > differences across the different clients, but you haven't described your > compute (NSD client) details. > > Lyle > > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR > configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based > file system, then all affected IBM Spectrum Scale RAID objects will be > exported as well. 
This includes recovery groups, declustered arrays, > vdisks, and any other file systems that are based on these objects. For > more information about IBM Spectrum Scale RAID, see *IBM Spectrum > Scale RAID: Administration*." > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since > I would assume that the cluster config version is to high for the NetApp > based cluster. > I would also suspect that the filesystem version on the ESS will be > different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to > a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. > .. add the nodes to your existing cluster.. and then start configuring the > RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" < > gpfsug-discuss at spectrumscale.org> > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Thu Dec 5 15:58:39 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Thu, 5 Dec 2019 10:58:39 -0500 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> Message-ID: One tricky bit in this case is that ESS is always recommended to be its own standalone cluster, so MERGING it as a storage pool or pools into a cluster already containing NetApp storage wouldn't be generally recommended. Yet you cannot achieve the stated goal of a single fs image/mount point containing both types of storage that way. Perhaps our ESS folk should weigh in regarding possible routs? Lyle From: Christopher Black To: gpfsug main discussion list Date: 12/05/2019 10:53 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org If you have two clusters that are hard to merge, but you are facing the need to provide capacity for more writes, another option to consider would be to set up a filesystem on GL2 with an AFM relationship to the filesystem on the netapp gpfs cluster for accessing older data and point clients to the new GL2 filesystem. Some downsides to that approach include introducing a dependency on afm (and potential performance reduction) to get to older data. There may also be complications depending on how your filesets are laid out. At some point when you have more capacity in 5.x cluster and/or are ready to move off netapp, you could use afm to prefetch all data into new filesystem. In theory, you could then (re)build nsd servers connected to netapp on 5.x and add them to new cluster and use them for a separate pool or keep them as a separate 5.x cluster. Best, Chris From: on behalf of "Dorigo Alvise (PSI)" Reply-To: gpfsug main discussion list Date: Thursday, December 5, 2019 at 9:50 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? 
Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. 
I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. 
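Purely to illustrate the commands under discussion (device and file names are placeholders), the export/import path and the version checks would look roughly like this:

# On the ESS/GL2 cluster: export the filesystem together with its GNR
# objects (recovery groups, declustered arrays, vdisks) to a config file.
mmexportfs gl2fs -o /tmp/gl2fs.exportcfg

# Compare levels on both clusters first -- this is where the mismatch
# described above would show up.
mmlsconfig minReleaseLevel
mmlsfs gl2fs -V

# On the importing (NetApp-based) cluster, only if the levels allow it:
mmimportfs gl2fs -i /tmp/gl2fs.exportcfg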
Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=96nejPA0lJgbr9YP3LlaHsFUacfAy3QObHRl5SSeu6o&s=E1HEKXJOzKNDJan1TBYUlV1ckkhUjDiqUXT-x-p-QbI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From stockf at us.ibm.com Thu Dec 5 20:13:28 2019 From: stockf at us.ibm.com (Frederick Stock) Date: Thu, 5 Dec 2019 20:13:28 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch>, , Message-ID: An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Fri Dec 6 14:37:02 2019 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Fri, 6 Dec 2019 14:37:02 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Message-ID: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69557 bytes Desc: image001.png URL: From lists at esquad.de Sun Dec 8 17:22:43 2019 From: lists at esquad.de (Dieter Mosbach) Date: Sun, 8 Dec 2019 18:22:43 +0100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Am 06.12.2019 um 15:37 schrieb Carl Zetie - carlz at us.ibm.com:> > Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage > Clicking on "Try free developer edition" leads to a download of "Spectrum Scale 4.2.2 GUI Open Beta zip file" from 2015-08-22 ... Kind regards Dieter From alvise.dorigo at psi.ch Mon Dec 9 10:03:58 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Mon, 9 Dec 2019 10:03:58 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <15d9b14554534be7a7adca204ca3febd@psi.ch>, , , Message-ID: <2bad2631ebf44042b4004fb5c51eb7d0@psi.ch> I thank you all so much for the participation on this topic. 
We realized that what we wanted to do is not only "exotic", but also not officially supported and as far as I understand no one did something like that in production. We do not want to be the first with production systems. We decided that the least disruptive thing to do is remotely mount the GL2's filesystem into the NetApp/GPFS cluster and for a limited amount of time (less than 1 year) we are going to survive with different filesystem namespaces, managing users and groups with some symlink system or other high level solutions. Thank you very much, Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Frederick Stock Sent: Thursday, December 5, 2019 9:13:28 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster If you plan to replace all the storage then why did you choose to integrate a ESS GL2 rather than use another storage option? Perhaps you had already purchased the ESS system? Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Thu, Dec 5, 2019 2:57 PM This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [X] [X] [X] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: ? Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. 
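(For illustration only: a sketch of what that dedicated-pool step could look like once the GL2's NSDs exist in the same cluster. The NSD names, failure group, pool name and device name are invented, and on an ESS the NSDs would first have to be created from vdisks with mmcrvdisk/mmcrnsd.)

# Stanza assigning the GL2-backed NSDs to their own data pool.
cat > /tmp/gl2disks.stanza <<'EOF'
%nsd: nsd=gl2_data_001 usage=dataOnly failureGroup=20 pool=gl2pool
%nsd: nsd=gl2_data_002 usage=dataOnly failureGroup=20 pool=gl2pool
EOF

# Add the disks to the existing NetApp-backed filesystem.
mmadddisk netappfs -F /tmp/gl2disks.stanza

# Placement policy so that new files land in the new pool.
cat > /tmp/placement.pol <<'EOF'
RULE 'togl2' SET POOL 'gl2pool' LIMIT(95)
RULE 'default' SET POOL 'system'
EOF
mmchpolicy netappfs /tmp/placement.pol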
A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? 
there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... 
- you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Mon Dec 9 10:30:05 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Mon, 9 Dec 2019 10:30:05 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: , <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: An HTML attachment was scrubbed... URL: From nnasef at us.ibm.com Mon Dec 9 18:35:52 2019 From: nnasef at us.ibm.com (Nariman Nasef) Date: Mon, 9 Dec 2019 18:35:52 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-productionuse now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image.156777917997825.png Type: image/png Size: 15543 bytes Desc: not available URL: From Greg.Lehmann at csiro.au Tue Dec 10 02:09:31 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Tue, 10 Dec 2019 02:09:31 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Hi Carl, I am wondering if it is acceptable to use this as a test cluster. The main intention would be to try fixes, configuration changes, etc. on the test cluster before applying them to the production cluster. I guess the issue with this release is that it is the latest version. We really need a version that matches production, and to be able to apply fix packs, PTFs, etc. to it without breaching the license of the developer edition. 
> > Cheers, > > Greg Lehmann > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Carl Zetie - > carlz at us.ibm.com > Sent: Saturday, December 7, 2019 12:37 AM > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Scale Developer Edition free for non-production > use now available > > > Spectrum Scale Developer Edition is now available for free download on the > IBM Marketplace: > https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage > > This is full-function DME, no time restrictions, limited to 12TB per > cluster. NO production use or support! > > It?s likely that some people entirely new to Scale will find their way > here to the user group Slack channel and mailing list, so I thank you in > advance for making them welcome, and letting them know about the wealth of > online information for Scale, including the email address scale at us.ibm.com > > > Carl Zetie > Program Director > Offering Management > Spectrum Scale & Spectrum Discover > ---- > (919) 473 3318 ][ Research Triangle Park > carlz at us.ibm.com > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nigel.williams at tpac.org.au Tue Dec 10 03:07:31 2019 From: nigel.williams at tpac.org.au (Nigel Williams) Date: Tue, 10 Dec 2019 14:07:31 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On Tue, 10 Dec 2019 at 13:35, Jack Horrocks wrote: > To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. I clicked the option "I already have an IBMid", but using known working credentials [1] I get "Incorrect IBMid or password. Please try again!" [1] credentials work with support.ibm.com and IBM Cloud From Greg.Lehmann at csiro.au Tue Dec 10 03:11:30 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Tue, 10 Dec 2019 03:11:30 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: I am in Australia and downloaded it OK. Greg Lehmann Senior High Performance Data Specialist | CSIRO Greg.Lehmann at csiro.au | +61 7 3327 4137 | From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jack Horrocks Sent: Tuesday, December 10, 2019 12:35 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Hi Carl, To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. Thanks Jack. On Tue, 10 Dec 2019 at 13:16, Lehmann, Greg (IM&T, Pullenvale) > wrote: Hi Carl, I am wondering if it is acceptable to use this as a test cluster. The main intentions being to try fixes, configuration changes etc. on the test cluster before applying those to the production cluster. I guess the issue with this release, is that it is the latest version. We really need a version that matches production and be able to apply fixpacks, PTFs etc. to it without breaching the license of the developer edition. 
Cheers, Greg Lehmann -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Carl Zetie - carlz at us.ibm.com Sent: Saturday, December 7, 2019 12:37 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From nigel.williams at tpac.org.au Tue Dec 10 03:29:04 2019 From: nigel.williams at tpac.org.au (Nigel Williams) Date: Tue, 10 Dec 2019 14:29:04 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On Tue, 10 Dec 2019 at 14:19, Lehmann, Greg (IM&T, Pullenvale) wrote: > I am in Australia and downloaded it OK. I found a workaround which was to logon to an IBM service that worked with my credentials, and then switch back to the developer edition download and that allowed me to click through and start the download. From jmanuel.fuentes at upf.edu Tue Dec 10 09:45:19 2019 From: jmanuel.fuentes at upf.edu (FUENTES DIAZ, JUAN MANUEL) Date: Tue, 10 Dec 2019 10:45:19 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full Message-ID: Hi, Recently our group have migrated the Spectrum Scale from 4.2.3.9 to 5.0.3.0. According to the documentation to finish and consolidate the migration we should also update the config and the filesystems to the latest version with the commands above. Our cluster is a single cluster and all the nodes have the same version. My question is if we can update safely with those commands without compromising the data and metadata. Thanks Juanma -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergi.more at bsc.es Tue Dec 10 10:04:31 2019 From: sergi.more at bsc.es (Sergi More) Date: Tue, 10 Dec 2019 11:04:31 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: <48fb738b-203a-14cb-ef12-3a94f0cad199@bsc.es> Hi Juanma, Yes, it is safe. We have done it several times. AFAIK it doesn't actually change current data and metadata. Just states that filesystem is using latest version, so new features can be enabled. It is something to take into consideration specially when using multicluster, or mixing different gpfs versions, as these could potentially prevent older nodes to be able to mount the filesystems, but this doesn't seem to be your case. Best regards, Sergi. 
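For readers following this thread, a minimal sketch of the finalization steps discussed here, assuming a placeholder file system device name gpfs0 and that every node in the cluster already runs the new code level:

    mmlsconfig minReleaseLevel    # check the current cluster release level
    mmchconfig release=LATEST     # raise the cluster configuration to the installed code level
    mmlsfs gpfs0 -V               # show the file system's current and maximum format versions
    mmchfs gpfs0 -V full          # enable all features of the new format

As noted above, the mmchfs -V full step is the one-way part: once the file system format has been raised, nodes running older GPFS code can no longer mount that file system.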
On 10/12/2019 10:45, FUENTES DIAZ, JUAN MANUEL wrote: > Hi, > > Recently our group have migrated the Spectrum Scale from 4.2.3.9 to > 5.0.3.0. According to the documentation to finish and consolidate the > migration we should also update the config and the filesystems to the > latest version with the commands above. Our cluster is a single > cluster and all the nodes have the same version. My question is if we > can update safely with those commands without compromising the data > and metadata. > > Thanks Juanma > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- ------------------------------------------------------------------------ Sergi More Codina Operations - System administration Barcelona Supercomputing Center Centro Nacional de Supercomputacion WWW: http://www.bsc.es Tel: +34-93-405 42 27 e-mail: sergi.more at bsc.es Fax: +34-93-413 77 21 ------------------------------------------------------------------------ WARNING / LEGAL TEXT: This message is intended only for the use of the individual or entity to which it is addressed and may contain information which is privileged, confidential, proprietary, or exempt from disclosure under applicable law. If you are not the intended recipient or the person responsible for delivering the message to the intended recipient, you are strictly prohibited from disclosing, distributing, copying, or in any way using this message. If you have received this communication in error, please notify the sender and destroy and delete any copies you may have received. http://www.bsc.es/disclaimer -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3617 bytes Desc: S/MIME Cryptographic Signature URL:
From Renar.Grunenberg at huk-coburg.de Tue Dec 10 12:21:37 2019 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Tue, 10 Dec 2019 12:21:37 +0000 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: <9b774f33494d42ae989e3ad61d359d8c@huk-coburg.de> Hello Juanma, it is safe; the only change happens when you raise the filesystem version with mmchfs device -V full. As a tip, you should update to 5.0.3.3, which has been a very stable level for us. Regards Renar Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer, Sarah Rössler, Daniel Thomas. ________________________________ This information may contain confidential and/or privileged information.
If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von FUENTES DIAZ, JUAN MANUEL Gesendet: Dienstag, 10. Dezember 2019 10:45 An: gpfsug-discuss at spectrumscale.org Betreff: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full Hi, Recently our group have migrated the Spectrum Scale from 4.2.3.9 to 5.0.3.0. According to the documentation to finish and consolidate the migration we should also update the config and the filesystems to the latest version with the commands above. Our cluster is a single cluster and all the nodes have the same version. My question is if we can update safely with those commands without compromising the data and metadata. Thanks Juanma -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Tue Dec 10 14:48:35 2019 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Tue, 10 Dec 2019 14:48:35 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Message-ID: <5582929B-4515-4FFE-87BA-7CC4B5E71920@us.ibm.com> In response to various questions? Yes, the wrong file was originally linked. It should be fixed now. Yes, you can definitely use this edition in your test labs. We want to make it as easy as possible for you to experiment with new features, config changes, and releases so that you can adopt them with confidence, and discover problems in the lab not production. No, we do not plan at this time to backport Developer Edition to earlier Scale releases. If you are having problems with access to the download, please use the Contact links on the Marketplace page, including this one for IBMid issues: https://www.ibm.com/ibmid/myibm/help/us/helpdesk.html. The Scale dev and offering management team don?t have any control over the website or download process (other than providing the file itself for download) or the authentication process, and we?re just going to contact the same people via the same links? Regards Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1522411740] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69557 bytes Desc: image001.png URL: From jmanuel.fuentes at upf.edu Wed Dec 11 08:23:34 2019 From: jmanuel.fuentes at upf.edu (FUENTES DIAZ, JUAN MANUEL) Date: Wed, 11 Dec 2019 09:23:34 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: Hi, Thanks Sergi and Renar for the clear explanation. Juanma El mar., 10 dic. 
2019 15:50, escribi?: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: mmchconfig release=LATEST mmchfs FileSystem -V full > (Grunenberg, Renar) > 2. Re: Scale Developer Edition free for non-production use now > available (Carl Zetie - carlz at us.ibm.com) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 10 Dec 2019 12:21:37 +0000 > From: "Grunenberg, Renar" > To: "gpfsug-discuss at spectrumscale.org" > > Subject: Re: [gpfsug-discuss] mmchconfig release=LATEST mmchfs > FileSystem -V full > Message-ID: <9b774f33494d42ae989e3ad61d359d8c at huk-coburg.de> > Content-Type: text/plain; charset="utf-8" > > Hallo Juanma, > ist save, the only change are only happen if you change the filesystem > version with mmcfs device ?V full. > As a tip you schould update to 5.0.3.3 ist a very stable Level for us. > Regards Renar > > > Renar Grunenberg > Abteilung Informatik - Betrieb > > HUK-COBURG > Bahnhofsplatz > 96444 Coburg > Telefon: 09561 96-44110 > Telefax: 09561 96-44104 > E-Mail: Renar.Grunenberg at huk-coburg.de > Internet: www.huk.de > ________________________________ > HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter > Deutschlands a. G. in Coburg > Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 > Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg > Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. > Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav > Her?y, Dr. J?rg Rheinl?nder, Sarah R?ssler, Daniel Thomas. > ________________________________ > Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte > Informationen. > Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich > erhalten haben, > informieren Sie bitte sofort den Absender und vernichten Sie diese > Nachricht. > Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht > ist nicht gestattet. > > This information may contain confidential and/or privileged information. > If you are not the intended recipient (or have received this information > in error) please notify the > sender immediately and destroy this information. > Any unauthorized copying, disclosure or distribution of the material in > this information is strictly forbidden. > ________________________________ > Von: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> Im Auftrag von FUENTES DIAZ, > JUAN MANUEL > Gesendet: Dienstag, 10. Dezember 2019 10:45 > An: gpfsug-discuss at spectrumscale.org > Betreff: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V > full > > Hi, > > Recently our group have migrated the Spectrum Scale from 4.2.3.9 to > 5.0.3.0. According to the documentation to finish and consolidate the > migration we should also update the config and the filesystems to the > latest version with the commands above. Our cluster is a single cluster and > all the nodes have the same version. 
My question is if we can update safely > with those commands without compromising the data and metadata. > > Thanks Juanma > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/5a763fea/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Tue, 10 Dec 2019 14:48:35 +0000 > From: "Carl Zetie - carlz at us.ibm.com" > To: "gpfsug-discuss at spectrumscale.org" > > Subject: Re: [gpfsug-discuss] Scale Developer Edition free for > non-production use now available > Message-ID: <5582929B-4515-4FFE-87BA-7CC4B5E71920 at us.ibm.com> > Content-Type: text/plain; charset="utf-8" > > In response to various questions? > > > Yes, the wrong file was originally linked. It should be fixed now. > > Yes, you can definitely use this edition in your test labs. We want to > make it as easy as possible for you to experiment with new features, config > changes, and releases so that you can adopt them with confidence, and > discover problems in the lab not production. > > No, we do not plan at this time to backport Developer Edition to earlier > Scale releases. > > If you are having problems with access to the download, please use the > Contact links on the Marketplace page, including this one for IBMid issues: > https://www.ibm.com/ibmid/myibm/help/us/helpdesk.html. The Scale dev and > offering management team don?t have any control over the website or > download process (other than providing the file itself for download) or the > authentication process, and we?re just going to contact the same people via > the same links? > > > Regards > > > > > > Carl Zetie > Program Director > Offering Management > Spectrum Scale & Spectrum Discover > ---- > (919) 473 3318 ][ Research Triangle Park > carlz at us.ibm.com > > [signature_1522411740] > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/b732e2e2/attachment.html > > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: image001.png > Type: image/png > Size: 69557 bytes > Desc: image001.png > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/b732e2e2/attachment.png > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 95, Issue 17 > ********************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From heinrich.billich at id.ethz.ch Thu Dec 12 14:26:31 2019 From: heinrich.billich at id.ethz.ch (Billich Heinrich Rainer (ID SD)) Date: Thu, 12 Dec 2019 14:26:31 +0000 Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? Message-ID: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> Hello, I remember that a GNR/ESS recovery group can hold up to 64 vdisks, but I can?t find a citation to proof it. Now I wonder if 64 is the actual limit? And where is it documented? And did the limit change with versions? Thank you. I did spend quite some time searching the documentation, no luck .. maybe you know. We run ESS 5.3.4.1 and the recovery groups have current/allowable format version 5.0.0.0 Thank you, Heiner --? 
======================= Heinrich Billich ETH Zürich Informatikdienste Tel.: +41 44 632 72 56 heinrich.billich at id.ethz.ch ========================
From stefan.dietrich at desy.de Fri Dec 13 07:19:42 2019 From: stefan.dietrich at desy.de (Dietrich, Stefan) Date: Fri, 13 Dec 2019 08:19:42 +0100 (CET) Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? In-Reply-To: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> References: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> Message-ID: <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Hello Heiner, the 64 vdisk limit per RG is still present in the latest ESS docs: https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.5/com.ibm.spectrum.scale.raid.v5r04.adm.doc/bl1adv_vdisks.htm For the other questions, no idea. Regards, Stefan ----- Original Message ----- > From: "Billich Heinrich Rainer (ID SD)" > To: "gpfsug main discussion list" > Sent: Thursday, December 12, 2019 3:26:31 PM > Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? > Hello, > > I remember that a GNR/ESS recovery group can hold up to 64 vdisks, but I can't > find a citation to proof it. Now I wonder if 64 is the actual limit? And where > is it documented? And did the limit change with versions? Thank you. I did > spend quite some time searching the documentation, no luck .. maybe you know. > > We run ESS 5.3.4.1 and the recovery groups have current/allowable format > version 5.0.0.0 > > Thank you, > > Heiner > -- > ======================= > Heinrich Billich > ETH Zürich > Informatikdienste > Tel.: +41 44 632 72 56 > heinrich.billich at id.ethz.ch > ======================== > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From olaf.weiser at de.ibm.com Fri Dec 13 12:20:15 2019 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Fri, 13 Dec 2019 07:20:15 -0500 Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? In-Reply-To: <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> References: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Message-ID: An HTML attachment was scrubbed... URL:
From abeattie at au1.ibm.com Fri Dec 13 23:56:44 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Fri, 13 Dec 2019 23:56:44 +0000 Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? In-Reply-To: References: , <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch><68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Message-ID: An HTML attachment was scrubbed... URL:
From kkr at lbl.gov Mon Dec 16 19:05:02 2019 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Mon, 16 Dec 2019 11:05:02 -0800 Subject: [gpfsug-discuss] Planning US meeting for Spring 2020 Message-ID: <42F45E03-0AEC-422C-B3A9-4B5A21B1D8DF@lbl.gov> Hello, It's time already to plan for the next US event. We have a quick, seriously, should take order of 2 minutes, survey to capture your thoughts on location and date. It would help us greatly if you can please fill it out. Best wishes to all in the new year. -Kristy Please give us 2 minutes of your time here: https://forms.gle/NFk5q4djJWvmDurW7 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arc at b4restore.com Wed Dec 18 09:30:48 2019 From: arc at b4restore.com (=?iso-8859-1?Q?Andi_N=F8r_Christiansen?=) Date: Wed, 18 Dec 2019 09:30:48 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Message-ID: Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I'm not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns "fileset A1" which needs to be replicated to Site B "fileset A2" the from Site B to Site C "fileset A3". Site B: Owns "fileset B1" which needs to be replicated to Site C "fileset B2". Site C: Holds all data from Site A and B "fileset A3 & B2". We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don't know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B58E.35AA89D0] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Data migration and ILM blueprint - Andi V1.1.pdf Type: application/pdf Size: 236012 bytes Desc: Data migration and ILM blueprint - Andi V1.1.pdf URL: From jack at flametech.com.au Wed Dec 18 10:09:31 2019 From: jack at flametech.com.au (Jack Horrocks) Date: Wed, 18 Dec 2019 21:09:31 +1100 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: Hey Andi I'd be talking to the pixstor boys. Ngenea will do it for you without having to mess about too much. https://ww.pixitmedia.com They are down to earth and won't sell you stuff that doesn't work. Thanks Jack. On Wed, 18 Dec 2019 at 21:00, Andi N?r Christiansen wrote: > Hi, > > > > We are currently building a 3 site spectrum scale solution where data is > going to be generated at two different sites (Site A and Site B, Site C is > for archiving/backup) and then archived on site three. > > I have however not worked with AFM much so I was wondering if there is > someone who knows how to configure AFM to have all data generated in a > file-set automatically being copied to an offsite. > > GPFS AFM is not an option because of latency between sites so NFS AFM is > going to be tunneled between the site via WAN. > > > > As of now we have tried to set up AFM but it only transfers data from home > to cache when a prefetch is manually started or a file is being opened, we > need all files from home to go to cache as soon as it is generated or at > least after a little while. > > It does not need to be synchronous it just need to be automatic. > > > > I?m not sure if attachments will be available in this thread but I have > attached the concept of our design. > > > > Basically the setup is : > > > > Site A: > > Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the > from Site B to Site C ?fileset A3?. > > > > Site B: > > Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. > > > > Site C: > > Holds all data from Site A and B ?fileset A3 & B2?. > > > > We do not need any sites to have failover functionality only a copy of the > data from the two first sites. > > > > If anyone knows how to accomplish this I would be glad to know how! > > > > We have been looking into switching the home and cache site so that data > is generated at the cache sites which will trigger GPFS to transfer the > files to home as soon as possible but as I have little to no experience > with AFM I don?t know what happens to the cache site over time, does the > cache site empty itself after a while or does data stay there until > manually deleted? > > > > Thanks in advance! > > > > Best Regards > > > > > *Andi N?r Christiansen* > *IT Solution Specialist* > > Phone +45 87 81 37 39 > Mobile +45 23 89 59 75 > E-mail arc at b4restore.com > Web www.b4restore.com > > [image: B4Restore on LinkedIn] > [image: B4Restore on > Facebook] [image: B4Restore on Facebook] > [image: Sign up for our newsletter] > > > [image: Download Report] > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: not available URL: From TROPPENS at de.ibm.com Wed Dec 18 11:22:30 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 18 Dec 2019 12:22:30 +0100 Subject: [gpfsug-discuss] Chart decks of SC19 meeting Message-ID: Most chart decks of the SC19 meeting are now available: https://www.spectrumscale.org/presentations/ -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed Dec 18 12:04:11 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 12:04:11 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B58E.35AA89D0.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image003.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B58E.35AA89D0.png Type: image/png Size: 58433 bytes Desc: not available URL: From arc at b4restore.com Wed Dec 18 12:31:14 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 12:31:14 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: Hi Andrew, Alright, that partly confirms that there is no automatically sweep of data at cache site, right? I mean data will not be deleted automatically after a while in the cache fileset, where it is only metadata that stays? If data is kept until a manual deletion of data is requested on the cache site then this is the way to go for us..! Also, Site A has no connection to Site C so it needs to be connected as A to B and B to C.. 
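On the question just above about whether a cache cleans itself up: a hedged sketch, with placeholder names (cache file system fsA, AFM fileset filesetA1). Cached data normally stays in the cache until it is evicted; automatic eviction is only triggered when fileset block quotas are in place and auto-eviction is enabled for the fileset, otherwise eviction is a manual step, roughly:

    mmchfileset fsA filesetA1 -p afmEnableAutoEviction=no    # keep cached data until it is evicted manually
    mmafmctl fsA evict -j filesetA1 --safe-limit 1073741824  # example manual eviction; --safe-limit is a target usage in bytes

The exact parameters and defaults are worth confirming against the AFM documentation for the installed Scale release.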
This means: Site A holds live data from Site A, Site B holds live data from Site B and Replicated data from Site A, Site C holds replicated data from A and B. Does that make sense? The connection between A and B is LAN, about 500meters apart.. basically same site but different data centers and strictly separated because of security.. Site C is in another Country. Hence why we cant use GPFS AFM and also why we need to utilize WAN/NFS tunneled for AFM. Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Andrew Beattie Sendt: 18. december 2019 13:04 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, This is basic functionality that is part of Spectrum Scale there is no additional licensing or HSM costs required for this. Set Site C as your AFM Home, and have Site A and Site B both as Caches of Site C you can then Write Data in to Site A - have it stream to Site C, and call it on demand or Prefetch from Site C to Site B as required the Same is true of Site B, you can write Data into Site B, have it Stream to Site C, and call it on demand to site A if you want the data to be Multi Writer then you will need to make sure you use Independent writer as the AFM type https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM) Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 8:00 PM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! 
We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5A5.D4744A80] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From arc at b4restore.com Wed Dec 18 12:33:31 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 12:33:31 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: <8b0c31bf2c774ef7972a2f21f8b64e0a@B4RWEX01.internal.b4restore.com> Hi Jack, Thanks, but we are not looking to implement other products with spectrum scale. We are only searching for a solution to get Spectrum Scale to do the replication for us automatically. ? Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Jack Horrocks Sendt: 18. december 2019 11:10 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Hey Andi I'd be talking to the pixstor boys. Ngenea will do it for you without having to mess about too much. https://ww.pixitmedia.com They are down to earth and won't sell you stuff that doesn't work. Thanks Jack. On Wed, 18 Dec 2019 at 21:00, Andi N?r Christiansen > wrote: Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. 
As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5A7.BC39FB20] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From abeattie at au1.ibm.com Wed Dec 18 12:40:44 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 12:40:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B5A5.D4744A80.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image.image003.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B5A5.D4744A80.png Type: image/png Size: 58433 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Wed Dec 18 12:56:11 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 18 Dec 2019 12:56:11 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> On Wed, 2019-12-18 at 12:04 +0000, Andrew Beattie wrote: > Andi, > > This is basic functionality that is part of Spectrum Scale there is > no additional licensing or HSM costs required for this. > Noting only if you have the Extended Edition. Basic Spectrum Scale licensing does not include AFM :-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From arc at b4restore.com Wed Dec 18 12:59:21 2019 From: arc at b4restore.com (=?iso-8859-1?Q?Andi_N=F8r_Christiansen?=) Date: Wed, 18 Dec 2019 12:59:21 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> Message-ID: <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> To my knowledge basic AFM is part of all Spectrum scale licensing's but AFM-DR is only in Data Management and ECE? https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm /Andi -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Jonathan Buzzard Sendt: 18. december 2019 13:56 Til: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. On Wed, 2019-12-18 at 12:04 +0000, Andrew Beattie wrote: > Andi, > > This is basic functionality that is part of Spectrum Scale there is no > additional licensing or HSM costs required for this. > Noting only if you have the Extended Edition. Basic Spectrum Scale licensing does not include AFM :-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From arc at b4restore.com Wed Dec 18 13:00:24 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 13:00:24 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: , Message-ID: Alright, I will have to dig a little deeper with this then..Thanks!? Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Andrew Beattie Sendt: 18. 
december 2019 13:41 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi Daisy chained AFM caches are a bad idea -- while it might work -- when things go wrong they go really badly wrong. Based on the scenario your describing What I think your going to want to do is AFM-DR between Sites A and B and then look at a policy based copy (Scripted Rsync or somthing similar) from Site B to site C I don't believe at present we support an AFM-DR relationship between a cluster and a Cache which is doing AFM to its home -- You could put in a request with IBM development to see if they would support such an architecture - but i'm not sure its ever been tested. Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 10:31 PM Hi Andrew, Alright, that partly confirms that there is no automatically sweep of data at cache site, right? I mean data will not be deleted automatically after a while in the cache fileset, where it is only metadata that stays? If data is kept until a manual deletion of data is requested on the cache site then this is the way to go for us..! Also, Site A has no connection to Site C so it needs to be connected as A to B and B to C.. This means: Site A holds live data from Site A, Site B holds live data from Site B and Replicated data from Site A, Site C holds replicated data from A and B. Does that make sense? The connection between A and B is LAN, about 500meters apart.. basically same site but different data centers and strictly separated because of security.. Site C is in another Country. Hence why we cant use GPFS AFM and also why we need to utilize WAN/NFS tunneled for AFM. Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af Andrew Beattie Sendt: 18. december 2019 13:04 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, This is basic functionality that is part of Spectrum Scale there is no additional licensing or HSM costs required for this. Set Site C as your AFM Home, and have Site A and Site B both as Caches of Site C you can then Write Data in to Site A - have it stream to Site C, and call it on demand or Prefetch from Site C to Site B as required the Same is true of Site B, you can write Data into Site B, have it Stream to Site C, and call it on demand to site A if you want the data to be Multi Writer then you will need to make sure you use Independent writer as the AFM type https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM) Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. 
Date: Wed, Dec 18, 2019 8:00 PM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5AB.7DA09A50] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From jonathan.buzzard at strath.ac.uk Wed Dec 18 13:03:48 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 18 Dec 2019 13:03:48 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> Message-ID: <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> On Wed, 2019-12-18 at 12:59 +0000, Andi N?r Christiansen wrote: > To my knowledge basic AFM is part of all Spectrum scale licensing's > but AFM-DR is only in Data Management and ECE? > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm > Gees I can't keep up. That didn't used to be the case and possibly not if you are still on Express edition which looks to have been canned. I was sure our DSS-G says Express edition on the license. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From abeattie at au1.ibm.com Wed Dec 18 13:50:26 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 13:50:26 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk>, Message-ID: An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed Dec 18 13:50:47 2019 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 18 Dec 2019 08:50:47 -0500 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> Message-ID: I want to say that AFM was in GPFS before there were editions, and that everything that was pre-edition went into Standard Edition. That timing may not be exact, but Advanced edition has definitely never been required for ?regular? AFM. For the longest time the only ?Advanced? feature was encryption. Of course AFM-DR was eventually added to the Advanced Edition stream, which became DME with perTB licensing, which went to a GNR concert and spawned ECE from incessant complaining community feedback. :) I?m not aware that anyone ever *wanted* Express Edition, except the Linux on Z people, because that?s all they were allowed to have for a while. Liberty, ? Stephen > On Dec 18, 2019, at 8:03 AM, Jonathan Buzzard wrote: > > On Wed, 2019-12-18 at 12:59 +0000, Andi N?r Christiansen wrote: >> To my knowledge basic AFM is part of all Spectrum scale licensing's >> but AFM-DR is only in Data Management and ECE? >> >> https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm >> > > Gees I can't keep up. That didn't used to be the case and possibly not > if you are still on Express edition which looks to have been canned. I > was sure our DSS-G says Express edition on the license. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. 
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Wed Dec 18 14:33:45 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Wed, 18 Dec 2019 14:33:45 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B58E.35AA89D0.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image003.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B58E.35AA89D0.png Type: image/png Size: 58433 bytes Desc: not available URL: From vpuvvada at in.ibm.com Thu Dec 19 13:40:31 2019 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Thu, 19 Dec 2019 13:40:31 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: >Site A: >Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. a. Is this required because A cannot directly talk to C ? b. Is this network restriction ? c. Where is the data generated ? At filesetA1 or filesetA2 or filesetA3 or all the places ? >Site B: >Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. > >Site C: >Holds all data from Site A and B ?fileset A3 & B2?. Same as above, where is the data generated ? >We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to >the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? AFM single writer mode or independent-writer mode can be used to replicate the data from the cache to home automatically. a. Approximately how many files/data can each cache(filesetA1, filesetA2 and fileesetB1) hold ? b. After the archival at the site C, will the data get deleted from the filesets at C? ~Venkat (vpuvvada at in.ibm.com) From: Lyle Gayne/Poughkeepsie/IBM To: gpfsug-discuss at spectrumscale.org, Venkateswara R Puvvada/India/IBM at IBMIN Date: 12/18/2019 08:03 PM Subject: Re: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Adding Venkat so he can chime in. 
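To make the independent-writer suggestion above a little more concrete: a minimal sketch, in which every cluster name, fileset name, path and NFS export is purely illustrative and would need to be adapted to the real environment, might look like this. The cache fileset lives on the data-generating site and points at an NFS export of the corresponding home fileset on the receiving site:

# on the home (receiving) cluster, enable the exported path for AFM
mmafmconfig enable /gpfs/fsC/archiveA

# on the cache (data-generating) cluster, create and link an independent-writer fileset
mmcrfileset fsA filesetA1 -p afmmode=iw,afmtarget=nfs://homeserver/gpfs/fsC/archiveA --inode-space new
mmlinkfileset fsA filesetA1 -J /gpfs/fsA/filesetA1

# check the fileset state and the outstanding replication queue
mmafmctl fsA getstate -j filesetA1

With the data generated inside the cache fileset, new files are queued and pushed to home asynchronously without any manual prefetch; the cached copies are not evicted automatically unless cache eviction is explicitly configured, so the data stays at the generating site until it is deleted there.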
Lyle ----- Original message ----- From: "Andi N?r Christiansen" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 5:24 AM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=eqWwibkj7RzAd4hcjuMXLC8a3bAQwHQNAlIm-a5WEOo&s=dWoFLlPqh2RDoLkJVIY0tM-wTVCtrhCqT0oZL4UkmZ8&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/png Size: 58433 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 17:22:20 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 17:22:20 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default Message-ID: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School -------------- next part -------------- An HTML attachment was scrubbed... URL: From kywang at us.ibm.com Thu Dec 19 19:06:15 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 14:06:15 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Message-ID: It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. 
A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=Podv2DTbd8lR1FO2ZYZ8x8zq9iYA04zPm4GJnVZqlOw&s=1H_Rhmne_XoS3KS5pOD1RiBL8FQBXV4VdCkEL4KD11E&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 19:18:36 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 19:18:36 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Message-ID: <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. 
fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: From kywang at us.ibm.com Thu Dec 19 19:25:01 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 14:25:01 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Message-ID: >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). 
I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? 
Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=Nbr-ds_gTHq88IqMt3BvuP7-CagDQwEk2Bax6qK4iZo&s=D1aDuwRRm4mrIjdMBLSYo28KEflXV7WLywFw7puhlFU&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16683622.gif Type: image/gif Size: 106 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 19:28:33 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 19:28:33 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Message-ID: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho]"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? 
option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 107 bytes Desc: image002.gif URL: From kywang at us.ibm.com Thu Dec 19 20:56:05 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 15:56:05 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu><794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: Razvan, mmedquota -d -u fs:fset: -d Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command. This option will assign the default quota to the user. The quota entry type will change from "e" to "d_fset". You may need to play a little bit with your system to get the result as you can have default quota per file system set and default quota per fileset enabled. An exemple to illustrate User pfs004 in filesystem fs9 and fileset fset7 has explicit quota set: # mmrepquota -u -v fs9 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none e <=== explicit # mmlsquota -d fs9:fset7 Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType fs9 fset7 USR 102400 1048576 | 10000 0 default on <=== default quota limits for fs9:fset7, the default fs9 fset7 GRP 0 0 | 0 0 i # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none <=== explicit # mmedquota -d -u pfs004 fs9:fset7 <=== run mmedquota -d -u to get default limits # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none <=== takes the default value # mmrepquota -u -v fs9:fset7 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none d_fset <=== now user pfs004 in fset7 takes the default limits # ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:28 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. 
Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? 
From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=ztpfU2VfH5aJ9mmrGarTov3Rf4RZyt417t0UZAdESOg&s=AY4A_7BxD_jvDV7p9tmwCj6wTIZrD9R6ZEXTOLgZDDI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16898169.gif Type: image/gif Size: 106 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16513130.gif Type: image/gif Size: 107 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 21:47:21 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 21:47:21 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: Many thanks ? that?s exactly what I?m looking for. Unfortunately I have an error when attempting to run command : First the background: [root at storinator ~]# mmrepquota -u -v --block-size auto gsb:home |grep rp2927 rp2927 home USR 8.934G 10G 20G 0 none | 86355 1048576 3145728 0 none e [root at storinator ~]# mmlsquota -d --block-size auto gsb:home Default Block Limits | Default File Limits Filesystem Fileset type quota limit | quota limit entryType gsb home USR 20G 30G | 1048576 3145728 default on gsb home GRP 0 0 | 0 0 i And now the most interesting part: [root at storinator ~]# mmedquota -d -u rp2927 gsb:home gsb USR default quota is off Attention: In file system gsb (fileset home), block soft limit (10485760) for user rp2927 is too small. Suggest setting it higher than 26214400. 
Attention: In file system gsb (fileset home), block hard limit (20971520) for user rp2927 is too small. Suggest setting it higher than 26214400. gsb:home is not valid user A little bit more background, maybe of help? [root at storinator ~]# mmlsquota -d gsb Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType gsb root USR 0 0 | 0 0 i gsb root GRP 0 0 | 0 0 i gsb work USR 0 0 | 0 0 i gsb work GRP 0 0 | 0 0 i gsb misc USR 0 0 | 0 0 i gsb misc GRP 0 0 | 0 0 i gsb home USR 20971520 31457280 | 1048576 3145728 default on gsb home GRP 0 0 | 0 0 i gsb shared USR 0 0 | 0 0 i gsb shared GRP 20971520 31457280 | 1048576 3145728 default on [root at storinator ~]# mmlsfs gsb flag value description ------------------- ------------------------ ----------------------------------- -f 8192 Minimum fragment (subblock) size in bytes -i 4096 Inode size in bytes -I 32768 Indirect block size in bytes -m 2 Default number of metadata replicas -M 3 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j scatter Block allocation type -D nfs4 File locking semantics in effect -k nfs4 ACL semantics in effect -n 100 Estimated number of nodes that will mount file system -B 1048576 Block size -Q user;group;fileset Quotas accounting enabled user;group;fileset Quotas enforced none Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement --filesetdf Yes Fileset df enabled? -V 21.00 (5.0.3.0) File system version --create-time Fri Aug 30 16:25:29 2019 File system creation time -z No Is DMAPI enabled? -L 33554432 Logfile size -E Yes Exact mtime mount option -S relatime Suppress atime mount option -K whenpossible Strict replica allocation option --fastea Yes Fast external attributes enabled? --encryption No Encryption enabled? --inode-limit 105906176 Maximum number of inodes in all inode spaces --log-replicas 0 Number of log replicas --is4KAligned Yes is4KAligned? --rapid-repair Yes rapidRepair enabled? --write-cache-threshold 0 HAWC Threshold (max 65536) --subblocks-per-full-block 128 Number of subblocks per full block -P system;Main01 Disk storage pools in file system --file-audit-log No File Audit Logging enabled? --maintenance-mode No Maintenance Mode enabled? -d meta_01;meta_02;meta_03;data_1A;data_1B;data_2A;data_2B;data_3A;data_3B Disks in file system -A yes Automatic mount option -o none Additional mount options -T /gpfs/cesRoot/gsb Default mount point --mount-priority 2 Mount priority Any ideas? Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 3:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Razvan, mmedquota -d -u fs:fset: -d Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command. This option will assign the default quota to the user. The quota entry type will change from "e" to "d_fset". You may need to play a little bit with your system to get the result as you can have default quota per file system set and default quota per fileset enabled. 
An exemple to illustrate User pfs004 in filesystem fs9 and fileset fset7 has explicit quota set: # mmrepquota -u -v fs9 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none e <=== explicit # mmlsquota -d fs9:fset7 Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType fs9 fset7 USR 102400 1048576 | 10000 0 default on <=== default quota limits for fs9:fset7, the default fs9 fset7 GRP 0 0 | 0 0 i # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none <=== explicit # mmedquota -d -u pfs004 fs9:fset7 <=== run mmedquota -d -u to get default limits # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none <=== takes the default value # mmrepquota -u -v fs9:fset7 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none d_fset <=== now user pfs004 in fset7 takes the default limits # ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:28:51 PM---I see. May I ask one follow-up question, please: what]"Popescu, Razvan" ---12/19/2019 02:28:51 PM---I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:28 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho]"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! 
For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? 
Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 107 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 108 bytes Desc: image003.gif URL: From jonathan.buzzard at strath.ac.uk Thu Dec 19 21:56:28 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 19 Dec 2019 21:56:28 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: <5ffb8059-bd51-29a5-78c5-19c86dcb6dc7@strath.ac.uk> On 19/12/2019 19:28, Popescu, Razvan wrote: > I see. > > May I ask one follow-up question, please:?? what is? ?mmedquota -d -u > ?? ?supposed to do in this case? > > Really appreciate your assistance. In the past (last time I did this was on version 3.2 or 3.3) if you used mmsetquota and set a users quota to 0 then as far as GPFS was concerned it was like you had never set a quota. This was notionally before per fileset quotas where a thing. In reality on my test cluster you could enable them and set them and they seemed to work as would be expected when I tested it. Never used it in production on those versions because well that would be dumb, and never had to remove a quota completely since. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From lavila at illinois.edu Fri Dec 20 15:32:54 2019 From: lavila at illinois.edu (Avila, Leandro) Date: Fri, 20 Dec 2019 15:32:54 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 Message-ID: Good morning, I am looking for additional information related to CVE-2019-4715 to try to determine the applicability and impact of this vulnerability in our environment. https://exchange.xforce.ibmcloud.com/vulnerabilities/172093 and https://www.ibm.com/support/pages/node/1118913 For the documents above it is not very clear if the issue affects mmfsd or just one of the protocol components (NFS,SMB). Thank you very much for your attention and help -- ==================== Leandro Avila | NCSA From Stephan.Peinkofer at lrz.de Fri Dec 20 15:58:12 2019 From: Stephan.Peinkofer at lrz.de (Peinkofer, Stephan) Date: Fri, 20 Dec 2019 15:58:12 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 In-Reply-To: References: Message-ID: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> Dear Leonardo, I had the same issue as you today. 
After some time (after I already opened a case for this) I noticed that they referenced the APAR numbers in the second link you posted. A google search for this apar numbers gives this here https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901 So seems to be SMB related. Best, Stephan Peinkofer Von meinem iPhone gesendet Am 20.12.2019 um 16:33 schrieb Avila, Leandro : ?Good morning, I am looking for additional information related to CVE-2019-4715 to try to determine the applicability and impact of this vulnerability in our environment. https://exchange.xforce.ibmcloud.com/vulnerabilities/172093 and https://www.ibm.com/support/pages/node/1118913 For the documents above it is not very clear if the issue affects mmfsd or just one of the protocol components (NFS,SMB). Thank you very much for your attention and help -- ==================== Leandro Avila | NCSA _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From lavila at illinois.edu Fri Dec 20 17:14:35 2019 From: lavila at illinois.edu (Avila, Leandro) Date: Fri, 20 Dec 2019 17:14:35 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 In-Reply-To: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> References: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> Message-ID: <7efe86e566f610a31e178e0333b65144e5734bc3.camel@illinois.edu> On Fri, 2019-12-20 at 15:58 +0000, Peinkofer, Stephan wrote: > Dear Leonardo, > > I had the same issue as you today. After some time (after I already > opened a case for this) I noticed that they referenced the APAR > numbers in the second link you posted. > > A google search for this apar numbers gives this here > https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901 > > So seems to be SMB related. > > Best, > Stephan Peinkofer > Stephan, Thank you very much for pointing me in the right direction. I appreciate it. From kevin.doyle at manchester.ac.uk Fri Dec 27 11:45:14 2019 From: kevin.doyle at manchester.ac.uk (Kevin Doyle) Date: Fri, 27 Dec 2019 11:45:14 +0000 Subject: [gpfsug-discuss] Question about Policies Message-ID: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 16051 bytes Desc: image001.png URL: From YARD at il.ibm.com Fri Dec 27 12:55:06 2019 From: YARD at il.ibm.com (Yaron Daniel) Date: Fri, 27 Dec 2019 14:55:06 +0200 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=Wg3EAA9O8sH3c_zHS2h8miVpSosqtXulMRqXMRwSMe0&s=TdemXXkFD1mjpxNFg7Y_DYYPpJXZk7BmQcW9hWQDLso&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 4338 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 16051 bytes Desc: not available URL: From kevin.doyle at manchester.ac.uk Fri Dec 27 13:56:29 2019 From: kevin.doyle at manchester.ac.uk (Kevin Doyle) Date: Fri, 27 Dec 2019 13:56:29 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1131538866] From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:_1_10392F3C103929880046F589C22584DD] Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel [cid:_2_103C9B0C103C96FC0046F589C22584DD] [cid:_2_103C9D14103C96FC0046F589C22584DD] [cid:_2_103C9F1C103C96FC0046F589C22584DD] [cid:_2_103CA124103C96FC0046F589C22584DD] [cid:_2_103CA32C103C96FC0046F589C22584DD] [cid:_2_103CA534103C96FC0046F589C22584DD] [cid:_2_103CA73C103C96FC0046F589C22584DD] [cid:_2_103CA944103C96FC0046F589C22584DD] From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? 
Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16051 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 1115 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3848 bytes Desc: image003.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 4267 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 3748 bytes Desc: image005.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 3794 bytes Desc: image006.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.jpg Type: image/jpeg Size: 4302 bytes Desc: image007.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 3740 bytes Desc: image008.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image009.jpg Type: image/jpeg Size: 3856 bytes Desc: image009.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image010.jpg Type: image/jpeg Size: 4339 bytes Desc: image010.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image011.png Type: image/png Size: 16052 bytes Desc: image011.png URL: From YARD at il.ibm.com Fri Dec 27 14:11:40 2019 From: YARD at il.ibm.com (Yaron Daniel) Date: Fri, 27 Dec 2019 14:11:40 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Message-ID: Hi As you said it migrate between different pools (ILM/External - Tape) - so in case you need to move directory to different location - you will have to use the OS mv command. From what i remember there is no directory policy for the same pool. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: gpfsug main discussion list Date: 27/12/2019 15:57 Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? 
Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=26aKLyF8ZP9iUfCT0RV9tvO89IrBmJUY3xt0AJrp--E&s=beWwNqFpTlTds5Dir2ZVmRiNt9kLQkFZC70Mp7UqFRY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4338 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 16051 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1115 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3848 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4267 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3748 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3794 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4302 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3740 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3856 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4339 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 16052 bytes Desc: not available URL: From makaplan at us.ibm.com Fri Dec 27 14:19:43 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 27 Dec 2019 09:19:43 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Message-ID: The MIGRATE rule is for moving files from one pool to another, without changing the pathname or any attributes, except the storage devices holding the data blocks of the file. Also can be use for "external" pools to migrate to an HSM system. "moving" from one folder to another is a different concept. 
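As a sketch of what that pool-to-pool migration looks like in practice (the pool names 'system' and 'capacity' and the 30-day threshold here are purely illustrative, not taken from the original question, which has only a single pool):

   RULE 'old2capacity'
     MIGRATE FROM POOL 'system'
     TO POOL 'capacity'
     WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) > INTERVAL '30' DAYS

A rule like this can be dry-run with something like "mmapplypolicy fsname -P policy.rules -I test" (the file system name is a placeholder), which reports the candidate files and the expected data movement without actually migrating anything.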
The mmapplypolicy LIST and EXTERNAL LIST rules can be used to find files older than 30 days and then do any operations you like on them, but you have to write a script to do those operations. See also -- the "Information Lifecycle Management" (ILM) chapter of the SS Admin Guide AND/OR for an easy to use parallel function equivalent to the classic Unix pipline `find ... | xargs ... ` Try the `mmfind ... -xargs ... ` from the samples/ilm directory. [root@~/.../samples/ilm]$./mmfind Usage: ./mmfind [mmfind args] { | -inputFileList f -policyFile f } mmfind args: [-polFlags 'flag 1 flag 2 ...'] [-logLvl {0|1|2}] [-logFile f] [-saveTmpFiles] [-fs fsName] [-mmapplypolicyOutputFile f] find invocation -- logic: ! ( ) -a -o /path1 [/path2 ...] [expression] -atime N -ctime N -mtime N -true -false -perm mode -iname PATTERN -name PATTERN -path PATTERN -ipath PATTERN -uid N -user NAME -gid N -group NAME -nouser -nogroup -newer FILE -older FILE -mindepth LEVEL -maxdepth LEVEL -links N -size N -empty -type [bcdpflsD] -inum N -exec COMMAND -execdir COMMAND -ea NAME -eaWithValue NAME===VALUE -setEA NAME[===VALUE] -deleteEA NAME -gpfsImmut -gpfsAppOnly -gpfsEnc -gpfsPool POOL_NAME -gpfsMigrate poolFrom,poolTo -gpfsSetPool poolTo -gpfsCompress -gpfsUncompress -gpfsSetRep m,r -gpfsWeight NumericExpr -ls -fls -print -fprint -print0 -fprint0 -exclude PATH -xargs [-L maxlines] [-I rplstr] COMMAND Give -h for a more verbose usage message From: Kevin Doyle To: gpfsug main discussion list Date: 12/27/2019 08:57 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk /Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1131538866 From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd cid:_1_10392F3C103929880046F589C22584DD Storage Architect ? 
IL Lab Petach Tiqva, 49527 Services (Storage) IBM Global Markets, Systems HW Israel Sales Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel cid:_2_103C9B0C103C96FC0046F589C22584DD cid:_2_103C9D14103C96FC0046F589C22584DD cid:_2_103C9F1C103C96FC0046F589C22584DD cid:_2_103CA124103C96FC0046F589C22584DD cid:_2_103CA32C103C96FC0046F589C22584DD cid:_2_103CA534103C96FC0046F589C22584DD cid:_2_103CA73C103C96FC0046F589C22584DD cid:_2_103CA944103C96FC0046F589C22584DD From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk /Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=w3zKI5uOkIxqfgnHm53Al4Q3apC0htUiiuFcMnh2U9s&s=rkD5iWzjhbTA_9kEHL9Laggb4NGjiYS4qoM8yXbAoyM&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16547711.gif Type: image/gif Size: 16051 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16942257.gif Type: image/gif Size: 1115 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16264175.jpg Type: image/jpeg Size: 3848 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16010102.jpg Type: image/jpeg Size: 4267 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16098719.jpg Type: image/jpeg Size: 3748 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16043707.jpg Type: image/jpeg Size: 3794 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 16546771.jpg Type: image/jpeg Size: 4302 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16875824.jpg Type: image/jpeg Size: 3740 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16069185.jpg Type: image/jpeg Size: 3856 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16639470.jpg Type: image/jpeg Size: 4339 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16809363.gif Type: image/gif Size: 16052 bytes Desc: not available URL: From david_johnson at brown.edu Fri Dec 27 14:20:13 2019 From: david_johnson at brown.edu (david_johnson at brown.edu) Date: Fri, 27 Dec 2019 09:20:13 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: Message-ID: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> You would want to look for examples of external scripts that work on the result of running the policy engine in listing mode. The one issue that might need some attention is the way that gpfs quotes unprintable characters in the pathname. So the policy engine generates the list and your external script does the moving. -- ddj Dave Johnson > On Dec 27, 2019, at 9:11 AM, Yaron Daniel wrote: > > ?Hi > > As you said it migrate between different pools (ILM/External - Tape) - so in case you need to move directory to different location - you will have to use the OS mv command. > From what i remember there is no directory policy for the same pool. > > > > Regards > > > > > Yaron Daniel 94 Em Ha'Moshavot Rd > > Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 > IBM Global Markets, Systems HW Sales Israel > > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > Webex: https://ibm.webex.com/meet/yard > IBM Israel > > > > > > > > > > > > > > > > > From: Kevin Doyle > To: gpfsug main discussion list > Date: 27/12/2019 15:57 > Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > Hi > > Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? > > Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool > How do I specify the folder to move it to which needs to be different from the current location. 
> > Thanks > Kevin > > RULE['RuleName'] [WHENTimeBooleanExpression] > MIGRATE [COMPRESS({'yes' | 'no' | 'lz4' | 'z'})] > [FROM POOL'FromPoolName'] > [THRESHOLD(HighPercentage[,LowPercentage[,PremigratePercentage]])] > [WEIGHT(WeightExpression)] > TO POOL'ToPoolName' > [LIMIT(OccupancyPercentage)] > [REPLICATE(DataReplication)] > [FOR FILESET('FilesetName'[,'FilesetName']...)] > [SHOW(['String'] SqlExpression)] > [SIZE(numeric-sql-expression)] > [ACTION(SqlExpression)] > [WHERESqlExpression] > > > Kevin Doyle | Linux Administrator, Scientific Computing > Cancer Research UK, Manchester Institute > The University of Manchester > Room 13G40, Alderley Park, Macclesfield SK10 4TG > Mobile: 07554 223480 > Email: Kevin.Doyle at manchester.ac.uk > > > > > > From: on behalf of Yaron Daniel > Reply-To: gpfsug main discussion list > Date: Friday, 27 December 2019 at 12:55 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Question about Policies > > Hi > > U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. > > > > Regards > > > > Yaron Daniel 94 Em Ha'Moshavot Rd > > Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 > IBM Global Markets, Systems HW Sales Israel > > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > Webex: https://ibm.webex.com/meet/yard > IBM Israel > > > > > > > > > > > > > > > > > From: Kevin Doyle > To: "gpfsug-discuss at spectrumscale.org" > Date: 27/12/2019 13:45 > Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > Hi > > I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will > Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. > I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? > > Many thanks > Kevin > > > Kevin Doyle | Linux Administrator, Scientific Computing > Cancer Research UK, Manchester Institute > The University of Manchester > Room 13G40, Alderley Park, Macclesfield SK10 4TG > Mobile: 07554 223480 > Email: Kevin.Doyle at manchester.ac.uk > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri Dec 27 14:27:43 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 27 Dec 2019 14:27:43 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> References: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>, <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: An HTML attachment was scrubbed... 
URL: From daniel.kidger at uk.ibm.com Fri Dec 27 14:30:46 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 27 Dec 2019 14:30:46 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Image.image011.png at 01D5BCBD.7015DEE0.png Type: image/png Size: 16052 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Sat Dec 28 15:17:05 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Sat, 28 Dec 2019 15:17:05 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: On 27/12/2019 14:20, david_johnson at brown.edu wrote: > You would want to look for examples of external scripts that work on the > result of running the policy engine in listing mode. ?The one issue that > might need some attention is the way that gpfs quotes unprintable > characters in the pathname. So the policy engine generates the list and > your external script does the moving. > In my experience a good starting point would be to scan the list of files from the policy engine and separate the files out into "normal"; that is files using basic ASCII and no special characters and the rest also known as the "wacky pile". Given that you are UK based it is not unreasonable to expect all path and file names to be in English. There might (and if not probably should) be an institutional policy mandating it. Not much use if a researcher saves everything in Greek then gets knocked over by a bus and person picking up the work is Spanish for example. Hopefully the "wacky pile" is small, however expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters to name but a few. Depending on the amount of data in the "wacky" pile you might just want to forget about moving them, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time trying to deal with them in scripts. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From Paul.Sanchez at deshaw.com Sat Dec 28 17:07:15 2019 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Sat, 28 Dec 2019 17:07:15 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: <9ce3971faea5493daa133b08e4a0113e@deshaw.com> If you needed to preserve the "wackiness" of the original file and pathnames (and I'm assuming you need to preserve the pathnames in order to avoid collisions between migrated files from different directories which have the same basename, and to allow the files to found/recovered again later, etc) then you can use Marc's `mmfind` suggestion, coupled with the -print0 argument to produce a null-delimited file list which could be coupled with an "xargs -0" pipeline or "rsync -0" to do most of the work. Test everything with a "dry-run" mode which reported what it would do, but without doing it, and one which copied without deleting, to help expose bugs in the process before destroying your data. If the migration doesn't cross between independent filesets, then file migrations could be performed using "mv" without any actual data copying. (For that matter, it could also be done in two stages by hard-linking, then unlinking.) But I think that there are other potential problems involved, even before considering things like path escaping or fileset boundaries... 
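A rough sketch of that null-delimited pipeline (the paths are invented, the find-style -mtime +30 selection is assumed to behave as in the mmfind usage text quoted earlier, and nothing is deleted):

   # dry run: just list what would be selected
   mmfind /gpfs/fs1/results -type f -mtime +30 -print0 | xargs -0 -r ls -ld

   # copy the selection, recreating the directory hierarchy under the target
   # (files land under /gpfs/fs1/archive/gpfs/fs1/results/..., mirroring the source paths)
   mmfind /gpfs/fs1/results -type f -mtime +30 -print0 | \
       rsync -a --from0 --files-from=- / /gpfs/fs1/archive/

Removing the originals afterwards would be a separate, deliberate step once the copy has been verified.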
If everything is predicated on the age of a file, you will need to create the missing directory hierarchy in the target dir structure for files which need to be "migrated". If files in a directory vary in age, you may move some files but leave others alone (until they become old enough to migrate) creating incomplete and probably unusable versions at both the source and target. What if a user recreates the missing files as they disappear? As they later age, do you overwrite the files on the target? What if a directory name is later changed to a filename or vice-versa? Will you ever need to "restore" these structures? If so, will you merge these back in to the original source if both non-empty source and target dirs exist? Should we wait for an entire dir hierarchy to age out and then archive it atomically? (We would want a way to know where project dir boundaries are.) I would urge you to think about how complex this might actually get before start performing surgery within data sets. I would be inclined to challenge the original requirements to ensure that what you are able to accomplish matches up with the real goals without creating a raft of new operational problems or loss of work product. Depending on the original goal, it may be possible to do this (more safely) with snapshots or tarballs. -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: Saturday, December 28, 2019 10:17 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Question about Policies This message was sent by an external party. On 27/12/2019 14:20, david_johnson at brown.edu wrote: > You would want to look for examples of external scripts that work on > the result of running the policy engine in listing mode. The one > issue that might need some attention is the way that gpfs quotes > unprintable characters in the pathname. So the policy engine generates > the list and your external script does the moving. > In my experience a good starting point would be to scan the list of files from the policy engine and separate the files out into "normal"; that is files using basic ASCII and no special characters and the rest also known as the "wacky pile". Given that you are UK based it is not unreasonable to expect all path and file names to be in English. There might (and if not probably should) be an institutional policy mandating it. Not much use if a researcher saves everything in Greek then gets knocked over by a bus and person picking up the work is Spanish for example. Hopefully the "wacky pile" is small, however expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters to name but a few. Depending on the amount of data in the "wacky" pile you might just want to forget about moving them, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time trying to deal with them in scripts. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Sat Dec 28 19:49:01 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Sat, 28 Dec 2019 14:49:01 -0500 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file lists perfectly. No need to worry about whitespaces and so forth. Give it a look-see and a try -- marc of GPFS - From: Jonathan Buzzard To: "gpfsug-discuss at spectrumscale.org" Date: 12/28/2019 10:17 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org On 27/12/2019 14:20, david_johnson at brown.edu wrote: > You would want to look for examples of external scripts that work on the > result of running the policy engine in listing mode. ?The one issue that > might need some attention is the way that gpfs quotes unprintable > characters in the pathname. So the policy engine generates the list and > your external script does the moving. > In my experience a good starting point would be to scan the list of files from the policy engine and separate the files out into "normal"; that is files using basic ASCII and no special characters and the rest also known as the "wacky pile". Given that you are UK based it is not unreasonable to expect all path and file names to be in English. There might (and if not probably should) be an institutional policy mandating it. Not much use if a researcher saves everything in Greek then gets knocked over by a bus and person picking up the work is Spanish for example. Hopefully the "wacky pile" is small, however expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters to name but a few. Depending on the amount of data in the "wacky" pile you might just want to forget about moving them, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time trying to deal with them in scripts. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=ndS4tGx_CLuYWNl3PoYZUZGMwTDw0IFQAVCovuw2qbc&s=VLuDBejMqsG2ggu2YNluBW2c_g-bpbNluifBXQNHRM4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Sun Dec 29 10:01:16 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Sun, 29 Dec 2019 10:01:16 +0000 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: On 28/12/2019 19:49, Marc A Kaplan wrote: > The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file > lists perfectly. No need to worry about whitespaces and so forth. > Give it a look-see and a try > Indeed, but I get the feeling from the original post that you will need to mung the path/file names to produce a new directory path that the files is to be moved to. At this point the whole issue of "wacky" directory and file names will rear it's ugly head. So for example /gpfs/users/joeblogs/experiment`1234?/results *-12-2019.txt would need moving to something like /gpfs/users/joeblogs/experiment`1234?/old_data/results *-12-2019.txt That is a pit of woe unless you are confident that users are being sensible, or you just forget about wacky named files. In a similar vein, in the past I have for results coming of a piece of experimental equipment ziped up every 30 days. Each run on the equipment and the results go in a different directory/ So for example the directory /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01/ would be zipped up to /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01.zip and the original directory removed. This works well because both widows explorer and finder will allow you to click into the zip files to see the contents. However the script that did this worked in the principle of a very strict naming convention that if was not adhered to would mean the folders where not zipped up. Given the original posters institution, a good guess is that something like this is what is wanting to be achieved. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From makaplan at us.ibm.com Sun Dec 29 14:24:28 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Sun, 29 Dec 2019 09:24:28 -0500 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: Correct, you may need to use similar parsing/quoting techniques in your renaming scripts. 0 Just remember, in Unix/Posix/Linux the only 2 special characters/codes in path names are '/' and \0. The former delimits directories and the latter marks the end of the string. And technically the latter isn't ever in a path name, it's only used by system APIs to mark the end of a string that is the pathname argument. Happy New Year, From: Jonathan Buzzard To: "gpfsug-discuss at spectrumscale.org" Date: 12/29/2019 05:01 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs Sent by: gpfsug-discuss-bounces at spectrumscale.org On 28/12/2019 19:49, Marc A Kaplan wrote: > The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file > lists perfectly. No need to worry about whitespaces and so forth. > Give it a look-see and a try > Indeed, but I get the feeling from the original post that you will need to mung the path/file names to produce a new directory path that the files is to be moved to. 
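Something along the lines of this naive sketch, assuming the policy output has already been decoded to one plain pathname per line (the escaping in the raw policy list files is what the mmxargs helper mentioned earlier is for), and inventing an 'old_data' sub-directory as the destination:

   # move each listed file into an old_data sub-directory alongside it
   while IFS= read -r src; do
       dest="$(dirname "$src")/old_data/$(basename "$src")"
       mkdir -p "$(dirname "$dest")"
       mv -- "$src" "$dest"
   done < filelist.txt

The quoting copes with spaces, but not with everything.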
At this point the whole issue of "wacky" directory and file names will rear it's ugly head. So for example /gpfs/users/joeblogs/experiment`1234?/results *-12-2019.txt would need moving to something like /gpfs/users/joeblogs/experiment`1234?/old_data/results *-12-2019.txt That is a pit of woe unless you are confident that users are being sensible, or you just forget about wacky named files. In a similar vein, in the past I have for results coming of a piece of experimental equipment ziped up every 30 days. Each run on the equipment and the results go in a different directory/ So for example the directory /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01/ would be zipped up to /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01.zip and the original directory removed. This works well because both widows explorer and finder will allow you to click into the zip files to see the contents. However the script that did this worked in the principle of a very strict naming convention that if was not adhered to would mean the folders where not zipped up. Given the original posters institution, a good guess is that something like this is what is wanting to be achieved. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=prco68XIUUkBHwRlOlBP9xNlbXteQlfo6eTljgmJseQ&s=dQ0hsxzBJZzZG2Y2Xkh_u6eNGasZl-wHlffQDLn9kiw&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From makaplan at us.ibm.com Mon Dec 30 16:20:59 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 11:20:59 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9ce3971faea5493daa133b08e4a0113e@deshaw.com> References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: Now apart from the mechanics of handling and manipulating pathnames ... the idea to manage storage by "mv"ing instead of MIGRATEing (GPFS-wise) may be ill-advised. I suspect this is a hold-over or leftover from the old days -- when a filesystem was comprised of just a few storage devices (disk drives) and the only way available to manage space was to mv files to another filesystem or archive to tape or whatnot.. That is not the GPFS-way (Spectrum-Scale-way).... Well at least not for more than a dozen or more years! Modern Spectrum Scale has storage POOLs and also integrates with HSM systems. These separate the concept of name space (pathnames) from storage devices. Read about it, discuss it with your colleagues, clients, managers -- and use it! -- marc of GPFS. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Mon Dec 30 16:29:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 11:29:52 -0500 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: Also see if your distribution includes samples/ilm/mmxcp which, if you are determined to cp or mv from one path to another, shows a way to do that easily in perl, using code similar to the aforementions bin/mmxargs Here is the path changing part... ... $src =~ s/'/'\\''/g; # any ' within the name like x'y become x'\''y then we quote all names passed to commands my @src = split('/',$src); my $sra = join('/', @src[$strip+1..$#src-1]); $newtarg = "'" . $target . '/' . $sra . "'"; ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Mon Dec 30 21:48:00 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 21:48:00 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: On 30/12/2019 16:20, Marc A Kaplan wrote: > Now apart from the mechanics of handling and manipulating pathnames ... > > the idea to manage storage by "mv"ing instead of MIGRATEing (GPFS-wise) > may be ill-advised. > > I suspect this is a hold-over or leftover from the old days -- when a > filesystem was comprised of just a few storage devices (disk drives) and > the only way available to manage space was to mv files to another > filesystem or archive to tape or whatnot.. > I suspect based on the OP is from (a cancer research institute which is basically life sciences) that this is an incorrect assumption. I would guess this is about "archiving" results coming off experimental equipment. I use the term "archiving" in the same way that various email programs try and "archive" my old emails. That is to prevent the output directory of the equipment filling up with many thousands of files and/or directories I want to automate the placement in a directory hierarchy of old results. Imagine a piece of equipment that does 50 different analysis's a day every working day. That's a 1000 a month or ~50,000 a year. It's about logically moving stuff to keep ones working directory manageable but making finding an old analysis easy to find. I would also note that some experimental equipment would do many more than 50 different analysis's a day. It's a common requirement in any sort of research facility, especially when they have central facilities for doing analysis on equipment that would be too expensive for an individual group or where it makes sense to "outsource" repetitive basics analysis to lower paid staff. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Mon Dec 30 22:14:18 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 22:14:18 +0000 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: <3127843a-403f-d360-4b6c-9b410c9ef39d@strath.ac.uk> On 29/12/2019 14:24, Marc A Kaplan wrote: > Correct, you may need to use similar parsing/quoting techniques in your > renaming scripts. 
> 0 > Just remember, in Unix/Posix/Linux the only 2 special characters/codes > in path names are '/' and \0. The former delimits directories and the > latter marks the end of the string. > And technically the latter isn't ever in a path name, it's only used by > system APIs to mark the end of a string that is the pathname argument. >i I am not sure even that is entirely true. Certainly MacOS X in the past would allow '/' in file names. You find this out when a MacOS user tries to migrate their files to a SMB based file server and the process trips up because they have named a whole bunch of files in the format "My Results 30/12/2019.txt" At this juncture I note that MacOS is certified Unix :-) I think it is more a file system limitation than anything else. I wonder what happens when you mount a HFS+ file system with such named files on Linux... I would at this point note that the vast majority of "wacky" file names originate from MacOS (both Classic and X) users. Also while you are otherwise technically correct about what is allowed in a file name just try creating a file name with a newline character in it using either a GUI tool or the command line. You have to be really determined to achieve it. I have also seen \007 in a file name, I mean really. Our training for new HPC users has a section covering file names which includes advising users not to use "wacky" characters in them as we don't guarantee their continued survival. That is if we do something on the file system and they get "lost" as a result it's your own fault. In my view restricting yourself to the following is entirely sensible https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata Also while Unix is generally case sensitive creating files that would clash if accessed case insensitive is really dumb and should be avoided. Again, if it causes you problems in future, it sucks to be you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From makaplan at us.ibm.com Mon Dec 30 23:35:02 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 18:35:02 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu><9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: Yes, that is entirely true, if not then basic Posix calls like open(2) are broken. https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Mon Dec 30 23:40:37 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 18:40:37 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu><9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: As I said :"MAY be ill-advised". If you have a good reason to use "mv" then certainly, use it! But there are plenty of good naming conventions for the scenario you give... Like, start a new directory of results every day, week or month... /fs/experiments/y2019/m12/d30/fileX.ZZZ ... OF course, if you want or need to mv, or cp and/or rm the metadata out of the filesystem, then eventually you do so! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Mon Dec 30 23:55:17 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 23:55:17 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: <09180fd7-8121-02d6-6384-8ef4b9c7decd@strath.ac.uk> On 30/12/2019 23:40, Marc A Kaplan wrote: > As I said :"MAY be ill-advised". > > If you have a good reason to use "mv" then certainly, use it! > > But there are plenty of good naming conventions for the scenario you > give... > Like, start a new directory of results every day, week or month... > > > /fs/experiments/y2019/m12/d30/fileX.ZZZ ... > > OF course, if you want or need to mv, or cp and/or rm the metadata out > of the filesystem, then eventually you do so! > Possibly, but often (in fact sensibly) the results are saved in the first instance to the local machine because any network issue and boom your results are gone as doing the analysis destroys the sample. That in life sciences can easily mean several days and $1000. The results are then uploaded automatically to the file server. That gets a whole bunch more complicated. Honest you simply don't want to go there getting it to be done different. It would be less painful to have a tooth extracted without anesthetic. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Tue Dec 31 00:00:06 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 31 Dec 2019 00:00:06 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: On 30/12/2019 23:35, Marc A Kaplan wrote: > Yes, that is entirely true, if not then basic Posix calls like open(2) > are broken. > > _https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename_ > > That's for Linux and possibly Posix. Like I said on the certified *Unix* that is macOS it's perfectly fine. I have bumped into it more times that I care to recall. Try moving a MacOS AFP server to a different OS and then get back to me... JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From alvise.dorigo at psi.ch Tue Dec 3 14:35:22 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Tue, 3 Dec 2019 14:35:22 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Message-ID: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo -------------- next part -------------- An HTML attachment was scrubbed... 
From makaplan at us.ibm.com  Tue Dec 3 19:14:52 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Tue, 3 Dec 2019 14:14:52 -0500
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk>
References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk>
Message-ID: 

IF you have everything properly licensed and then you reconfigure... It
may work okay... But then you may come up short if you ask for IBM
support or service... So depending how much support you need or
desire... Or take the easier and supported path... And probably
accomplish most of what you need -- let each cluster be and remote mount
onto clients which could be on any connected cluster.
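
(Since remote mounting keeps coming up as the supported alternative to
merging clusters, here is a rough sketch of the usual multi-cluster
setup. All cluster names, node names, the essfs device and the key file
paths below are invented placeholders; the exact prerequisites for the
mmauth steps vary by release, so check the multi-cluster chapter of the
administration guide before trying this.)

    ## on the cluster that owns the filesystem (e.g. the ESS/GL2 cluster)
    mmauth genkey new
    mmauth update . -l AUTHONLY     # note: the admin guide sequence typically stops GPFS around this step
    # exchange /var/mmfs/ssl/id_rsa.pub with the other cluster out of band
    mmauth add clientcluster.example -k /tmp/clientcluster_id_rsa.pub
    mmauth grant clientcluster.example -f essfs

    ## on the cluster that wants to mount it (e.g. the NetApp-based cluster)
    mmauth genkey new
    mmauth update . -l AUTHONLY
    mmremotecluster add esscluster.example -n ems1,gssio1,gssio2 -k /tmp/esscluster_id_rsa.pub
    mmremotefs add essfs -f essfs -C esscluster.example -T /gpfs/essfs
    mmmount essfs -a

With that in place the existing cluster's nodes can use the ESS capacity
without the two clusters being merged, which is essentially the "easier
and supported path" described above.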
From lgayne at us.ibm.com  Tue Dec 3 19:20:55 2019
From: lgayne at us.ibm.com (Lyle Gayne)
Date: Tue, 3 Dec 2019 19:20:55 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: 
References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch><02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk>
Message-ID: 

An HTML attachment was scrubbed...

From lgayne at us.ibm.com  Tue Dec 3 19:30:31 2019
From: lgayne at us.ibm.com (Lyle Gayne)
Date: Tue, 3 Dec 2019 19:30:31 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: 
References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch>
Message-ID: 

An HTML attachment was scrubbed...

From alvise.dorigo at psi.ch  Wed Dec 4 09:29:32 2019
From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI))
Date: Wed, 4 Dec 2019 09:29:32 +0000
Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
In-Reply-To: 
References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch>,
Message-ID: <62721c5c4c3640848e1513d03965fefe@psi.ch>

Thank you all for the answers. I will try to recap my replies to your
questions:

1. The purpose is not to merge clusters "per se"; it is adding the GL2's
700TB of raw space to the current filesystem provided by the GPFS/NetApp
(which is running out of free space); of course I know well the
heterogeneity of this hypothetical system, so the GL2's NSDs would go to
a special pool; but in the end I need a unique namespace for files.

2. I do not want to do the opposite (merging GPFS/NetApp into the GL2
cluster) because the former is in production and I cannot schedule long
downtimes.

3. All systems have proper licensing, of course; what does it mean that
I could lose IBM support? If the support is for a failing disk drive, I
do not think so; if the support is for a "strange" behaviour of GPFS, I
can probably understand.

4.
NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. 
if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Dec 4 11:21:54 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 4 Dec 2019 12:21:54 +0100 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <62721c5c4c3640848e1513d03965fefe@psi.ch> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> <62721c5c4c3640848e1513d03965fefe@psi.ch> Message-ID: Adding the GL2 into your existing cluster shouldn?t be any problem. You would just delete the existing cluster on the GL2, then on the EMS run something like: gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license and then afterwards create the RGs: gssgenclusterrgs -G gss_ppc64 --suffix=-hs Then create the vdisks/nsds and add to your existing filesystem. Beware that last time I did this, gssgenclusterrgs triggered a "mmshutdown -a" on the whole cluster, because it wanted to change some config settings... Caught me a bit by surprise.. -jf ons. 4. des. 2019 kl. 10:44 skrev Dorigo Alvise (PSI) : > Thank you all for the answer. I try to recap my answers to your questions: > > > > 1. the purpose is not to merge clusters "per se"; it is adding the > GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp > (which is running out of free space); of course I know well the > heterogeneity of this hypothetical system, so the GL2's NSD would go to a > special pool; but in the end I need a unique namespace for files. > 2. 
I do not want to do the opposite (mergin GPFS/NetApp into the GL2 > cluster) because the former is in production and I cannot schedule long > downtimes > 3. All system have proper licensing of course; what does it means that > I could loose IBM support ? if the support is for a failing disk drive I do > not think so; if the support is for a "strange" behaviour of GPFS I can > probably understand > 4. NSD (in the NetApp system) are in their roles: what do you mean > exactly ? there's RAIDset attached to servers that are actually NSD > together with their attached LUN > > > Alvise > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne < > lgayne at us.ibm.com> > *Sent:* Tuesday, December 3, 2019 8:30:31 PM > *To:* gpfsug-discuss at spectrumscale.org > *Cc:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp *< --- Are these > NSD servers in their GPFS roles (where Scale "runs on top"*? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest > cluster, rather than simply allowing remote mount of the ESS servers by the > other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our > coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no > issues. Having them as separate file systems (NetApp vs. ESS) means no > concerns regarding varying architectures within the same fs serving or > failover scheme. Mixing such as compute nodes would mean some performance > differences across the different clients, but you haven't described your > compute (NSD client) details. > > Lyle > > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR > configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based > file system, then all affected IBM Spectrum Scale RAID objects will be > exported as well. This includes recovery groups, declustered arrays, > vdisks, and any other file systems that are based on these objects. For > more information about IBM Spectrum Scale RAID, see *IBM Spectrum > Scale RAID: Administration*. " > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since > I would assume that the cluster config version is to high for the NetApp > based cluster. 
> I would also suspect that the filesystem version on the ESS will be > different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to > a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. > .. add the nodes to your existing cluster.. and then start configuring the > RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" < > gpfsug-discuss at spectrumscale.org> > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anobre at br.ibm.com Wed Dec 4 14:07:18 2019 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Wed, 4 Dec 2019 14:07:18 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <62721c5c4c3640848e1513d03965fefe@psi.ch> Message-ID: An HTML attachment was scrubbed... 
URL: From alvise.dorigo at psi.ch Thu Dec 5 09:15:13 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 5 Dec 2019 09:15:13 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <62721c5c4c3640848e1513d03965fefe@psi.ch>, Message-ID: Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, >From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, Anderson Nobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone: 55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. 
All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. 
Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Thu Dec 5 10:24:08 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Thu, 5 Dec 2019 10:24:08 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: Message-ID: One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: > > ? 
> Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. > > > > A > > From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre > Sent: Wednesday, December 4, 2019 3:07:18 PM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > Hi Dorigo, > > From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. > > Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata > > Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. > > One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: > https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning > > Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. > > Abra?os / Regards / Saludos, > > > Anderson Nobre > Power and Storage Consultant > IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services > > > > Phone: 55-19-2132-4317 > E-mail: anobre at br.ibm.com > > > ----- Original message ----- > From: "Dorigo Alvise (PSI)" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: "gpfsug-discuss at spectrumscale.org" > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Date: Wed, Dec 4, 2019 06:44 > > Thank you all for the answer. I try to recap my answers to your questions: > > > > the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. > I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes > All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand > NSD (in the NetApp system) are in their roles: what do you mean exactly ? 
there's RAIDset attached to servers that are actually NSD together with their attached LUN > > Alvise > From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne > Sent: Tuesday, December 3, 2019 8:30:31 PM > To: gpfsug-discuss at spectrumscale.org > Cc: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. > > Lyle > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. > I would also suspect that the filesystem version on the ESS will be different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. 
> > if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From alvise.dorigo at psi.ch Thu Dec 5 14:50:01 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 5 Dec 2019 14:50:01 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , Message-ID: <15d9b14554534be7a7adca204ca3febd@psi.ch> This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... 
A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [https://images.youracclaim.com/images/c49300ae-d13e-4071-90f5-15f59d199c9e/IBM%2BVolunteers%2BGold%2Bv6.png] [https://images.youracclaim.com/images/f2539224-f951-46b4-b376-b88f21c2be98/IBM-Selling-Certification---Level-1.png] [https://images.youracclaim.com/images/ea52b12f-97ac-4e72-8d24-b0ced8054e7d/Storage%2BTechnical%2BV1.png] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: ? Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? 
IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. 
mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From cblack at nygenome.org Thu Dec 5 15:17:49 2019 From: cblack at nygenome.org (Christopher Black) Date: Thu, 5 Dec 2019 15:17:49 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> If you have two clusters that are hard to merge, but you are facing the need to provide capacity for more writes, another option to consider would be to set up a filesystem on GL2 with an AFM relationship to the filesystem on the netapp gpfs cluster for accessing older data and point clients to the new GL2 filesystem. Some downsides to that approach include introducing a dependency on afm (and potential performance reduction) to get to older data. There may also be complications depending on how your filesets are laid out. At some point when you have more capacity in 5.x cluster and/or are ready to move off netapp, you could use afm to prefetch all data into new filesystem. In theory, you could then (re)build nsd servers connected to netapp on 5.x and add them to new cluster and use them for a separate pool or keep them as a separate 5.x cluster. Best, Chris From: on behalf of "Dorigo Alvise (PSI)" Reply-To: gpfsug main discussion list Date: Thursday, December 5, 2019 at 9:50 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [https://images.youracclaim.com/images/c49300ae-d13e-4071-90f5-15f59d199c9e/IBM%2BVolunteers%2BGold%2Bv6.png] [https://images.youracclaim.com/images/f2539224-f951-46b4-b376-b88f21c2be98/IBM-Selling-Certification---Level-1.png] [https://images.youracclaim.com/images/ea52b12f-97ac-4e72-8d24-b0ced8054e7d/Storage%2BTechnical%2BV1.png] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: Thank Anderson for the material. 
In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? 
there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... 
- you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU From janfrode at tanso.net Thu Dec 5 15:59:07 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 5 Dec 2019 15:59:07 +0100 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: There is still a maintained ESS v5.2 release stream with GPFS v4.2.3.x for customers that are stuck on v4. You should probably install that on your ESS if you want to add it to your existing cluster. BTW: I think Tomer misunderstood the task a bit. It sounded like you needed to keep the existing recoverygroups from the ESS in the merge. That would probably be complicated. Adding an empty ESS to an existing cluster should not be complicated - it's just not properly documented anywhere I'm aware of. -jf tor. 5. des. 2019 kl.
15:50 skrev Dorigo Alvise (PSI) : > This is a quite critical storage for data taking. It is not easy to update > to GPFS5 because in that facility we have very short shutdown period. Thank > you for pointing out that 4.2.3. But the entire storage will be replaced in > the future; at the moment we just need to expand it to survive for a while. > > > This merge seems quite tricky to implement and I haven't seen consistent > opinions among the people that kindly answered. According to Jan Frode, > Kaplan and T. Perry it should be possible, in principle, to do the merge... > Other people suggest a remote mount, which is not a solution for my use > case. Other suggest not to do that... > > > A > > > > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Daniel Kidger < > daniel.kidger at uk.ibm.com> > *Sent:* Thursday, December 5, 2019 11:24:08 AM > > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > One additional question to ask is : what are your long term plans for the > 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x > (hopefully before 4.2.3 goes out of support)? > > Also I assume your Netapp hardware is the standard Netapp block storage, > perhaps based on their standard 4U60 shelves daisy-chained together? > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum Discover and IBM Cloud Object Store > > + <+44-7818%20522%20266>44-(0)7818 522 266 <+44-7818%20522%20266> > daniel.kidger at uk.ibm.com > > > > > > > > On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: > > ? > > Thank Anderson for the material. In principle our idea was to scratch the > filesystem in the GL2, put its NSD on a dedicated pool and then merge it > into the Filesystem which would remain on V4. I do not want to create a FS > in the GL2 but use its space to expand the space of the other cluster. > > > A > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Anderson Ferreira > Nobre > *Sent:* Wednesday, December 4, 2019 3:07:18 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > Hi Dorigo, > > From point of view of cluster administration I don't think it's a good > idea to have hererogeneous cluster. There are too many diferences between > V4 and V5. And much probably many of enhancements of V5 you won't take > advantage. One example is the new filesystem layout in V5. And at this > moment the way to migrate the filesystem is create a new filesystem in V5 > with the new layout and migrate the data. That is inevitable. I have seen > clients saying that they don't need all that enhancements, but the true is > when you face performance issue that is only addressable with the new > features someone will raise the question why we didn't consider that in the > beginning. > > Use this time to review if it would be better to change the block size of > your fileystem. There's a script called filehist > in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your > current filesystem. 
Here's the link with additional information: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata > > Different RAID configurations also brings unexpected performance > behaviors. Unless you are planning create different pools and use ILM to > manage the files in different pools. > > One last thing, it's a good idea to follow the recommended levels for > Spectrum Scale: > > https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning > > Anyway, you are the system administrator, you know better than anyone how > complex is to manage this cluster. > > Abra?os / Regards / Saludos, > > > *AndersonNobre* > Power and Storage Consultant > IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services > > [image: community_general_lab_services] > > ------------------------------ > Phone:55-19-2132-4317 > E-mail: anobre at br.ibm.com [image: IBM] > > > > ----- Original message ----- > From: "Dorigo Alvise (PSI)" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: "gpfsug-discuss at spectrumscale.org" > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Wed, Dec 4, 2019 06:44 > > > Thank you all for the answer. I try to recap my answers to your questions: > > > > 1. the purpose is not to merge clusters "per se"; it is adding the > GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp > (which is running out of free space); of course I know well the > heterogeneity of this hypothetical system, so the GL2's NSD would go to a > special pool; but in the end I need a unique namespace for files. > 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 > cluster) because the former is in production and I cannot schedule long > downtimes > 3. All system have proper licensing of course; what does it means that > I could loose IBM support ? if the support is for a failing disk drive I do > not think so; if the support is for a "strange" behaviour of GPFS I can > probably understand > 4. NSD (in the NetApp system) are in their roles: what do you mean > exactly ? there's RAIDset attached to servers that are actually NSD > together with their attached LUN > > > Alvise > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne < > lgayne at us.ibm.com> > *Sent:* Tuesday, December 3, 2019 8:30:31 PM > *To:* gpfsug-discuss at spectrumscale.org > *Cc:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp *< --- Are these > NSD servers in their GPFS roles (where Scale "runs on top"*? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > ...... 
> Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest > cluster, rather than simply allowing remote mount of the ESS servers by the > other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our > coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no > issues. Having them as separate file systems (NetApp vs. ESS) means no > concerns regarding varying architectures within the same fs serving or > failover scheme. Mixing such as compute nodes would mean some performance > differences across the different clients, but you haven't described your > compute (NSD client) details. > > Lyle > > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR > configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based > file system, then all affected IBM Spectrum Scale RAID objects will be > exported as well. This includes recovery groups, declustered arrays, > vdisks, and any other file systems that are based on these objects. For > more information about IBM Spectrum Scale RAID, see *IBM Spectrum > Scale RAID: Administration*." > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since > I would assume that the cluster config version is to high for the NetApp > based cluster. > I would also suspect that the filesystem version on the ESS will be > different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to > a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. > .. add the nodes to your existing cluster.. and then start configuring the > RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" < > gpfsug-discuss at spectrumscale.org> > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. 
> > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Thu Dec 5 15:58:39 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Thu, 5 Dec 2019 10:58:39 -0500 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> Message-ID: One tricky bit in this case is that ESS is always recommended to be its own standalone cluster, so MERGING it as a storage pool or pools into a cluster already containing NetApp storage wouldn't be generally recommended. Yet you cannot achieve the stated goal of a single fs image/mount point containing both types of storage that way. Perhaps our ESS folk should weigh in regarding possible routs? Lyle From: Christopher Black To: gpfsug main discussion list Date: 12/05/2019 10:53 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org If you have two clusters that are hard to merge, but you are facing the need to provide capacity for more writes, another option to consider would be to set up a filesystem on GL2 with an AFM relationship to the filesystem on the netapp gpfs cluster for accessing older data and point clients to the new GL2 filesystem. Some downsides to that approach include introducing a dependency on afm (and potential performance reduction) to get to older data. There may also be complications depending on how your filesets are laid out. At some point when you have more capacity in 5.x cluster and/or are ready to move off netapp, you could use afm to prefetch all data into new filesystem. In theory, you could then (re)build nsd servers connected to netapp on 5.x and add them to new cluster and use them for a separate pool or keep them as a separate 5.x cluster. 
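A minimal sketch of the AFM approach described above may help make it concrete. All names here (filesystem gl2fs, fileset olddata, gateway nodes gw1/gw2, paths, and the prefetch list) are invented for the example, and the right afmTarget form depends on whether the old NetApp-based filesystem is reached through a GPFS remote mount or through NFS:

   # on the new (GL2) cluster: designate AFM gateway nodes
   mmchnode --gateway -N gw1,gw2
   # create a read-only AFM cache fileset whose home is the old filesystem
   mmcrfileset gl2fs olddata -p afmTarget=gpfs:///gpfs/netappfs/olddata -p afmMode=ro --inode-space=new
   mmlinkfileset gl2fs olddata -J /gpfs/gl2fs/olddata
   # optionally pre-populate the cache from a list of paths
   mmafmctl gl2fs prefetch -j olddata --list-file /tmp/prefetch.list

This is only an outline of the idea sketched in the message above; AFM modes and options should be checked against the documentation for the Scale level actually installed.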
Best, Chris From: on behalf of "Dorigo Alvise (PSI)" Reply-To: gpfsug main discussion list Date: Thursday, December 5, 2019 at 9:50 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. 
One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. 
Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? 
Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=96nejPA0lJgbr9YP3LlaHsFUacfAy3QObHRl5SSeu6o&s=E1HEKXJOzKNDJan1TBYUlV1ckkhUjDiqUXT-x-p-QbI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From stockf at us.ibm.com Thu Dec 5 20:13:28 2019 From: stockf at us.ibm.com (Frederick Stock) Date: Thu, 5 Dec 2019 20:13:28 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch>, , Message-ID: An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Fri Dec 6 14:37:02 2019 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Fri, 6 Dec 2019 14:37:02 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Message-ID: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com -------------- next part -------------- A non-text attachment was scrubbed... 
From lists at esquad.de Sun Dec 8 17:22:43 2019 From: lists at esquad.de (Dieter Mosbach) Date: Sun, 8 Dec 2019 18:22:43 +0100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On 06.12.2019 at 15:37, Carl Zetie - carlz at us.ibm.com wrote: > > Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage > Clicking on "Try free developer edition" leads to a download of "Spectrum Scale 4.2.2 GUI Open Beta zip file" from 2015-08-22 ... Kind regards Dieter From alvise.dorigo at psi.ch Mon Dec 9 10:03:58 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Mon, 9 Dec 2019 10:03:58 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: <2bad2631ebf44042b4004fb5c51eb7d0@psi.ch> I thank you all so much for the participation on this topic. We realized that what we wanted to do is not only "exotic", but also not officially supported, and as far as I understand no one has done something like that in production. We do not want to be the first with production systems. We decided that the least disruptive thing to do is remotely mount the GL2's filesystem into the NetApp/GPFS cluster and for a limited amount of time (less than 1 year) we are going to survive with different filesystem namespaces, managing users and groups with some symlink system or other high level solutions. Thank you very much, Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Frederick Stock Sent: Thursday, December 5, 2019 9:13:28 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster If you plan to replace all the storage then why did you choose to integrate an ESS GL2 rather than use another storage option? Perhaps you had already purchased the ESS system? Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Thu, Dec 5, 2019 2:57 PM This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that...
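Since the remote (multi-cluster) mount is what was eventually chosen, a rough sketch of the usual setup may be worth recording here. Cluster names, contact nodes, device names and mount points below are placeholders; the authoritative procedure is the remote mounting chapter of the Administration Guide:

   # on both clusters, once: generate and enable authentication keys
   mmauth genkey new
   mmauth update . -l AUTHONLY
   # on the owning (GL2/ESS) cluster: authorize the other cluster and grant access to the filesystem
   mmauth add netappcluster -k /tmp/netappcluster_id_rsa.pub
   mmauth grant netappcluster -f gl2fs
   # on the accessing (NetApp-based) cluster: define the remote cluster and filesystem, then mount
   mmremotecluster add gl2cluster -n ems1,gssio1,gssio2 -k /tmp/gl2cluster_id_rsa.pub
   mmremotefs add rgl2fs -f gl2fs -C gl2cluster -T /gpfs/rgl2fs
   mmmount rgl2fs -a

The key exchange itself (copying each cluster's /var/mmfs/ssl/id_rsa.pub to the other side) and any cipherList tuning are left out for brevity.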
A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [X] [X] [X] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: ? Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? 
IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. 
mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Mon Dec 9 10:30:05 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Mon, 9 Dec 2019 10:30:05 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: , <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: An HTML attachment was scrubbed... URL: From nnasef at us.ibm.com Mon Dec 9 18:35:52 2019 From: nnasef at us.ibm.com (Nariman Nasef) Date: Mon, 9 Dec 2019 18:35:52 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-productionuse now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.156777917997825.png Type: image/png Size: 15543 bytes Desc: not available URL: From Greg.Lehmann at csiro.au Tue Dec 10 02:09:31 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Tue, 10 Dec 2019 02:09:31 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Hi Carl, I am wondering if it is acceptable to use this as a test cluster. The main intentions being to try fixes, configuration changes etc. on the test cluster before applying those to the production cluster. I guess the issue with this release, is that it is the latest version. We really need a version that matches production and be able to apply fixpacks, PTFs etc. to it without breaching the license of the developer edition. Cheers, Greg Lehmann -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Carl Zetie - carlz at us.ibm.com Sent: Saturday, December 7, 2019 12:37 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com From jack at flametech.com.au Tue Dec 10 02:35:06 2019 From: jack at flametech.com.au (Jack Horrocks) Date: Tue, 10 Dec 2019 13:35:06 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Hi Carl, To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. 
Thanks Jack. On Tue, 10 Dec 2019 at 13:16, Lehmann, Greg (IM&T, Pullenvale) wrote: > Hi Carl, > I am wondering if it is acceptable to use this as a test cluster. > The main intentions being to try fixes, configuration changes etc. on the > test cluster before applying those to the production cluster. I guess the > issue with this release, is that it is the latest version. We really need a > version that matches production and be able to apply fixpacks, PTFs etc. to > it without breaching the license of the developer edition. > > Cheers, > > Greg Lehmann > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Carl Zetie - > carlz at us.ibm.com > Sent: Saturday, December 7, 2019 12:37 AM > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Scale Developer Edition free for non-production > use now available > > > Spectrum Scale Developer Edition is now available for free download on the > IBM Marketplace: > https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage > > This is full-function DME, no time restrictions, limited to 12TB per > cluster. NO production use or support! > > It?s likely that some people entirely new to Scale will find their way > here to the user group Slack channel and mailing list, so I thank you in > advance for making them welcome, and letting them know about the wealth of > online information for Scale, including the email address scale at us.ibm.com > > > Carl Zetie > Program Director > Offering Management > Spectrum Scale & Spectrum Discover > ---- > (919) 473 3318 ][ Research Triangle Park > carlz at us.ibm.com > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nigel.williams at tpac.org.au Tue Dec 10 03:07:31 2019 From: nigel.williams at tpac.org.au (Nigel Williams) Date: Tue, 10 Dec 2019 14:07:31 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On Tue, 10 Dec 2019 at 13:35, Jack Horrocks wrote: > To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. I clicked the option "I already have an IBMid", but using known working credentials [1] I get "Incorrect IBMid or password. Please try again!" [1] credentials work with support.ibm.com and IBM Cloud From Greg.Lehmann at csiro.au Tue Dec 10 03:11:30 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Tue, 10 Dec 2019 03:11:30 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: I am in Australia and downloaded it OK. Greg Lehmann Senior High Performance Data Specialist | CSIRO Greg.Lehmann at csiro.au | +61 7 3327 4137 | From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jack Horrocks Sent: Tuesday, December 10, 2019 12:35 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Hi Carl, To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. Thanks Jack. 
On Tue, 10 Dec 2019 at 13:16, Lehmann, Greg (IM&T, Pullenvale) > wrote: Hi Carl, I am wondering if it is acceptable to use this as a test cluster. The main intentions being to try fixes, configuration changes etc. on the test cluster before applying those to the production cluster. I guess the issue with this release, is that it is the latest version. We really need a version that matches production and be able to apply fixpacks, PTFs etc. to it without breaching the license of the developer edition. Cheers, Greg Lehmann -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Carl Zetie - carlz at us.ibm.com Sent: Saturday, December 7, 2019 12:37 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From nigel.williams at tpac.org.au Tue Dec 10 03:29:04 2019 From: nigel.williams at tpac.org.au (Nigel Williams) Date: Tue, 10 Dec 2019 14:29:04 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On Tue, 10 Dec 2019 at 14:19, Lehmann, Greg (IM&T, Pullenvale) wrote: > I am in Australia and downloaded it OK. I found a workaround which was to logon to an IBM service that worked with my credentials, and then switch back to the developer edition download and that allowed me to click through and start the download. From jmanuel.fuentes at upf.edu Tue Dec 10 09:45:19 2019 From: jmanuel.fuentes at upf.edu (FUENTES DIAZ, JUAN MANUEL) Date: Tue, 10 Dec 2019 10:45:19 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full Message-ID: Hi, Recently our group have migrated the Spectrum Scale from 4.2.3.9 to 5.0.3.0. According to the documentation to finish and consolidate the migration we should also update the config and the filesystems to the latest version with the commands above. Our cluster is a single cluster and all the nodes have the same version. My question is if we can update safely with those commands without compromising the data and metadata. Thanks Juanma -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergi.more at bsc.es Tue Dec 10 10:04:31 2019 From: sergi.more at bsc.es (Sergi More) Date: Tue, 10 Dec 2019 11:04:31 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: <48fb738b-203a-14cb-ef12-3a94f0cad199@bsc.es> Hi Juanma, Yes, it is safe. We have done it several times. 
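For reference, the finishing steps come down to two commands. A minimal sketch, assuming a file system device called fs1 (a placeholder, substitute your own device name, and check the reported levels first):

# show the current cluster configuration level and the file system format version
mmlsconfig minReleaseLevel
mmlsfs fs1 -V

# raise the cluster configuration level, then the file system format version
mmchconfig release=LATEST
mmchfs fs1 -V full
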
AFAIK it doesn't actually change current data and metadata. Just states that filesystem is using latest version, so new features can be enabled. It is something to take into consideration specially when using multicluster, or mixing different gpfs versions, as these could potentially prevent older nodes to be able to mount the filesystems, but this doesn't seem to be your case. Best regards, Sergi. On 10/12/2019 10:45, FUENTES DIAZ, JUAN MANUEL wrote: > Hi, > > Recently our group have migrated the Spectrum Scale from 4.2.3.9 to > 5.0.3.0. According to the documentation to finish and consolidate the > migration we should also update the config and the filesystems to the > latest version with the commands above. Our cluster is a single > cluster and all the nodes have the same version. My question is if we > can update safely?with those commands without?compromising the data > and metadata. > > Thanks Juanma > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- ------------------------------------------------------------------------ Sergi More Codina Operations - System administration Barcelona Supercomputing Center Centro Nacional de Supercomputacion WWW: http://www.bsc.es Tel: +34-93-405 42 27 e-mail: sergi.more at bsc.es Fax: +34-93-413 77 21 ------------------------------------------------------------------------ WARNING / LEGAL TEXT: This message is intended only for the use of the individual or entity to which it is addressed and may contain information which is privileged, confidential, proprietary, or exempt from disclosure under applicable law. If you are not the intended recipient or the person responsible for delivering the message to the intended recipient, you are strictly prohibited from disclosing, distributing, copying, or in any way using this message. If you have received this communication in error, please notify the sender and destroy and delete any copies you may have received. http://www.bsc.es/disclaimer -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3617 bytes Desc: S/MIME Cryptographic Signature URL: From Renar.Grunenberg at huk-coburg.de Tue Dec 10 12:21:37 2019 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Tue, 10 Dec 2019 12:21:37 +0000 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: <9b774f33494d42ae989e3ad61d359d8c@huk-coburg.de> Hallo Juanma, ist save, the only change are only happen if you change the filesystem version with mmcfs device ?V full. As a tip you schould update to 5.0.3.3 ist a very stable Level for us. Regards Renar Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder, Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von FUENTES DIAZ, JUAN MANUEL Gesendet: Dienstag, 10. Dezember 2019 10:45 An: gpfsug-discuss at spectrumscale.org Betreff: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full Hi, Recently our group have migrated the Spectrum Scale from 4.2.3.9 to 5.0.3.0. According to the documentation to finish and consolidate the migration we should also update the config and the filesystems to the latest version with the commands above. Our cluster is a single cluster and all the nodes have the same version. My question is if we can update safely with those commands without compromising the data and metadata. Thanks Juanma -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Tue Dec 10 14:48:35 2019 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Tue, 10 Dec 2019 14:48:35 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Message-ID: <5582929B-4515-4FFE-87BA-7CC4B5E71920@us.ibm.com> In response to various questions? Yes, the wrong file was originally linked. It should be fixed now. Yes, you can definitely use this edition in your test labs. We want to make it as easy as possible for you to experiment with new features, config changes, and releases so that you can adopt them with confidence, and discover problems in the lab not production. No, we do not plan at this time to backport Developer Edition to earlier Scale releases. If you are having problems with access to the download, please use the Contact links on the Marketplace page, including this one for IBMid issues: https://www.ibm.com/ibmid/myibm/help/us/helpdesk.html. The Scale dev and offering management team don?t have any control over the website or download process (other than providing the file itself for download) or the authentication process, and we?re just going to contact the same people via the same links? Regards Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1522411740] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69557 bytes Desc: image001.png URL: From jmanuel.fuentes at upf.edu Wed Dec 11 08:23:34 2019 From: jmanuel.fuentes at upf.edu (FUENTES DIAZ, JUAN MANUEL) Date: Wed, 11 Dec 2019 09:23:34 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: Hi, Thanks Sergi and Renar for the clear explanation. Juanma El mar., 10 dic. 
2019 15:50, escribi?: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: mmchconfig release=LATEST mmchfs FileSystem -V full > (Grunenberg, Renar) > 2. Re: Scale Developer Edition free for non-production use now > available (Carl Zetie - carlz at us.ibm.com) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 10 Dec 2019 12:21:37 +0000 > From: "Grunenberg, Renar" > To: "gpfsug-discuss at spectrumscale.org" > > Subject: Re: [gpfsug-discuss] mmchconfig release=LATEST mmchfs > FileSystem -V full > Message-ID: <9b774f33494d42ae989e3ad61d359d8c at huk-coburg.de> > Content-Type: text/plain; charset="utf-8" > > Hallo Juanma, > ist save, the only change are only happen if you change the filesystem > version with mmcfs device ?V full. > As a tip you schould update to 5.0.3.3 ist a very stable Level for us. > Regards Renar > > > Renar Grunenberg > Abteilung Informatik - Betrieb > > HUK-COBURG > Bahnhofsplatz > 96444 Coburg > Telefon: 09561 96-44110 > Telefax: 09561 96-44104 > E-Mail: Renar.Grunenberg at huk-coburg.de > Internet: www.huk.de > ________________________________ > HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter > Deutschlands a. G. in Coburg > Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 > Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg > Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. > Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav > Her?y, Dr. J?rg Rheinl?nder, Sarah R?ssler, Daniel Thomas. > ________________________________ > Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte > Informationen. > Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich > erhalten haben, > informieren Sie bitte sofort den Absender und vernichten Sie diese > Nachricht. > Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht > ist nicht gestattet. > > This information may contain confidential and/or privileged information. > If you are not the intended recipient (or have received this information > in error) please notify the > sender immediately and destroy this information. > Any unauthorized copying, disclosure or distribution of the material in > this information is strictly forbidden. > ________________________________ > Von: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> Im Auftrag von FUENTES DIAZ, > JUAN MANUEL > Gesendet: Dienstag, 10. Dezember 2019 10:45 > An: gpfsug-discuss at spectrumscale.org > Betreff: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V > full > > Hi, > > Recently our group have migrated the Spectrum Scale from 4.2.3.9 to > 5.0.3.0. According to the documentation to finish and consolidate the > migration we should also update the config and the filesystems to the > latest version with the commands above. Our cluster is a single cluster and > all the nodes have the same version. 
My question is if we can update safely > with those commands without compromising the data and metadata. > > Thanks Juanma > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/5a763fea/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Tue, 10 Dec 2019 14:48:35 +0000 > From: "Carl Zetie - carlz at us.ibm.com" > To: "gpfsug-discuss at spectrumscale.org" > > Subject: Re: [gpfsug-discuss] Scale Developer Edition free for > non-production use now available > Message-ID: <5582929B-4515-4FFE-87BA-7CC4B5E71920 at us.ibm.com> > Content-Type: text/plain; charset="utf-8" > > In response to various questions? > > > Yes, the wrong file was originally linked. It should be fixed now. > > Yes, you can definitely use this edition in your test labs. We want to > make it as easy as possible for you to experiment with new features, config > changes, and releases so that you can adopt them with confidence, and > discover problems in the lab not production. > > No, we do not plan at this time to backport Developer Edition to earlier > Scale releases. > > If you are having problems with access to the download, please use the > Contact links on the Marketplace page, including this one for IBMid issues: > https://www.ibm.com/ibmid/myibm/help/us/helpdesk.html. The Scale dev and > offering management team don?t have any control over the website or > download process (other than providing the file itself for download) or the > authentication process, and we?re just going to contact the same people via > the same links? > > > Regards > > > > > > Carl Zetie > Program Director > Offering Management > Spectrum Scale & Spectrum Discover > ---- > (919) 473 3318 ][ Research Triangle Park > carlz at us.ibm.com > > [signature_1522411740] > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/b732e2e2/attachment.html > > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: image001.png > Type: image/png > Size: 69557 bytes > Desc: image001.png > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/b732e2e2/attachment.png > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 95, Issue 17 > ********************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From heinrich.billich at id.ethz.ch Thu Dec 12 14:26:31 2019 From: heinrich.billich at id.ethz.ch (Billich Heinrich Rainer (ID SD)) Date: Thu, 12 Dec 2019 14:26:31 +0000 Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? Message-ID: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> Hello, I remember that a GNR/ESS recovery group can hold up to 64 vdisks, but I can?t find a citation to proof it. Now I wonder if 64 is the actual limit? And where is it documented? And did the limit change with versions? Thank you. I did spend quite some time searching the documentation, no luck .. maybe you know. We run ESS 5.3.4.1 and the recovery groups have current/allowable format version 5.0.0.0 Thank you, Heiner --? 
======================= Heinrich Billich ETH Z?rich Informatikdienste Tel.: +41 44 632 72 56 heinrich.billich at id.ethz.ch ======================== From stefan.dietrich at desy.de Fri Dec 13 07:19:42 2019 From: stefan.dietrich at desy.de (Dietrich, Stefan) Date: Fri, 13 Dec 2019 08:19:42 +0100 (CET) Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? In-Reply-To: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> References: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> Message-ID: <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Hello Heiner, the 64 vdisk limit per RG is still present in the latest ESS docs: https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.5/com.ibm.spectrum.scale.raid.v5r04.adm.doc/bl1adv_vdisks.htm For the other questions, no idea. Regards, Stefan ----- Original Message ----- > From: "Billich Heinrich Rainer (ID SD)" > To: "gpfsug main discussion list" > Sent: Thursday, December 12, 2019 3:26:31 PM > Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? > Hello, > > I remember that a GNR/ESS recovery group can hold up to 64 vdisks, but I can?t > find a citation to proof it. Now I wonder if 64 is the actual limit? And where > is it documented? And did the limit change with versions? Thank you. I did > spend quite some time searching the documentation, no luck .. maybe you know. > > We run ESS 5.3.4.1 and the recovery groups have current/allowable format > version 5.0.0.0 > > Thank you, > > Heiner > -- > ======================= > Heinrich Billich > ETH Z?rich > Informatikdienste > Tel.: +41 44 632 72 56 > heinrich.billich at id.ethz.ch > ======================== > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From olaf.weiser at de.ibm.com Fri Dec 13 12:20:15 2019 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Fri, 13 Dec 2019 07:20:15 -0500 Subject: [gpfsug-discuss] =?utf-8?q?Max_number_of_vdisks_in_a_recovery_gro?= =?utf-8?q?up_-_is_it=0964=3F?= In-Reply-To: <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> References: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Message-ID: An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Fri Dec 13 23:56:44 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Fri, 13 Dec 2019 23:56:44 +0000 Subject: [gpfsug-discuss] =?utf-8?q?Max_number_of_vdisks_in_a_recovery_gro?= =?utf-8?q?up_-_is_it=0964=3F?= In-Reply-To: References: , <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch><68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Message-ID: An HTML attachment was scrubbed... URL: From kkr at lbl.gov Mon Dec 16 19:05:02 2019 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Mon, 16 Dec 2019 11:05:02 -0800 Subject: [gpfsug-discuss] Planning US meeting for Spring 2020 Message-ID: <42F45E03-0AEC-422C-B3A9-4B5A21B1D8DF@lbl.gov> Hello, It?s time already to plan for the next US event. We have a quick, seriously, should take order of 2 minutes, survey to capture your thoughts on location and date. It would help us greatly if you can please fill it out. Best wishes to all in the new year. -Kristy Please give us 2 minutes of your time here: ?https://forms.gle/NFk5q4djJWvmDurW7 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arc at b4restore.com Wed Dec 18 09:30:48 2019 From: arc at b4restore.com (=?iso-8859-1?Q?Andi_N=F8r_Christiansen?=) Date: Wed, 18 Dec 2019 09:30:48 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Message-ID: Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I'm not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns "fileset A1" which needs to be replicated to Site B "fileset A2" the from Site B to Site C "fileset A3". Site B: Owns "fileset B1" which needs to be replicated to Site C "fileset B2". Site C: Holds all data from Site A and B "fileset A3 & B2". We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don't know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B58E.35AA89D0] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Data migration and ILM blueprint - Andi V1.1.pdf Type: application/pdf Size: 236012 bytes Desc: Data migration and ILM blueprint - Andi V1.1.pdf URL: From jack at flametech.com.au Wed Dec 18 10:09:31 2019 From: jack at flametech.com.au (Jack Horrocks) Date: Wed, 18 Dec 2019 21:09:31 +1100 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: Hey Andi I'd be talking to the pixstor boys. Ngenea will do it for you without having to mess about too much. https://ww.pixitmedia.com They are down to earth and won't sell you stuff that doesn't work. Thanks Jack. On Wed, 18 Dec 2019 at 21:00, Andi N?r Christiansen wrote: > Hi, > > > > We are currently building a 3 site spectrum scale solution where data is > going to be generated at two different sites (Site A and Site B, Site C is > for archiving/backup) and then archived on site three. > > I have however not worked with AFM much so I was wondering if there is > someone who knows how to configure AFM to have all data generated in a > file-set automatically being copied to an offsite. > > GPFS AFM is not an option because of latency between sites so NFS AFM is > going to be tunneled between the site via WAN. > > > > As of now we have tried to set up AFM but it only transfers data from home > to cache when a prefetch is manually started or a file is being opened, we > need all files from home to go to cache as soon as it is generated or at > least after a little while. > > It does not need to be synchronous it just need to be automatic. > > > > I?m not sure if attachments will be available in this thread but I have > attached the concept of our design. > > > > Basically the setup is : > > > > Site A: > > Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the > from Site B to Site C ?fileset A3?. > > > > Site B: > > Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. > > > > Site C: > > Holds all data from Site A and B ?fileset A3 & B2?. > > > > We do not need any sites to have failover functionality only a copy of the > data from the two first sites. > > > > If anyone knows how to accomplish this I would be glad to know how! > > > > We have been looking into switching the home and cache site so that data > is generated at the cache sites which will trigger GPFS to transfer the > files to home as soon as possible but as I have little to no experience > with AFM I don?t know what happens to the cache site over time, does the > cache site empty itself after a while or does data stay there until > manually deleted? > > > > Thanks in advance! > > > > Best Regards > > > > > *Andi N?r Christiansen* > *IT Solution Specialist* > > Phone +45 87 81 37 39 > Mobile +45 23 89 59 75 > E-mail arc at b4restore.com > Web www.b4restore.com > > [image: B4Restore on LinkedIn] > [image: B4Restore on > Facebook] [image: B4Restore on Facebook] > [image: Sign up for our newsletter] > > > [image: Download Report] > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: not available URL: From TROPPENS at de.ibm.com Wed Dec 18 11:22:30 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 18 Dec 2019 12:22:30 +0100 Subject: [gpfsug-discuss] Chart decks of SC19 meeting Message-ID: Most chart decks of the SC19 meeting are now available: https://www.spectrumscale.org/presentations/ -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed Dec 18 12:04:11 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 12:04:11 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B58E.35AA89D0.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image003.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B58E.35AA89D0.png Type: image/png Size: 58433 bytes Desc: not available URL: From arc at b4restore.com Wed Dec 18 12:31:14 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 12:31:14 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: Hi Andrew, Alright, that partly confirms that there is no automatically sweep of data at cache site, right? I mean data will not be deleted automatically after a while in the cache fileset, where it is only metadata that stays? If data is kept until a manual deletion of data is requested on the cache site then this is the way to go for us..! Also, Site A has no connection to Site C so it needs to be connected as A to B and B to C.. 
This means: Site A holds live data from Site A, Site B holds live data from Site B and Replicated data from Site A, Site C holds replicated data from A and B. Does that make sense? The connection between A and B is LAN, about 500meters apart.. basically same site but different data centers and strictly separated because of security.. Site C is in another Country. Hence why we cant use GPFS AFM and also why we need to utilize WAN/NFS tunneled for AFM. Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Andrew Beattie Sendt: 18. december 2019 13:04 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, This is basic functionality that is part of Spectrum Scale there is no additional licensing or HSM costs required for this. Set Site C as your AFM Home, and have Site A and Site B both as Caches of Site C you can then Write Data in to Site A - have it stream to Site C, and call it on demand or Prefetch from Site C to Site B as required the Same is true of Site B, you can write Data into Site B, have it Stream to Site C, and call it on demand to site A if you want the data to be Multi Writer then you will need to make sure you use Independent writer as the AFM type https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM) Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 8:00 PM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! 
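To illustrate the independent-writer approach described further up in this thread: the fileset where the data is generated is created as an AFM cache whose target is an NFS export on the downstream cluster, and everything written into it is then queued and pushed to that target automatically. A rough sketch only, where all names are placeholders rather than anything from this thread (fsA is the local file system, siteb-nfs exports /gpfs/fsB/siteA_copy on the downstream side):

# on the cluster where the data is written (the AFM cache side)
mmcrfileset fsA filesetA1 --inode-space new -p afmMode=independent-writer,afmTarget=nfs://siteb-nfs/gpfs/fsB/siteA_copy
mmlinkfileset fsA filesetA1 -J /gpfs/fsA/filesetA1

# watch the replication queue and fileset state
mmafmctl fsA getstate -j filesetA1
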
We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5A5.D4744A80] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From arc at b4restore.com Wed Dec 18 12:33:31 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 12:33:31 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: <8b0c31bf2c774ef7972a2f21f8b64e0a@B4RWEX01.internal.b4restore.com> Hi Jack, Thanks, but we are not looking to implement other products with spectrum scale. We are only searching for a solution to get Spectrum Scale to do the replication for us automatically. ? Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Jack Horrocks Sendt: 18. december 2019 11:10 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Hey Andi I'd be talking to the pixstor boys. Ngenea will do it for you without having to mess about too much. https://ww.pixitmedia.com They are down to earth and won't sell you stuff that doesn't work. Thanks Jack. On Wed, 18 Dec 2019 at 21:00, Andi N?r Christiansen > wrote: Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. 
As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5A7.BC39FB20] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From abeattie at au1.ibm.com Wed Dec 18 12:40:44 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 12:40:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B5A5.D4744A80.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image.image003.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B5A5.D4744A80.png Type: image/png Size: 58433 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Wed Dec 18 12:56:11 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 18 Dec 2019 12:56:11 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> On Wed, 2019-12-18 at 12:04 +0000, Andrew Beattie wrote: > Andi, > > This is basic functionality that is part of Spectrum Scale there is > no additional licensing or HSM costs required for this. > Noting only if you have the Extended Edition. Basic Spectrum Scale licensing does not include AFM :-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From arc at b4restore.com Wed Dec 18 12:59:21 2019 From: arc at b4restore.com (=?iso-8859-1?Q?Andi_N=F8r_Christiansen?=) Date: Wed, 18 Dec 2019 12:59:21 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> Message-ID: <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> To my knowledge basic AFM is part of all Spectrum scale licensing's but AFM-DR is only in Data Management and ECE? https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm /Andi -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Jonathan Buzzard Sendt: 18. december 2019 13:56 Til: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. On Wed, 2019-12-18 at 12:04 +0000, Andrew Beattie wrote: > Andi, > > This is basic functionality that is part of Spectrum Scale there is no > additional licensing or HSM costs required for this. > Noting only if you have the Extended Edition. Basic Spectrum Scale licensing does not include AFM :-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From arc at b4restore.com Wed Dec 18 13:00:24 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 13:00:24 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: , Message-ID: Alright, I will have to dig a little deeper with this then..Thanks!? Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Andrew Beattie Sendt: 18. 
december 2019 13:41 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi Daisy chained AFM caches are a bad idea -- while it might work -- when things go wrong they go really badly wrong. Based on the scenario your describing What I think your going to want to do is AFM-DR between Sites A and B and then look at a policy based copy (Scripted Rsync or somthing similar) from Site B to site C I don't believe at present we support an AFM-DR relationship between a cluster and a Cache which is doing AFM to its home -- You could put in a request with IBM development to see if they would support such an architecture - but i'm not sure its ever been tested. Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 10:31 PM Hi Andrew, Alright, that partly confirms that there is no automatically sweep of data at cache site, right? I mean data will not be deleted automatically after a while in the cache fileset, where it is only metadata that stays? If data is kept until a manual deletion of data is requested on the cache site then this is the way to go for us..! Also, Site A has no connection to Site C so it needs to be connected as A to B and B to C.. This means: Site A holds live data from Site A, Site B holds live data from Site B and Replicated data from Site A, Site C holds replicated data from A and B. Does that make sense? The connection between A and B is LAN, about 500meters apart.. basically same site but different data centers and strictly separated because of security.. Site C is in another Country. Hence why we cant use GPFS AFM and also why we need to utilize WAN/NFS tunneled for AFM. Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af Andrew Beattie Sendt: 18. december 2019 13:04 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, This is basic functionality that is part of Spectrum Scale there is no additional licensing or HSM costs required for this. Set Site C as your AFM Home, and have Site A and Site B both as Caches of Site C you can then Write Data in to Site A - have it stream to Site C, and call it on demand or Prefetch from Site C to Site B as required the Same is true of Site B, you can write Data into Site B, have it Stream to Site C, and call it on demand to site A if you want the data to be Multi Writer then you will need to make sure you use Independent writer as the AFM type https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM) Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. 
Date: Wed, Dec 18, 2019 8:00 PM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5AB.7DA09A50] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From jonathan.buzzard at strath.ac.uk Wed Dec 18 13:03:48 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 18 Dec 2019 13:03:48 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> Message-ID: <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> On Wed, 2019-12-18 at 12:59 +0000, Andi N?r Christiansen wrote: > To my knowledge basic AFM is part of all Spectrum scale licensing's > but AFM-DR is only in Data Management and ECE? > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm > Gees I can't keep up. That didn't used to be the case and possibly not if you are still on Express edition which looks to have been canned. I was sure our DSS-G says Express edition on the license. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From abeattie at au1.ibm.com Wed Dec 18 13:50:26 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 13:50:26 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk>, Message-ID: An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed Dec 18 13:50:47 2019 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 18 Dec 2019 08:50:47 -0500 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> Message-ID: I want to say that AFM was in GPFS before there were editions, and that everything that was pre-edition went into Standard Edition. That timing may not be exact, but Advanced edition has definitely never been required for ?regular? AFM. For the longest time the only ?Advanced? feature was encryption. Of course AFM-DR was eventually added to the Advanced Edition stream, which became DME with perTB licensing, which went to a GNR concert and spawned ECE from incessant complaining community feedback. :) I?m not aware that anyone ever *wanted* Express Edition, except the Linux on Z people, because that?s all they were allowed to have for a while. Liberty, ? Stephen > On Dec 18, 2019, at 8:03 AM, Jonathan Buzzard wrote: > > On Wed, 2019-12-18 at 12:59 +0000, Andi N?r Christiansen wrote: >> To my knowledge basic AFM is part of all Spectrum scale licensing's >> but AFM-DR is only in Data Management and ECE? >> >> https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm >> > > Gees I can't keep up. That didn't used to be the case and possibly not > if you are still on Express edition which looks to have been canned. I > was sure our DSS-G says Express edition on the license. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. 
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Wed Dec 18 14:33:45 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Wed, 18 Dec 2019 14:33:45 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B58E.35AA89D0.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image003.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B58E.35AA89D0.png Type: image/png Size: 58433 bytes Desc: not available URL: From vpuvvada at in.ibm.com Thu Dec 19 13:40:31 2019 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Thu, 19 Dec 2019 13:40:31 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: >Site A: >Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. a. Is this required because A cannot directly talk to C ? b. Is this network restriction ? c. Where is the data generated ? At filesetA1 or filesetA2 or filesetA3 or all the places ? >Site B: >Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. > >Site C: >Holds all data from Site A and B ?fileset A3 & B2?. Same as above, where is the data generated ? >We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to >the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? AFM single writer mode or independent-writer mode can be used to replicate the data from the cache to home automatically. a. Approximately how many files/data can each cache(filesetA1, filesetA2 and fileesetB1) hold ? b. After the archival at the site C, will the data get deleted from the filesets at C? ~Venkat (vpuvvada at in.ibm.com) From: Lyle Gayne/Poughkeepsie/IBM To: gpfsug-discuss at spectrumscale.org, Venkateswara R Puvvada/India/IBM at IBMIN Date: 12/18/2019 08:03 PM Subject: Re: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Adding Venkat so he can chime in. 
Lyle ----- Original message ----- From: "Andi N?r Christiansen" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 5:24 AM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=eqWwibkj7RzAd4hcjuMXLC8a3bAQwHQNAlIm-a5WEOo&s=dWoFLlPqh2RDoLkJVIY0tM-wTVCtrhCqT0oZL4UkmZ8&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
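To make the single-writer / independent-writer suggestion above a little more concrete, here is a minimal sketch of an independent-writer cache fileset. It is only an illustration: the filesystem, fileset, NFS server and path names (fsA, filesetA1, siteb-nfs, /gpfs/fsB/filesetA2) are all hypothetical, and the afmMode/afmTarget attribute values should be checked against the AFM documentation for the Spectrum Scale level actually in use.

# On the cache cluster (site A in the thread above), create an AFM fileset in
# independent-writer mode whose home target is the NFS export backing
# "fileset A2" at site B.  All names here are made up for illustration.
mmcrfileset fsA filesetA1 --inode-space new \
    -p afmMode=independent-writer \
    -p afmTarget=nfs://siteb-nfs/gpfs/fsB/filesetA2

# Link the fileset into the namespace.  Files written under the junction are
# queued and replicated to the home target in the background, so no manual
# prefetch is needed for the cache-to-home direction discussed above.
mmlinkfileset fsA filesetA1 -J /gpfs/fsA/filesetA1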
Name: not available Type: image/png Size: 58433 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 17:22:20 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 17:22:20 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default Message-ID: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School -------------- next part -------------- An HTML attachment was scrubbed... URL: From kywang at us.ibm.com Thu Dec 19 19:06:15 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 14:06:15 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Message-ID: It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. 
A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=Podv2DTbd8lR1FO2ZYZ8x8zq9iYA04zPm4GJnVZqlOw&s=1H_Rhmne_XoS3KS5pOD1RiBL8FQBXV4VdCkEL4KD11E&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 19:18:36 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 19:18:36 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Message-ID: <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. 
fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: From kywang at us.ibm.com Thu Dec 19 19:25:01 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 14:25:01 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Message-ID: >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). 
I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? 
Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=Nbr-ds_gTHq88IqMt3BvuP7-CagDQwEk2Bax6qK4iZo&s=D1aDuwRRm4mrIjdMBLSYo28KEflXV7WLywFw7puhlFU&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16683622.gif Type: image/gif Size: 106 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 19:28:33 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 19:28:33 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Message-ID: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho]"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? 
option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 107 bytes Desc: image002.gif URL: From kywang at us.ibm.com Thu Dec 19 20:56:05 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 15:56:05 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu><794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: Razvan, mmedquota -d -u fs:fset: -d Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command. This option will assign the default quota to the user. The quota entry type will change from "e" to "d_fset". You may need to play a little bit with your system to get the result as you can have default quota per file system set and default quota per fileset enabled. An exemple to illustrate User pfs004 in filesystem fs9 and fileset fset7 has explicit quota set: # mmrepquota -u -v fs9 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none e <=== explicit # mmlsquota -d fs9:fset7 Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType fs9 fset7 USR 102400 1048576 | 10000 0 default on <=== default quota limits for fs9:fset7, the default fs9 fset7 GRP 0 0 | 0 0 i # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none <=== explicit # mmedquota -d -u pfs004 fs9:fset7 <=== run mmedquota -d -u to get default limits # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none <=== takes the default value # mmrepquota -u -v fs9:fset7 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none d_fset <=== now user pfs004 in fset7 takes the default limits # ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:28 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. 
Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? 
From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=ztpfU2VfH5aJ9mmrGarTov3Rf4RZyt417t0UZAdESOg&s=AY4A_7BxD_jvDV7p9tmwCj6wTIZrD9R6ZEXTOLgZDDI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16898169.gif Type: image/gif Size: 106 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16513130.gif Type: image/gif Size: 107 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 21:47:21 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 21:47:21 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: Many thanks ? that?s exactly what I?m looking for. Unfortunately I have an error when attempting to run command : First the background: [root at storinator ~]# mmrepquota -u -v --block-size auto gsb:home |grep rp2927 rp2927 home USR 8.934G 10G 20G 0 none | 86355 1048576 3145728 0 none e [root at storinator ~]# mmlsquota -d --block-size auto gsb:home Default Block Limits | Default File Limits Filesystem Fileset type quota limit | quota limit entryType gsb home USR 20G 30G | 1048576 3145728 default on gsb home GRP 0 0 | 0 0 i And now the most interesting part: [root at storinator ~]# mmedquota -d -u rp2927 gsb:home gsb USR default quota is off Attention: In file system gsb (fileset home), block soft limit (10485760) for user rp2927 is too small. Suggest setting it higher than 26214400. 
Attention: In file system gsb (fileset home), block hard limit (20971520) for user rp2927 is too small. Suggest setting it higher than 26214400. gsb:home is not valid user A little bit more background, maybe of help? [root at storinator ~]# mmlsquota -d gsb Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType gsb root USR 0 0 | 0 0 i gsb root GRP 0 0 | 0 0 i gsb work USR 0 0 | 0 0 i gsb work GRP 0 0 | 0 0 i gsb misc USR 0 0 | 0 0 i gsb misc GRP 0 0 | 0 0 i gsb home USR 20971520 31457280 | 1048576 3145728 default on gsb home GRP 0 0 | 0 0 i gsb shared USR 0 0 | 0 0 i gsb shared GRP 20971520 31457280 | 1048576 3145728 default on [root at storinator ~]# mmlsfs gsb flag value description ------------------- ------------------------ ----------------------------------- -f 8192 Minimum fragment (subblock) size in bytes -i 4096 Inode size in bytes -I 32768 Indirect block size in bytes -m 2 Default number of metadata replicas -M 3 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j scatter Block allocation type -D nfs4 File locking semantics in effect -k nfs4 ACL semantics in effect -n 100 Estimated number of nodes that will mount file system -B 1048576 Block size -Q user;group;fileset Quotas accounting enabled user;group;fileset Quotas enforced none Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement --filesetdf Yes Fileset df enabled? -V 21.00 (5.0.3.0) File system version --create-time Fri Aug 30 16:25:29 2019 File system creation time -z No Is DMAPI enabled? -L 33554432 Logfile size -E Yes Exact mtime mount option -S relatime Suppress atime mount option -K whenpossible Strict replica allocation option --fastea Yes Fast external attributes enabled? --encryption No Encryption enabled? --inode-limit 105906176 Maximum number of inodes in all inode spaces --log-replicas 0 Number of log replicas --is4KAligned Yes is4KAligned? --rapid-repair Yes rapidRepair enabled? --write-cache-threshold 0 HAWC Threshold (max 65536) --subblocks-per-full-block 128 Number of subblocks per full block -P system;Main01 Disk storage pools in file system --file-audit-log No File Audit Logging enabled? --maintenance-mode No Maintenance Mode enabled? -d meta_01;meta_02;meta_03;data_1A;data_1B;data_2A;data_2B;data_3A;data_3B Disks in file system -A yes Automatic mount option -o none Additional mount options -T /gpfs/cesRoot/gsb Default mount point --mount-priority 2 Mount priority Any ideas? Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 3:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Razvan, mmedquota -d -u fs:fset: -d Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command. This option will assign the default quota to the user. The quota entry type will change from "e" to "d_fset". You may need to play a little bit with your system to get the result as you can have default quota per file system set and default quota per fileset enabled. 
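One hedged reading of the error above, going only by the mmlsfs output already posted (which shows "none" under "Default quotas enabled" at the file system level even though the home fileset has per-fileset defaults on): mmedquota -d may be refusing to act because the USR default quota has never been enabled for the file system as a whole. That is a guess rather than a confirmed diagnosis, but a low-risk sequence to try, ideally on a test fileset first, would be:

# Speculative troubleshooting sketch -- enable the file-system-level USR
# default quota, then retry reverting the explicit entry to the per-fileset
# default.  Command forms follow the examples already shown in this thread.
mmdefquotaon -u gsb                         # turn on USR default quota for gsb
mmlsfs gsb -Q                               # "Default quotas enabled" should now list user
mmedquota -d -u rp2927 gsb:home             # retry the revert
mmrepquota -u -v gsb:home | grep rp2927     # Remarks column should read d_fset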
An exemple to illustrate User pfs004 in filesystem fs9 and fileset fset7 has explicit quota set: # mmrepquota -u -v fs9 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none e <=== explicit # mmlsquota -d fs9:fset7 Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType fs9 fset7 USR 102400 1048576 | 10000 0 default on <=== default quota limits for fs9:fset7, the default fs9 fset7 GRP 0 0 | 0 0 i # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none <=== explicit # mmedquota -d -u pfs004 fs9:fset7 <=== run mmedquota -d -u to get default limits # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none <=== takes the default value # mmrepquota -u -v fs9:fset7 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none d_fset <=== now user pfs004 in fset7 takes the default limits # ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:28:51 PM---I see. May I ask one follow-up question, please: what]"Popescu, Razvan" ---12/19/2019 02:28:51 PM---I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:28 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho]"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! 
For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? 
Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 107 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 108 bytes Desc: image003.gif URL: From jonathan.buzzard at strath.ac.uk Thu Dec 19 21:56:28 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 19 Dec 2019 21:56:28 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: <5ffb8059-bd51-29a5-78c5-19c86dcb6dc7@strath.ac.uk> On 19/12/2019 19:28, Popescu, Razvan wrote: > I see. > > May I ask one follow-up question, please:?? what is? ?mmedquota -d -u > ?? ?supposed to do in this case? > > Really appreciate your assistance. In the past (last time I did this was on version 3.2 or 3.3) if you used mmsetquota and set a users quota to 0 then as far as GPFS was concerned it was like you had never set a quota. This was notionally before per fileset quotas where a thing. In reality on my test cluster you could enable them and set them and they seemed to work as would be expected when I tested it. Never used it in production on those versions because well that would be dumb, and never had to remove a quota completely since. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From lavila at illinois.edu Fri Dec 20 15:32:54 2019 From: lavila at illinois.edu (Avila, Leandro) Date: Fri, 20 Dec 2019 15:32:54 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 Message-ID: Good morning, I am looking for additional information related to CVE-2019-4715 to try to determine the applicability and impact of this vulnerability in our environment. https://exchange.xforce.ibmcloud.com/vulnerabilities/172093 and https://www.ibm.com/support/pages/node/1118913 For the documents above it is not very clear if the issue affects mmfsd or just one of the protocol components (NFS,SMB). Thank you very much for your attention and help -- ==================== Leandro Avila | NCSA From Stephan.Peinkofer at lrz.de Fri Dec 20 15:58:12 2019 From: Stephan.Peinkofer at lrz.de (Peinkofer, Stephan) Date: Fri, 20 Dec 2019 15:58:12 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 In-Reply-To: References: Message-ID: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> Dear Leonardo, I had the same issue as you today. 
After some time (after I already opened a case for this) I noticed that they referenced the APAR numbers in the second link you posted. A google search for this apar numbers gives this here https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901 So seems to be SMB related. Best, Stephan Peinkofer Von meinem iPhone gesendet Am 20.12.2019 um 16:33 schrieb Avila, Leandro : ?Good morning, I am looking for additional information related to CVE-2019-4715 to try to determine the applicability and impact of this vulnerability in our environment. https://exchange.xforce.ibmcloud.com/vulnerabilities/172093 and https://www.ibm.com/support/pages/node/1118913 For the documents above it is not very clear if the issue affects mmfsd or just one of the protocol components (NFS,SMB). Thank you very much for your attention and help -- ==================== Leandro Avila | NCSA _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From lavila at illinois.edu Fri Dec 20 17:14:35 2019 From: lavila at illinois.edu (Avila, Leandro) Date: Fri, 20 Dec 2019 17:14:35 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 In-Reply-To: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> References: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> Message-ID: <7efe86e566f610a31e178e0333b65144e5734bc3.camel@illinois.edu> On Fri, 2019-12-20 at 15:58 +0000, Peinkofer, Stephan wrote: > Dear Leonardo, > > I had the same issue as you today. After some time (after I already > opened a case for this) I noticed that they referenced the APAR > numbers in the second link you posted. > > A google search for this apar numbers gives this here > https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901 > > So seems to be SMB related. > > Best, > Stephan Peinkofer > Stephan, Thank you very much for pointing me in the right direction. I appreciate it. From kevin.doyle at manchester.ac.uk Fri Dec 27 11:45:14 2019 From: kevin.doyle at manchester.ac.uk (Kevin Doyle) Date: Fri, 27 Dec 2019 11:45:14 +0000 Subject: [gpfsug-discuss] Question about Policies Message-ID: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 16051 bytes Desc: image001.png URL: From YARD at il.ibm.com Fri Dec 27 12:55:06 2019 From: YARD at il.ibm.com (Yaron Daniel) Date: Fri, 27 Dec 2019 14:55:06 +0200 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=Wg3EAA9O8sH3c_zHS2h8miVpSosqtXulMRqXMRwSMe0&s=TdemXXkFD1mjpxNFg7Y_DYYPpJXZk7BmQcW9hWQDLso&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 4338 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 16051 bytes Desc: not available URL: From kevin.doyle at manchester.ac.uk Fri Dec 27 13:56:29 2019 From: kevin.doyle at manchester.ac.uk (Kevin Doyle) Date: Fri, 27 Dec 2019 13:56:29 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1131538866] From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:_1_10392F3C103929880046F589C22584DD] Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel [cid:_2_103C9B0C103C96FC0046F589C22584DD] [cid:_2_103C9D14103C96FC0046F589C22584DD] [cid:_2_103C9F1C103C96FC0046F589C22584DD] [cid:_2_103CA124103C96FC0046F589C22584DD] [cid:_2_103CA32C103C96FC0046F589C22584DD] [cid:_2_103CA534103C96FC0046F589C22584DD] [cid:_2_103CA73C103C96FC0046F589C22584DD] [cid:_2_103CA944103C96FC0046F589C22584DD] From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? 
From YARD at il.ibm.com Fri Dec 27 14:11:40 2019
From: YARD at il.ibm.com (Yaron Daniel)
Date: Fri, 27 Dec 2019 14:11:40 +0000
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
Message-ID:

Hi

As you said, MIGRATE moves data between different pools (ILM/external pools such as tape). If you need to move a directory to a different location within the same pool, you will have to use the OS mv command; as far as I remember there is no "move within the same pool" policy action.

Regards

Yaron Daniel
Storage Architect - IL Lab Services (Storage), IBM Israel
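To make that second, "move" step concrete, a rough, untested first cut (it assumes the policy output has already been reduced to a plain text file with one absolute pathname per line, and that none of the names contain newlines; the target directory is invented) could be:

    # dry run: print the commands instead of executing them
    target=/gpfs/fs0/labdata/old_data
    while IFS= read -r path; do
        echo mv -- "$path" "$target/"
    done < /tmp/old30.paths

More robust variants that cope with awkward file names, and that preserve the directory hierarchy, come up later in the thread.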
From makaplan at us.ibm.com Fri Dec 27 14:19:43 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Fri, 27 Dec 2019 09:19:43 -0500
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
Message-ID:

The MIGRATE rule is for moving files from one pool to another without changing the pathname or any attributes - only the storage devices holding the data blocks of the file change. It can also be used with "external" pools to migrate to an HSM system.

"Moving" a file from one folder to another is a different concept.
The mmapplypolicy LIST and EXTERNAL LIST rules can be used to find files older than 30 days and then do any operations you like on them, but you have to write a script to do those operations.

See also the "Information Lifecycle Management" (ILM) chapter of the Spectrum Scale Administration Guide.

AND/OR, for an easy-to-use parallel equivalent of the classic Unix pipeline `find ... | xargs ...`, try `mmfind ... -xargs ...` from the samples/ilm directory:

[root@~/.../samples/ilm]$ ./mmfind
Usage: ./mmfind [mmfind args] { | -inputFileList f -policyFile f }
 mmfind args: [-polFlags 'flag 1 flag 2 ...'] [-logLvl {0|1|2}] [-logFile f]
              [-saveTmpFiles] [-fs fsName] [-mmapplypolicyOutputFile f]
 find invocation -- logic: ! ( ) -a -o
   /path1 [/path2 ...] [expression]
   -atime N -ctime N -mtime N -true -false -perm mode
   -iname PATTERN -name PATTERN -path PATTERN -ipath PATTERN
   -uid N -user NAME -gid N -group NAME -nouser -nogroup
   -newer FILE -older FILE -mindepth LEVEL -maxdepth LEVEL
   -links N -size N -empty -type [bcdpflsD] -inum N
   -exec COMMAND -execdir COMMAND
   -ea NAME -eaWithValue NAME===VALUE -setEA NAME[===VALUE] -deleteEA NAME
   -gpfsImmut -gpfsAppOnly -gpfsEnc -gpfsPool POOL_NAME
   -gpfsMigrate poolFrom,poolTo -gpfsSetPool poolTo
   -gpfsCompress -gpfsUncompress -gpfsSetRep m,r -gpfsWeight NumericExpr
   -ls -fls -print -fprint -print0 -fprint0
   -exclude PATH -xargs [-L maxlines] [-I rplstr] COMMAND
Give -h for a more verbose usage message
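As a concrete and deliberately harmless illustration of that, something along these lines should show what would be selected before any move is attempted; the path is invented, and the mmfind sample may first need to be built in samples/ilm on your node (see the README there):

    cd /usr/lpp/mmfs/samples/ilm
    # inspect the candidate files first
    ./mmfind /gpfs/fs0/labdata -type f -mtime +30 -ls
    # then do a dry run of the intended action via -xargs:
    # echo prints the mv commands instead of running them
    ./mmfind /gpfs/fs0/labdata -type f -mtime +30 -xargs echo mv -t /gpfs/fs0/labdata/old_data

Remove the echo once the output looks right (note that mv -t is the GNU coreutils form).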
From david_johnson at brown.edu Fri Dec 27 14:20:13 2019
From: david_johnson at brown.edu (david_johnson at brown.edu)
Date: Fri, 27 Dec 2019 09:20:13 -0500
Subject: [gpfsug-discuss] Question about Policies
Message-ID: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu>

You would want to look for examples of external scripts that work on the result of running the policy engine in listing mode. The one issue that might need some attention is the way that GPFS quotes unprintable characters in the pathname. So the policy engine generates the list and your external script does the moving.

  -- ddj
  Dave Johnson

> On Dec 27, 2019, at 9:11 AM, Yaron Daniel wrote:
> As you said, it migrates between different pools (ILM/external - tape), so in case you need
> to move a directory to a different location you will have to use the OS mv command. [...]
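If memory serves, the list rule itself can be told to encode such characters, which makes the list file easier to parse safely. A hedged sketch (the ESCAPE clause and its placement should be checked against the ILM chapter for your release):

    cat > /tmp/old30.policy <<'EOF'
    /* ESCAPE '%' asks for RFC3986-style percent-encoding of special
       characters in the emitted pathnames */
    RULE EXTERNAL LIST 'old30' EXEC '' ESCAPE '%'
    RULE 'find_old' LIST 'old30'
         WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(MODIFICATION_TIME)) > 30
    EOF

The consuming script then has to decode the %XX sequences before calling mv.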
From daniel.kidger at uk.ibm.com Fri Dec 27 14:27:43 2019
From: daniel.kidger at uk.ibm.com (Daniel Kidger)
Date: Fri, 27 Dec 2019 14:27:43 +0000
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>, <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com>
Message-ID:

An HTML attachment was scrubbed...
From daniel.kidger at uk.ibm.com Fri Dec 27 14:30:46 2019
From: daniel.kidger at uk.ibm.com (Daniel Kidger)
Date: Fri, 27 Dec 2019 14:30:46 +0000
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
Message-ID:

An HTML attachment was scrubbed...
From jonathan.buzzard at strath.ac.uk Sat Dec 28 15:17:05 2019
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Sat, 28 Dec 2019 15:17:05 +0000
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu>
Message-ID:

On 27/12/2019 14:20, david_johnson at brown.edu wrote:
> You would want to look for examples of external scripts that work on the
> result of running the policy engine in listing mode. The one issue that
> might need some attention is the way that gpfs quotes unprintable
> characters in the pathname. So the policy engine generates the list and
> your external script does the moving.

In my experience a good starting point would be to scan the list of files from the policy engine and separate them into "normal" files - those using basic ASCII and no special characters - and the rest, also known as the "wacky pile".

Given that you are UK based, it is not unreasonable to expect all path and file names to be in English. There might (and if not, probably should) be an institutional policy mandating it. It is not much use if a researcher saves everything in Greek, then gets knocked over by a bus, and the person picking up the work is Spanish, for example.

Hopefully the "wacky pile" is small; however, expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters, to name but a few. Depending on the amount of data in the "wacky" pile, you might just want to forget about moving those files, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time when you try to deal with them in scripts.

JAB.

--
Jonathan A. Buzzard
HPC System Administrator, ARCHIE-WeSt, University of Strathclyde

From Paul.Sanchez at deshaw.com Sat Dec 28 17:07:15 2019
From: Paul.Sanchez at deshaw.com (Sanchez, Paul)
Date: Sat, 28 Dec 2019 17:07:15 +0000
Subject: [gpfsug-discuss] Question about Policies
Message-ID: <9ce3971faea5493daa133b08e4a0113e@deshaw.com>

If you needed to preserve the "wackiness" of the original file and path names (and I'm assuming you need to preserve the pathnames in order to avoid collisions between migrated files from different directories which have the same basename, to allow the files to be found and recovered again later, and so on), then you can use Marc's `mmfind` suggestion, coupled with the -print0 argument, to produce a null-delimited file list which can be fed to an "xargs -0" pipeline or "rsync -0" to do most of the work. Test everything with a "dry-run" mode which reports what it would do without doing it, and with a mode which copies without deleting, to help expose bugs in the process before destroying your data.

If the migration doesn't cross between independent filesets, then file migrations could be performed using "mv" without any actual data copying. (For that matter, it could also be done in two stages by hard-linking, then unlinking.) But I think that there are other potential problems involved, even before considering things like path escaping or fileset boundaries...
If everything is predicated on the age of a file, you will need to create the missing directory hierarchy in the target structure for files which need to be "migrated". If files in a directory vary in age, you may move some files but leave others alone (until they become old enough to migrate), creating incomplete and probably unusable versions at both the source and the target. What if a user recreates the missing files as they disappear? As they later age, do you overwrite the files on the target? What if a directory name is later changed to a file name, or vice versa? Will you ever need to "restore" these structures? If so, will you merge them back into the original source if both a non-empty source and target directory exist? Should we wait for an entire directory hierarchy to age out and then archive it atomically? (We would want a way to know where project directory boundaries are.)

I would urge you to think about how complex this might actually get before you start performing surgery within data sets. I would be inclined to challenge the original requirements, to ensure that what you are able to accomplish matches up with the real goals without creating a raft of new operational problems or loss of work product. Depending on the original goal, it may be possible to do this (more safely) with snapshots or tarballs.

-Paul
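A rough sketch of the null-delimited, dry-run style Paul describes (untested; the paths are invented, and this simple form flattens everything into a single target directory rather than preserving the hierarchy):

    # emit a NUL-delimited candidate list and echo the mv commands
    ./mmfind /gpfs/fs0/labdata -type f -mtime +30 -print0 |
        xargs -0 -I{} echo mv -- {} /gpfs/fs0/labdata/old_data/

Preserving the relative hierarchy needs the extra mkdir -p step and the collision handling that Paul's questions above are getting at.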
From makaplan at us.ibm.com Sat Dec 28 19:49:01 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Sat, 28 Dec 2019 14:49:01 -0500
Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
Message-ID:

The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file lists perfectly - no need to worry about whitespace and so forth. Give it a look-see and a try.

-- marc of GPFS
From jonathan.buzzard at strath.ac.uk Sun Dec 29 10:01:16 2019
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Sun, 29 Dec 2019 10:01:16 +0000
Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
Message-ID:

On 28/12/2019 19:49, Marc A Kaplan wrote:
> The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file
> lists perfectly. No need to worry about whitespaces and so forth.
> Give it a look-see and a try

Indeed, but I get the feeling from the original post that you will need to mung the path/file names to produce a new directory path that the files are to be moved to.
At this point the whole issue of "wacky" directory and file names will rear its ugly head. So for example

  /gpfs/users/joeblogs/experiment`1234?/results *-12-2019.txt

would need moving to something like

  /gpfs/users/joeblogs/experiment`1234?/old_data/results *-12-2019.txt

That is a pit of woe unless you are confident that users are being sensible, or you just forget about the wacky-named files.

In a similar vein, in the past I have zipped up results coming off a piece of experimental equipment every 30 days. Each run on the equipment and its results go in a different directory. So for example the directory

  /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01/

would be zipped up to

  /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01.zip

and the original directory removed. This works well because both Windows Explorer and Finder will let you click into the zip files to see the contents. However, the script that did this worked on the principle of a very strict naming convention; if that was not adhered to, the folders were not zipped up.

Given the original poster's institution, a good guess is that something like this is what is wanting to be achieved.

JAB.

--
Jonathan A. Buzzard
HPC System Administrator, ARCHIE-WeSt, University of Strathclyde

From makaplan at us.ibm.com Sun Dec 29 14:24:28 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Sun, 29 Dec 2019 09:24:28 -0500
Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
Message-ID:

Correct, you may need to use similar parsing/quoting techniques in your renaming scripts.

Just remember, in Unix/Posix/Linux the only 2 special characters/codes in path names are '/' and \0. The former delimits directories and the latter marks the end of the string. And technically the latter isn't ever in a path name; it's only used by system APIs to mark the end of a string that is the pathname argument.

Happy New Year,
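To make that quoting discipline concrete, here is a rough, untested sketch of a rename loop that survives spaces, quotes and even newlines in names. It assumes a NUL-delimited list such as the one mmfind ... -print0 or -fprint0 produces, and the source/target prefixes are invented:

    src=/gpfs/users/joeblogs/labdata
    dst=/gpfs/users/joeblogs/labdata/old_data
    while IFS= read -r -d '' f; do
        rel=${f#"$src"/}                 # path relative to the source tree
        echo mkdir -p -- "$dst/$(dirname -- "$rel")"
        echo mv -- "$f" "$dst/$rel"
    done < old-files.0                   # NUL-delimited list, e.g. from mmfind -fprint0

Everything is passed as a quoted variable with a literal -- in front of it, so nothing in the name is ever interpreted by the shell or by mv; drop the echos once the dry run looks right.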
URL: From makaplan at us.ibm.com Mon Dec 30 16:29:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 11:29:52 -0500 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: Also see if your distribution includes samples/ilm/mmxcp which, if you are determined to cp or mv from one path to another, shows a way to do that easily in perl, using code similar to the aforementions bin/mmxargs Here is the path changing part... ... $src =~ s/'/'\\''/g; # any ' within the name like x'y become x'\''y then we quote all names passed to commands my @src = split('/',$src); my $sra = join('/', @src[$strip+1..$#src-1]); $newtarg = "'" . $target . '/' . $sra . "'"; ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Mon Dec 30 21:48:00 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 21:48:00 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: On 30/12/2019 16:20, Marc A Kaplan wrote: > Now apart from the mechanics of handling and manipulating pathnames ... > > the idea to manage storage by "mv"ing instead of MIGRATEing (GPFS-wise) > may be ill-advised. > > I suspect this is a hold-over or leftover from the old days -- when a > filesystem was comprised of just a few storage devices (disk drives) and > the only way available to manage space was to mv files to another > filesystem or archive to tape or whatnot.. > I suspect based on the OP is from (a cancer research institute which is basically life sciences) that this is an incorrect assumption. I would guess this is about "archiving" results coming off experimental equipment. I use the term "archiving" in the same way that various email programs try and "archive" my old emails. That is to prevent the output directory of the equipment filling up with many thousands of files and/or directories I want to automate the placement in a directory hierarchy of old results. Imagine a piece of equipment that does 50 different analysis's a day every working day. That's a 1000 a month or ~50,000 a year. It's about logically moving stuff to keep ones working directory manageable but making finding an old analysis easy to find. I would also note that some experimental equipment would do many more than 50 different analysis's a day. It's a common requirement in any sort of research facility, especially when they have central facilities for doing analysis on equipment that would be too expensive for an individual group or where it makes sense to "outsource" repetitive basics analysis to lower paid staff. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Mon Dec 30 22:14:18 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 22:14:18 +0000 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: <3127843a-403f-d360-4b6c-9b410c9ef39d@strath.ac.uk> On 29/12/2019 14:24, Marc A Kaplan wrote: > Correct, you may need to use similar parsing/quoting techniques in your > renaming scripts. 
> 0 > Just remember, in Unix/Posix/Linux the only 2 special characters/codes > in path names are '/' and \0. The former delimits directories and the > latter marks the end of the string. > And technically the latter isn't ever in a path name, it's only used by > system APIs to mark the end of a string that is the pathname argument. >i I am not sure even that is entirely true. Certainly MacOS X in the past would allow '/' in file names. You find this out when a MacOS user tries to migrate their files to a SMB based file server and the process trips up because they have named a whole bunch of files in the format "My Results 30/12/2019.txt" At this juncture I note that MacOS is certified Unix :-) I think it is more a file system limitation than anything else. I wonder what happens when you mount a HFS+ file system with such named files on Linux... I would at this point note that the vast majority of "wacky" file names originate from MacOS (both Classic and X) users. Also while you are otherwise technically correct about what is allowed in a file name just try creating a file name with a newline character in it using either a GUI tool or the command line. You have to be really determined to achieve it. I have also seen \007 in a file name, I mean really. Our training for new HPC users has a section covering file names which includes advising users not to use "wacky" characters in them as we don't guarantee their continued survival. That is if we do something on the file system and they get "lost" as a result it's your own fault. In my view restricting yourself to the following is entirely sensible https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata Also while Unix is generally case sensitive creating files that would clash if accessed case insensitive is really dumb and should be avoided. Again, if it causes you problems in future, it sucks to be you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From makaplan at us.ibm.com Mon Dec 30 23:35:02 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 18:35:02 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu><9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: Yes, that is entirely true, if not then basic Posix calls like open(2) are broken. https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Mon Dec 30 23:40:37 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 18:40:37 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu><9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: As I said :"MAY be ill-advised". If you have a good reason to use "mv" then certainly, use it! But there are plenty of good naming conventions for the scenario you give... Like, start a new directory of results every day, week or month... /fs/experiments/y2019/m12/d30/fileX.ZZZ ... OF course, if you want or need to mv, or cp and/or rm the metadata out of the filesystem, then eventually you do so! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Mon Dec 30 23:55:17 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 23:55:17 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: <09180fd7-8121-02d6-6384-8ef4b9c7decd@strath.ac.uk> On 30/12/2019 23:40, Marc A Kaplan wrote: > As I said :"MAY be ill-advised". > > If you have a good reason to use "mv" then certainly, use it! > > But there are plenty of good naming conventions for the scenario you > give... > Like, start a new directory of results every day, week or month... > > > /fs/experiments/y2019/m12/d30/fileX.ZZZ ... > > OF course, if you want or need to mv, or cp and/or rm the metadata out > of the filesystem, then eventually you do so! > Possibly, but often (in fact sensibly) the results are saved in the first instance to the local machine because any network issue and boom your results are gone as doing the analysis destroys the sample. That in life sciences can easily mean several days and $1000. The results are then uploaded automatically to the file server. That gets a whole bunch more complicated. Honest you simply don't want to go there getting it to be done different. It would be less painful to have a tooth extracted without anesthetic. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Tue Dec 31 00:00:06 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 31 Dec 2019 00:00:06 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: On 30/12/2019 23:35, Marc A Kaplan wrote: > Yes, that is entirely true, if not then basic Posix calls like open(2) > are broken. > > _https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename_ > > That's for Linux and possibly Posix. Like I said on the certified *Unix* that is macOS it's perfectly fine. I have bumped into it more times that I care to recall. Try moving a MacOS AFP server to a different OS and then get back to me... JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From alvise.dorigo at psi.ch Tue Dec 3 14:35:22 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Tue, 3 Dec 2019 14:35:22 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Message-ID: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From anobre at br.ibm.com Tue Dec 3 14:44:21 2019 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Tue, 3 Dec 2019 14:44:21 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <5f54e13651cc45ef999ebf2417792b38@psi.ch> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Tue Dec 3 14:54:31 2019 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Tue, 3 Dec 2019 09:54:31 -0500 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <5f54e13651cc45ef999ebf2417792b38@psi.ch> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: An HTML attachment was scrubbed... URL: From TOMP at il.ibm.com Tue Dec 3 15:02:36 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Tue, 3 Dec 2019 17:02:36 +0200 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. 
finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=5Ji4Rrk0dQhYpwfSkj-6RPXwgYhhiqqImlaHmuHrOsk&s=Z0aCyK22UfYZ2VIREnwtIirpmS2fM6a7IrkEUnuWyB8&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue Dec 3 15:03:41 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 3 Dec 2019 15:03:41 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> On 03/12/2019 14:54, Olaf Weiser wrote: > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - ?you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster > .. ?.. add the nodes to your existing cluster.. and then start > configuring the RGs > I was under the impression (from post by IBM employees on this list) that you are not allowed to mix GNR, ESS, DSS, classical GPFS, DDN GPFS etc. in the same cluster. Not a technical limitation but a licensing one. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From makaplan at us.ibm.com Tue Dec 3 19:14:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 3 Dec 2019 14:14:52 -0500 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> <02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> Message-ID: IF you have everything properly licensed and then you reconfigure... It may work okay... But then you may come up short if you ask for IBM support or service... So depending how much support you need or desire... Or take the easier and supported path... And probably accomplish most of what you need -- let each cluster be and remote mount onto clients which could be on any connected cluster. From: Jonathan Buzzard To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 10:04 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org On 03/12/2019 14:54, Olaf Weiser wrote: > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - ?you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. 
simply unconfigure cluster > .. ?.. add the nodes to your existing cluster.. and then start > configuring the RGs > I was under the impression (from post by IBM employees on this list) that you are not allowed to mix GNR, ESS, DSS, classical GPFS, DDN GPFS etc. in the same cluster. Not a technical limitation but a licensing one. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIF-g&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=lEWw7H2AdQxSCu_vbgGHhztL0y7voTATCG_KfbRgHJw&s=wg5NvwO5OAw-jLCsL-BtSRGisghnRu5F39r_G_gKNKk&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From lgayne at us.ibm.com Tue Dec 3 19:20:55 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Tue, 3 Dec 2019 19:20:55 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch><02652fcb-3345-d07f-f90b-833ecf380010@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0E56DFFAD6E28f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0E56DFFAD6E28f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15754003609670.gif Type: image/gif Size: 105 bytes Desc: not available URL: From lgayne at us.ibm.com Tue Dec 3 19:30:31 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Tue, 3 Dec 2019 19:30:31 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch> Message-ID: An HTML attachment was scrubbed... URL: From alvise.dorigo at psi.ch Wed Dec 4 09:29:32 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 4 Dec 2019 09:29:32 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , <5f54e13651cc45ef999ebf2417792b38@psi.ch>, Message-ID: <62721c5c4c3640848e1513d03965fefe@psi.ch> Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. 
NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. 
if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Dec 4 11:21:54 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 4 Dec 2019 12:21:54 +0100 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <62721c5c4c3640848e1513d03965fefe@psi.ch> References: <5f54e13651cc45ef999ebf2417792b38@psi.ch> <62721c5c4c3640848e1513d03965fefe@psi.ch> Message-ID: Adding the GL2 into your existing cluster shouldn?t be any problem. You would just delete the existing cluster on the GL2, then on the EMS run something like: gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license and then afterwards create the RGs: gssgenclusterrgs -G gss_ppc64 --suffix=-hs Then create the vdisks/nsds and add to your existing filesystem. Beware that last time I did this, gssgenclusterrgs triggered a "mmshutdown -a" on the whole cluster, because it wanted to change some config settings... Caught me a bit by surprise.. -jf ons. 4. des. 2019 kl. 10:44 skrev Dorigo Alvise (PSI) : > Thank you all for the answer. I try to recap my answers to your questions: > > > > 1. the purpose is not to merge clusters "per se"; it is adding the > GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp > (which is running out of free space); of course I know well the > heterogeneity of this hypothetical system, so the GL2's NSD would go to a > special pool; but in the end I need a unique namespace for files. > 2. 
I do not want to do the opposite (mergin GPFS/NetApp into the GL2 > cluster) because the former is in production and I cannot schedule long > downtimes > 3. All system have proper licensing of course; what does it means that > I could loose IBM support ? if the support is for a failing disk drive I do > not think so; if the support is for a "strange" behaviour of GPFS I can > probably understand > 4. NSD (in the NetApp system) are in their roles: what do you mean > exactly ? there's RAIDset attached to servers that are actually NSD > together with their attached LUN > > > Alvise > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne < > lgayne at us.ibm.com> > *Sent:* Tuesday, December 3, 2019 8:30:31 PM > *To:* gpfsug-discuss at spectrumscale.org > *Cc:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp *< --- Are these > NSD servers in their GPFS roles (where Scale "runs on top"*? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest > cluster, rather than simply allowing remote mount of the ESS servers by the > other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our > coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no > issues. Having them as separate file systems (NetApp vs. ESS) means no > concerns regarding varying architectures within the same fs serving or > failover scheme. Mixing such as compute nodes would mean some performance > differences across the different clients, but you haven't described your > compute (NSD client) details. > > Lyle > > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR > configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based > file system, then all affected IBM Spectrum Scale RAID objects will be > exported as well. This includes recovery groups, declustered arrays, > vdisks, and any other file systems that are based on these objects. For > more information about IBM Spectrum Scale RAID, see *IBM Spectrum > Scale RAID: Administration*. " > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since > I would assume that the cluster config version is to high for the NetApp > based cluster. 
> I would also suspect that the filesystem version on the ESS will be > different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to > a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. > .. add the nodes to your existing cluster.. and then start configuring the > RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" < > gpfsug-discuss at spectrumscale.org> > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anobre at br.ibm.com Wed Dec 4 14:07:18 2019 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Wed, 4 Dec 2019 14:07:18 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <62721c5c4c3640848e1513d03965fefe@psi.ch> Message-ID: An HTML attachment was scrubbed... 
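Once I/O nodes have been added and recovery groups created along the lines Jan-Frode describes above, a few standard commands can confirm the result. This is only a generic checklist, not something specific to the hardware in this thread:

  # Verify the new nodes joined the cluster and are active
  mmlscluster
  mmgetstate -a

  # Verify the recovery groups and their declustered arrays were created
  mmlsrecoverygroup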
URL: From alvise.dorigo at psi.ch Thu Dec 5 09:15:13 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 5 Dec 2019 09:15:13 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <62721c5c4c3640848e1513d03965fefe@psi.ch>, Message-ID: Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, >From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, Anderson Nobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone: 55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. 
All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. 
Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Thu Dec 5 10:24:08 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Thu, 5 Dec 2019 10:24:08 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: Message-ID: One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: > > ? 
> Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. > > > > A > > From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre > Sent: Wednesday, December 4, 2019 3:07:18 PM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > Hi Dorigo, > > From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. > > Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata > > Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. > > One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: > https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning > > Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. > > Abra?os / Regards / Saludos, > > > Anderson Nobre > Power and Storage Consultant > IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services > > > > Phone: 55-19-2132-4317 > E-mail: anobre at br.ibm.com > > > ----- Original message ----- > From: "Dorigo Alvise (PSI)" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: "gpfsug-discuss at spectrumscale.org" > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Date: Wed, Dec 4, 2019 06:44 > > Thank you all for the answer. I try to recap my answers to your questions: > > > > the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. > I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes > All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand > NSD (in the NetApp system) are in their roles: what do you mean exactly ? 
there's RAIDset attached to servers that are actually NSD together with their attached LUN > > Alvise > From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne > Sent: Tuesday, December 3, 2019 8:30:31 PM > To: gpfsug-discuss at spectrumscale.org > Cc: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? > > ...... > Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. > > Lyle > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration. " > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. > I would also suspect that the filesystem version on the ESS will be different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. 
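For reference, the "remote mount" alternative mentioned at several points in this thread uses the standard multi-cluster commands. The cluster, node, device and mount point names below (ess.cluster, netapp.cluster, essio1, gl2fs0) are placeholders, and the key generation and exchange is only outlined, so treat this as a sketch rather than a complete procedure:

  # On the cluster that owns the file system (the ESS/GL2 cluster):
  mmauth genkey new
  mmauth update . -l AUTHONLY
  mmauth add netapp.cluster -k /tmp/netapp_id_rsa.pub   # public key copied from the other cluster
  mmauth grant netapp.cluster -f gl2fs0

  # On the accessing cluster (the NetApp-based one):
  mmremotecluster add ess.cluster -n essio1,essio2 -k /tmp/ess_id_rsa.pub
  mmremotefs add gl2fs0 -f gl2fs0 -C ess.cluster -T /gpfs/gl2fs0
  mmmount gl2fs0 -a

This keeps the two clusters separately administered while still giving the NSD clients a path to the new capacity, which is why it keeps coming up in the thread as the supported alternative to a merge.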
> > if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From alvise.dorigo at psi.ch Thu Dec 5 14:50:01 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 5 Dec 2019 14:50:01 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: , Message-ID: <15d9b14554534be7a7adca204ca3febd@psi.ch> This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... 
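For readers weighing the merge path, Jan-Frode's sequence from earlier in the thread, extended with the steps that would land the new capacity in its own storage pool, looks roughly as follows. The node, file system, pool and stanza file names (gssio1-hs, fs0, essdata) are placeholders and the exact gss* options depend on the ESS release, so this is an outline, not a verified procedure:

  # Add the (empty, unconfigured) ESS I/O nodes to the existing cluster
  gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license
  gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license

  # Create the recovery groups and the vdisk-based NSDs on the new nodes
  gssgenclusterrgs -G gss_ppc64 --suffix=-hs
  gssgenvdisks                 # options depend on the ESS level

  # Add the new NSDs to the existing file system in their own pool;
  # the stanza file assigns pool=essdata to each NSD
  mmadddisk fs0 -F /tmp/ess_nsd.stanza

  # Optionally steer new file placement to the new pool, e.g. with a
  # policy file containing:  RULE 'toESS' SET POOL 'essdata'
  mmchpolicy fs0 /tmp/placement.pol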
A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [https://images.youracclaim.com/images/c49300ae-d13e-4071-90f5-15f59d199c9e/IBM%2BVolunteers%2BGold%2Bv6.png] [https://images.youracclaim.com/images/f2539224-f951-46b4-b376-b88f21c2be98/IBM-Selling-Certification---Level-1.png] [https://images.youracclaim.com/images/ea52b12f-97ac-4e72-8d24-b0ced8054e7d/Storage%2BTechnical%2BV1.png] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: ? Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? 
IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. 
mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is an IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is too high for the NetApp-based cluster. I would also suspect that the filesystem version on the ESS will be different.
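For illustration, the export/import mechanism referred to here looks roughly like this; the device name gl2fs0 and the file path are placeholders, and, as noted, the version mismatch between the two clusters would very likely stop the import, so this is a sketch of the mechanism rather than a recommended procedure:

  # On the ESS/GL2 cluster: unmount, then export the file system together
  # with its Spectrum Scale RAID objects (recovery groups, declustered
  # arrays, vdisks)
  mmumount gl2fs0 -a
  mmexportfs gl2fs0 -o /tmp/gl2fs0.exportfile

  # Copy the export file to the destination cluster, move the servers and
  # disks across, then import the file system there
  mmimportfs gl2fs0 -i /tmp/gl2fs0.exportfile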
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From cblack at nygenome.org Thu Dec 5 15:17:49 2019 From: cblack at nygenome.org (Christopher Black) Date: Thu, 5 Dec 2019 15:17:49 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> If you have two clusters that are hard to merge, but you are facing the need to provide capacity for more writes, another option to consider would be to set up a filesystem on GL2 with an AFM relationship to the filesystem on the netapp gpfs cluster for accessing older data and point clients to the new GL2 filesystem. Some downsides to that approach include introducing a dependency on afm (and potential performance reduction) to get to older data. There may also be complications depending on how your filesets are laid out. At some point when you have more capacity in 5.x cluster and/or are ready to move off netapp, you could use afm to prefetch all data into new filesystem. In theory, you could then (re)build nsd servers connected to netapp on 5.x and add them to new cluster and use them for a separate pool or keep them as a separate 5.x cluster. Best, Chris From: on behalf of "Dorigo Alvise (PSI)" Reply-To: gpfsug main discussion list Date: Thursday, December 5, 2019 at 9:50 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [https://images.youracclaim.com/images/c49300ae-d13e-4071-90f5-15f59d199c9e/IBM%2BVolunteers%2BGold%2Bv6.png] [https://images.youracclaim.com/images/f2539224-f951-46b4-b376-b88f21c2be98/IBM-Selling-Certification---Level-1.png] [https://images.youracclaim.com/images/ea52b12f-97ac-4e72-8d24-b0ced8054e7d/Storage%2BTechnical%2BV1.png] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: Thank Anderson for the material. 
In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? 
there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... 
- you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu Dec 5 15:59:07 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 5 Dec 2019 16:59:07 +0100 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> Message-ID: There?s still being maintained the ESS v5.2 release stream with gpfs v4.2.3.x for customer that are stuck on v4. You should probably install that on your ESS if you want to add it to your existing cluster. BTW: I think Tomer misunderstood the task a bit. It sounded like you needed to keep the existing recoverygroups from the ESS in the merge. That would probably be complicated.. Adding an empty ESS to an existing cluster should not be complicated ?- it?s just not properly documented anywhere I?m aware of. -jf tor. 5. des. 2019 kl. 
15:50 skrev Dorigo Alvise (PSI) : > This is a quite critical storage for data taking. It is not easy to update > to GPFS5 because in that facility we have very short shutdown period. Thank > you for pointing out that 4.2.3. But the entire storage will be replaced in > the future; at the moment we just need to expand it to survive for a while. > > > This merge seems quite tricky to implement and I haven't seen consistent > opinions among the people that kindly answered. According to Jan Frode, > Kaplan and T. Perry it should be possible, in principle, to do the merge... > Other people suggest a remote mount, which is not a solution for my use > case. Other suggest not to do that... > > > A > > > > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Daniel Kidger < > daniel.kidger at uk.ibm.com> > *Sent:* Thursday, December 5, 2019 11:24:08 AM > > *To:* gpfsug main discussion list > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > One additional question to ask is : what are your long term plans for the > 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x > (hopefully before 4.2.3 goes out of support)? > > Also I assume your Netapp hardware is the standard Netapp block storage, > perhaps based on their standard 4U60 shelves daisy-chained together? > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum Discover and IBM Cloud Object Store > > + <+44-7818%20522%20266>44-(0)7818 522 266 <+44-7818%20522%20266> > daniel.kidger at uk.ibm.com > > > > > > > > On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: > > ? > > Thank Anderson for the material. In principle our idea was to scratch the > filesystem in the GL2, put its NSD on a dedicated pool and then merge it > into the Filesystem which would remain on V4. I do not want to create a FS > in the GL2 but use its space to expand the space of the other cluster. > > > A > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Anderson Ferreira > Nobre > *Sent:* Wednesday, December 4, 2019 3:07:18 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > Hi Dorigo, > > From point of view of cluster administration I don't think it's a good > idea to have hererogeneous cluster. There are too many diferences between > V4 and V5. And much probably many of enhancements of V5 you won't take > advantage. One example is the new filesystem layout in V5. And at this > moment the way to migrate the filesystem is create a new filesystem in V5 > with the new layout and migrate the data. That is inevitable. I have seen > clients saying that they don't need all that enhancements, but the true is > when you face performance issue that is only addressable with the new > features someone will raise the question why we didn't consider that in the > beginning. > > Use this time to review if it would be better to change the block size of > your fileystem. There's a script called filehist > in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your > current filesystem. 
Here's the link with additional information: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata > > Different RAID configurations also brings unexpected performance > behaviors. Unless you are planning create different pools and use ILM to > manage the files in different pools. > > One last thing, it's a good idea to follow the recommended levels for > Spectrum Scale: > > https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning > > Anyway, you are the system administrator, you know better than anyone how > complex is to manage this cluster. > > Abra?os / Regards / Saludos, > > > *AndersonNobre* > Power and Storage Consultant > IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services > > [image: community_general_lab_services] > > ------------------------------ > Phone:55-19-2132-4317 > E-mail: anobre at br.ibm.com [image: IBM] > > > > ----- Original message ----- > From: "Dorigo Alvise (PSI)" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: "gpfsug-discuss at spectrumscale.org" > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Wed, Dec 4, 2019 06:44 > > > Thank you all for the answer. I try to recap my answers to your questions: > > > > 1. the purpose is not to merge clusters "per se"; it is adding the > GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp > (which is running out of free space); of course I know well the > heterogeneity of this hypothetical system, so the GL2's NSD would go to a > special pool; but in the end I need a unique namespace for files. > 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 > cluster) because the former is in production and I cannot schedule long > downtimes > 3. All system have proper licensing of course; what does it means that > I could loose IBM support ? if the support is for a failing disk drive I do > not think so; if the support is for a "strange" behaviour of GPFS I can > probably understand > 4. NSD (in the NetApp system) are in their roles: what do you mean > exactly ? there's RAIDset attached to servers that are actually NSD > together with their attached LUN > > > Alvise > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne < > lgayne at us.ibm.com> > *Sent:* Tuesday, December 3, 2019 8:30:31 PM > *To:* gpfsug-discuss at spectrumscale.org > *Cc:* gpfsug-discuss at spectrumscale.org > *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster > > For: > > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp *< --- Are these > NSD servers in their GPFS roles (where Scale "runs on top"*? > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. > > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > ...... 
> Some observations: > > > 1) Why do you want to MERGE the GL2 into a single cluster with the rest > cluster, rather than simply allowing remote mount of the ESS servers by the > other GPFS (NSD client) nodes? > > 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our > coexistence rules. > > 3) Mixing x86 and Power, especially as NSD servers, should pose no > issues. Having them as separate file systems (NetApp vs. ESS) means no > concerns regarding varying architectures within the same fs serving or > failover scheme. Mixing such as compute nodes would mean some performance > differences across the different clients, but you haven't described your > compute (NSD client) details. > > Lyle > > ----- Original message ----- > From: "Tomer Perry" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Date: Tue, Dec 3, 2019 10:03 AM > > Hi, > > Actually, I believe that GNR is not a limiting factor here. > mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR > configuration as well: > "If the specified file system device is a IBM Spectrum Scale RAID-based > file system, then all affected IBM Spectrum Scale RAID objects will be > exported as well. This includes recovery groups, declustered arrays, > vdisks, and any other file systems that are based on these objects. For > more information about IBM Spectrum Scale RAID, see *IBM Spectrum > Scale RAID: Administration*." > > OTOH, I suspect that due to the version mismatch, it wouldn't work - since > I would assume that the cluster config version is to high for the NetApp > based cluster. > I would also suspect that the filesystem version on the ESS will be > different. > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: "Olaf Weiser" > To: gpfsug main discussion list > Date: 03/12/2019 16:54 > Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to > a non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hallo > "merging" 2 different GPFS cluster into one .. is not possible .. > for sure you can do "nested" mounts .. .but that's most likely not, what > you want to do .. > > if you want to add a GL2 (or any other ESS) ..to an existing (other) > cluster... - you can't preserve ESS's RG definitions... > you need to create the RGs after adding the IO-nodes to the existing > cluster... > > so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. > .. add the nodes to your existing cluster.. and then start configuring the > RGs > > > > > > From: "Dorigo Alvise (PSI)" > To: "gpfsug-discuss at spectrumscale.org" < > gpfsug-discuss at spectrumscale.org> > Date: 12/03/2019 09:35 AM > Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a > non-GNR cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hello everyone, > I have: > - A NetApp system with hardware RAID > - SpectrumScale 4.2.3-13 running on top of the NetApp > - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) > > What I need to do is to merge the GL2 in the other GPFS cluster (running > on the NetApp) without loosing, of course, the RecoveryGroup configuration, > etc. 
> > I'd like to ask the experts > 1. whether it is feasible, considering the difference in the GPFS > versions, architectures differences (x86_64 vs. power) > 2. if yes, whether anyone already did something like this and what > is the best strategy suggested > 3. finally: is there any documentation dedicated to that, or at > least inspiring the correct procedure ? > > Thank you very much, > > Alvise Dorigo_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Thu Dec 5 15:58:39 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Thu, 5 Dec 2019 10:58:39 -0500 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> References: <15d9b14554534be7a7adca204ca3febd@psi.ch> <487517C3-5B4A-401E-85E5-A1874527A115@nygenome.org> Message-ID: One tricky bit in this case is that ESS is always recommended to be its own standalone cluster, so MERGING it as a storage pool or pools into a cluster already containing NetApp storage wouldn't be generally recommended. Yet you cannot achieve the stated goal of a single fs image/mount point containing both types of storage that way. Perhaps our ESS folk should weigh in regarding possible routs? Lyle From: Christopher Black To: gpfsug main discussion list Date: 12/05/2019 10:53 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org If you have two clusters that are hard to merge, but you are facing the need to provide capacity for more writes, another option to consider would be to set up a filesystem on GL2 with an AFM relationship to the filesystem on the netapp gpfs cluster for accessing older data and point clients to the new GL2 filesystem. Some downsides to that approach include introducing a dependency on afm (and potential performance reduction) to get to older data. There may also be complications depending on how your filesets are laid out. At some point when you have more capacity in 5.x cluster and/or are ready to move off netapp, you could use afm to prefetch all data into new filesystem. In theory, you could then (re)build nsd servers connected to netapp on 5.x and add them to new cluster and use them for a separate pool or keep them as a separate 5.x cluster. 
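A very rough sketch of the AFM approach described above, with every name invented for the example (a new filesystem 'essfs' on the GL2, an AFM fileset 'olddata' caching the old NetApp-based filesystem, which is assumed to be remotely mounted on the AFM gateway nodes under /gpfs/netappfs):

  # on the GL2 cluster: read-only AFM cache fileset backed by the old filesystem
  mmcrfileset essfs olddata --inode-space new -p afmMode=ro -p afmTarget=gpfs:///gpfs/netappfs
  mmlinkfileset essfs olddata -J /gpfs/essfs/olddata
  # later, when retiring the NetApp, pull the remaining data into the new filesystem,
  # e.g. from a file list produced by a policy scan
  mmafmctl essfs prefetch -j olddata --list-file /tmp/files-to-migrate

This is only meant to show the shape of the setup, not a tested recipe; AFM mode, fileset layout and gateway node sizing would all need real planning.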
Best, Chris From: on behalf of "Dorigo Alvise (PSI)" Reply-To: gpfsug main discussion list Date: Thursday, December 5, 2019 at 9:50 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... A From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. 
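On the "different pools and ILM" point: if the GL2 NSDs were added to the existing filesystem as their own storage pool, new files only land on that pool when a placement rule sends them there. A minimal sketch, where the pool name 'esspool', the fileset name 'newdata' and the device name 'gpfs0' are all just examples:

  /* placement.pol -- route new files for one fileset to the ESS pool, default elsewhere */
  RULE 'toESS'   SET POOL 'esspool' FOR FILESET ('newdata')
  RULE 'default' SET POOL 'system'   /* substitute the existing default data pool */

  mmchpolicy gpfs0 placement.pol -I yes
  mmlspolicy gpfs0 -L

MIGRATE rules run through mmapplypolicy could later move colder data between the pools; the details depend entirely on the real pool and fileset layout.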
One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. 
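And if the route taken were the one Olaf and Jan-Frode sketch elsewhere in this thread (wipe the GL2, add its now-empty I/O nodes to the existing cluster, then rebuild the recovery groups there), the cluster-side part is ordinary node addition; the node names below are invented:

  mmaddnode -N essio1,essio2
  mmchlicense server --accept -N essio1,essio2
  # recovery groups, declustered arrays and vdisks are then recreated with the
  # ESS tooling (mmvdisk / mmcrrecoverygroup) following the Spectrum Scale RAID documentation

The ESS-specific rebuild is the part that needs the Spectrum Scale RAID documentation and careful planning, since the existing recovery group definitions are not preserved.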
Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? 
Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=96nejPA0lJgbr9YP3LlaHsFUacfAy3QObHRl5SSeu6o&s=E1HEKXJOzKNDJan1TBYUlV1ckkhUjDiqUXT-x-p-QbI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: From stockf at us.ibm.com Thu Dec 5 20:13:28 2019 From: stockf at us.ibm.com (Frederick Stock) Date: Thu, 5 Dec 2019 20:13:28 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: <15d9b14554534be7a7adca204ca3febd@psi.ch> References: <15d9b14554534be7a7adca204ca3febd@psi.ch>, , Message-ID: An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Fri Dec 6 14:37:02 2019 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Fri, 6 Dec 2019 14:37:02 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Message-ID: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 69557 bytes Desc: image001.png URL: From lists at esquad.de Sun Dec 8 17:22:43 2019 From: lists at esquad.de (Dieter Mosbach) Date: Sun, 8 Dec 2019 18:22:43 +0100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Am 06.12.2019 um 15:37 schrieb Carl Zetie - carlz at us.ibm.com:> > Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage > Clicking on "Try free developer edition" leads to a download of "Spectrum Scale 4.2.2 GUI Open Beta zip file" from 2015-08-22 ... Kind regards Dieter From alvise.dorigo at psi.ch Mon Dec 9 10:03:58 2019 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Mon, 9 Dec 2019 10:03:58 +0000 Subject: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster In-Reply-To: References: <15d9b14554534be7a7adca204ca3febd@psi.ch>, , , Message-ID: <2bad2631ebf44042b4004fb5c51eb7d0@psi.ch> I thank you all so much for the participation on this topic. We realized that what we wanted to do is not only "exotic", but also not officially supported and as far as I understand no one did something like that in production. We do not want to be the first with production systems. We decided that the least disruptive thing to do is remotely mount the GL2's filesystem into the NetApp/GPFS cluster and for a limited amount of time (less than 1 year) we are going to survive with different filesystem namespaces, managing users and groups with some symlink system or other high level solutions. Thank you very much, Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Frederick Stock Sent: Thursday, December 5, 2019 9:13:28 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster If you plan to replace all the storage then why did you choose to integrate a ESS GL2 rather than use another storage option? Perhaps you had already purchased the ESS system? Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Thu, Dec 5, 2019 2:57 PM This is a quite critical storage for data taking. It is not easy to update to GPFS5 because in that facility we have very short shutdown period. Thank you for pointing out that 4.2.3. But the entire storage will be replaced in the future; at the moment we just need to expand it to survive for a while. This merge seems quite tricky to implement and I haven't seen consistent opinions among the people that kindly answered. According to Jan Frode, Kaplan and T. Perry it should be possible, in principle, to do the merge... Other people suggest a remote mount, which is not a solution for my use case. Other suggest not to do that... 
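For the record, the remote-mount route described above is the standard multicluster setup; a compressed sketch, with all cluster, node, path and filesystem names invented (ESS cluster 'ess.example.org' owning filesystem 'essfs', accessed from the NetApp-based cluster 'netapp.example.org'):

  # once per cluster: generate and enable the authentication key
  mmauth genkey new
  mmauth update . -l AUTHONLY
  # on the owning (ESS) cluster: authorize the other cluster and grant access to essfs
  mmauth add netapp.example.org -k /tmp/netapp_id_rsa.pub
  mmauth grant netapp.example.org -f essfs
  # on the accessing (NetApp-based) cluster: register the remote cluster and filesystem
  mmremotecluster add ess.example.org -n essio1,essio2 -k /tmp/ess_id_rsa.pub
  mmremotefs add essfs_remote -f essfs -C ess.example.org -T /gpfs/essfs
  mmmount essfs_remote -a

The key files and contact nodes are placeholders; the exact procedure (including when the daemon has to be down while the cipher list is first set) is in the remote mounting chapter of the administration guide.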
A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Daniel Kidger Sent: Thursday, December 5, 2019 11:24:08 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster One additional question to ask is : what are your long term plans for the 4.2.3 Spectrum Scake cluster? Do you expect to upgrade it to version 5.x (hopefully before 4.2.3 goes out of support)? Also I assume your Netapp hardware is the standard Netapp block storage, perhaps based on their standard 4U60 shelves daisy-chained together? Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum Discover and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com [X] [X] [X] On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) wrote: ? Thank Anderson for the material. In principle our idea was to scratch the filesystem in the GL2, put its NSD on a dedicated pool and then merge it into the Filesystem which would remain on V4. I do not want to create a FS in the GL2 but use its space to expand the space of the other cluster. A ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Anderson Ferreira Nobre Sent: Wednesday, December 4, 2019 3:07:18 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Hi Dorigo, From point of view of cluster administration I don't think it's a good idea to have hererogeneous cluster. There are too many diferences between V4 and V5. And much probably many of enhancements of V5 you won't take advantage. One example is the new filesystem layout in V5. And at this moment the way to migrate the filesystem is create a new filesystem in V5 with the new layout and migrate the data. That is inevitable. I have seen clients saying that they don't need all that enhancements, but the true is when you face performance issue that is only addressable with the new features someone will raise the question why we didn't consider that in the beginning. Use this time to review if it would be better to change the block size of your fileystem. There's a script called filehist in /usr/lpp/mmfs/samples/debugtools to create a histogram of files in your current filesystem. Here's the link with additional information: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata Different RAID configurations also brings unexpected performance behaviors. Unless you are planning create different pools and use ILM to manage the files in different pools. One last thing, it's a good idea to follow the recommended levels for Spectrum Scale: https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning Anyway, you are the system administrator, you know better than anyone how complex is to manage this cluster. Abra?os / Regards / Saludos, AndersonNobre Power and Storage Consultant IBM Systems Hardware Client Technical Team ? 
IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone:55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Dorigo Alvise (PSI)" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Wed, Dec 4, 2019 06:44 Thank you all for the answer. I try to recap my answers to your questions: 1. the purpose is not to merge clusters "per se"; it is adding the GL2's 700TB raw space to the current filesystem provided by the GPFS/NetApp (which is running out of free space); of course I know well the heterogeneity of this hypothetical system, so the GL2's NSD would go to a special pool; but in the end I need a unique namespace for files. 2. I do not want to do the opposite (mergin GPFS/NetApp into the GL2 cluster) because the former is in production and I cannot schedule long downtimes 3. All system have proper licensing of course; what does it means that I could loose IBM support ? if the support is for a failing disk drive I do not think so; if the support is for a "strange" behaviour of GPFS I can probably understand 4. NSD (in the NetApp system) are in their roles: what do you mean exactly ? there's RAIDset attached to servers that are actually NSD together with their attached LUN Alvise ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Lyle Gayne Sent: Tuesday, December 3, 2019 8:30:31 PM To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster For: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp < --- Are these NSD servers in their GPFS roles (where Scale "runs on top"? - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? ...... Some observations: 1) Why do you want to MERGE the GL2 into a single cluster with the rest cluster, rather than simply allowing remote mount of the ESS servers by the other GPFS (NSD client) nodes? 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our coexistence rules. 3) Mixing x86 and Power, especially as NSD servers, should pose no issues. Having them as separate file systems (NetApp vs. ESS) means no concerns regarding varying architectures within the same fs serving or failover scheme. Mixing such as compute nodes would mean some performance differences across the different clients, but you haven't described your compute (NSD client) details. Lyle ----- Original message ----- From: "Tomer Perry" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Date: Tue, Dec 3, 2019 10:03 AM Hi, Actually, I believe that GNR is not a limiting factor here. 
mmexportfs and mmimportfs ( man mm??portfs) will export/import GNR configuration as well: "If the specified file system device is a IBM Spectrum Scale RAID-based file system, then all affected IBM Spectrum Scale RAID objects will be exported as well. This includes recovery groups, declustered arrays, vdisks, and any other file systems that are based on these objects. For more information about IBM Spectrum Scale RAID, see IBM Spectrum Scale RAID: Administration." OTOH, I suspect that due to the version mismatch, it wouldn't work - since I would assume that the cluster config version is to high for the NetApp based cluster. I would also suspect that the filesystem version on the ESS will be different. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Olaf Weiser" To: gpfsug main discussion list Date: 03/12/2019 16:54 Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo "merging" 2 different GPFS cluster into one .. is not possible .. for sure you can do "nested" mounts .. .but that's most likely not, what you want to do .. if you want to add a GL2 (or any other ESS) ..to an existing (other) cluster... - you can't preserve ESS's RG definitions... you need to create the RGs after adding the IO-nodes to the existing cluster... so if you got a new ESS.. (no data on it) .. simply unconfigure cluster .. .. add the nodes to your existing cluster.. and then start configuring the RGs From: "Dorigo Alvise (PSI)" To: "gpfsug-discuss at spectrumscale.org" Date: 12/03/2019 09:35 AM Subject: [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hello everyone, I have: - A NetApp system with hardware RAID - SpectrumScale 4.2.3-13 running on top of the NetApp - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1) What I need to do is to merge the GL2 in the other GPFS cluster (running on the NetApp) without loosing, of course, the RecoveryGroup configuration, etc. I'd like to ask the experts 1. whether it is feasible, considering the difference in the GPFS versions, architectures differences (x86_64 vs. power) 2. if yes, whether anyone already did something like this and what is the best strategy suggested 3. finally: is there any documentation dedicated to that, or at least inspiring the correct procedure ? Thank you very much, Alvise Dorigo_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Mon Dec 9 10:30:05 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Mon, 9 Dec 2019 10:30:05 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: , <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: An HTML attachment was scrubbed... URL: From nnasef at us.ibm.com Mon Dec 9 18:35:52 2019 From: nnasef at us.ibm.com (Nariman Nasef) Date: Mon, 9 Dec 2019 18:35:52 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-productionuse now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.156777917997825.png Type: image/png Size: 15543 bytes Desc: not available URL: From Greg.Lehmann at csiro.au Tue Dec 10 02:09:31 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Tue, 10 Dec 2019 02:09:31 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Hi Carl, I am wondering if it is acceptable to use this as a test cluster. The main intentions being to try fixes, configuration changes etc. on the test cluster before applying those to the production cluster. I guess the issue with this release, is that it is the latest version. We really need a version that matches production and be able to apply fixpacks, PTFs etc. to it without breaching the license of the developer edition. Cheers, Greg Lehmann -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Carl Zetie - carlz at us.ibm.com Sent: Saturday, December 7, 2019 12:37 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com From jack at flametech.com.au Tue Dec 10 02:35:06 2019 From: jack at flametech.com.au (Jack Horrocks) Date: Tue, 10 Dec 2019 13:35:06 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: Hi Carl, To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. 
Thanks Jack. On Tue, 10 Dec 2019 at 13:16, Lehmann, Greg (IM&T, Pullenvale) wrote: > Hi Carl, > I am wondering if it is acceptable to use this as a test cluster. > The main intentions being to try fixes, configuration changes etc. on the > test cluster before applying those to the production cluster. I guess the > issue with this release, is that it is the latest version. We really need a > version that matches production and be able to apply fixpacks, PTFs etc. to > it without breaching the license of the developer edition. > > Cheers, > > Greg Lehmann > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Carl Zetie - > carlz at us.ibm.com > Sent: Saturday, December 7, 2019 12:37 AM > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Scale Developer Edition free for non-production > use now available > > > Spectrum Scale Developer Edition is now available for free download on the > IBM Marketplace: > https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage > > This is full-function DME, no time restrictions, limited to 12TB per > cluster. NO production use or support! > > It?s likely that some people entirely new to Scale will find their way > here to the user group Slack channel and mailing list, so I thank you in > advance for making them welcome, and letting them know about the wealth of > online information for Scale, including the email address scale at us.ibm.com > > > Carl Zetie > Program Director > Offering Management > Spectrum Scale & Spectrum Discover > ---- > (919) 473 3318 ][ Research Triangle Park > carlz at us.ibm.com > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nigel.williams at tpac.org.au Tue Dec 10 03:07:31 2019 From: nigel.williams at tpac.org.au (Nigel Williams) Date: Tue, 10 Dec 2019 14:07:31 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On Tue, 10 Dec 2019 at 13:35, Jack Horrocks wrote: > To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. I clicked the option "I already have an IBMid", but using known working credentials [1] I get "Incorrect IBMid or password. Please try again!" [1] credentials work with support.ibm.com and IBM Cloud From Greg.Lehmann at csiro.au Tue Dec 10 03:11:30 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Tue, 10 Dec 2019 03:11:30 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: I am in Australia and downloaded it OK. Greg Lehmann Senior High Performance Data Specialist | CSIRO Greg.Lehmann at csiro.au | +61 7 3327 4137 | From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jack Horrocks Sent: Tuesday, December 10, 2019 12:35 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Hi Carl, To further that I tried to download it in Australia and couldn't. I said I had to go through export controls. Thanks Jack. 
On Tue, 10 Dec 2019 at 13:16, Lehmann, Greg (IM&T, Pullenvale) > wrote: Hi Carl, I am wondering if it is acceptable to use this as a test cluster. The main intentions being to try fixes, configuration changes etc. on the test cluster before applying those to the production cluster. I guess the issue with this release, is that it is the latest version. We really need a version that matches production and be able to apply fixpacks, PTFs etc. to it without breaching the license of the developer edition. Cheers, Greg Lehmann -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Carl Zetie - carlz at us.ibm.com Sent: Saturday, December 7, 2019 12:37 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Spectrum Scale Developer Edition is now available for free download on the IBM Marketplace: https://www.ibm.com/us-en/marketplace/scale-out-file-and-object-storage This is full-function DME, no time restrictions, limited to 12TB per cluster. NO production use or support! It?s likely that some people entirely new to Scale will find their way here to the user group Slack channel and mailing list, so I thank you in advance for making them welcome, and letting them know about the wealth of online information for Scale, including the email address scale at us.ibm.com Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From nigel.williams at tpac.org.au Tue Dec 10 03:29:04 2019 From: nigel.williams at tpac.org.au (Nigel Williams) Date: Tue, 10 Dec 2019 14:29:04 +1100 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available In-Reply-To: References: <7197C801-44D6-4299-ACFF-CF432E32C922@us.ibm.com> Message-ID: On Tue, 10 Dec 2019 at 14:19, Lehmann, Greg (IM&T, Pullenvale) wrote: > I am in Australia and downloaded it OK. I found a workaround which was to logon to an IBM service that worked with my credentials, and then switch back to the developer edition download and that allowed me to click through and start the download. From jmanuel.fuentes at upf.edu Tue Dec 10 09:45:19 2019 From: jmanuel.fuentes at upf.edu (FUENTES DIAZ, JUAN MANUEL) Date: Tue, 10 Dec 2019 10:45:19 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full Message-ID: Hi, Recently our group have migrated the Spectrum Scale from 4.2.3.9 to 5.0.3.0. According to the documentation to finish and consolidate the migration we should also update the config and the filesystems to the latest version with the commands above. Our cluster is a single cluster and all the nodes have the same version. My question is if we can update safely with those commands without compromising the data and metadata. Thanks Juanma -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergi.more at bsc.es Tue Dec 10 10:04:31 2019 From: sergi.more at bsc.es (Sergi More) Date: Tue, 10 Dec 2019 11:04:31 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: <48fb738b-203a-14cb-ef12-3a94f0cad199@bsc.es> Hi Juanma, Yes, it is safe. We have done it several times. 
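For completeness, the sequence being asked about is usually checked and applied roughly like this (the device name 'gpfs0' is only a placeholder):

  mmlsconfig minReleaseLevel    # what the cluster currently advertises
  mmchconfig release=LATEST     # raise it once every node runs the new code
  mmlsfs gpfs0 -V               # current filesystem format version
  mmchfs gpfs0 -V full          # enable the new format features

The -V full step is one-way: once the filesystem format has been raised, nodes running the older release can no longer mount that filesystem.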
AFAIK it doesn't actually change current data and metadata. It just states that the filesystem is using the latest version, so new features can be enabled. It is something to take into consideration especially when using multicluster setups or mixing different GPFS versions, as these could potentially prevent older nodes from being able to mount the filesystems, but this doesn't seem to be your case. Best regards, Sergi. On 10/12/2019 10:45, FUENTES DIAZ, JUAN MANUEL wrote: > Hi, > > Recently our group have migrated the Spectrum Scale from 4.2.3.9 to > 5.0.3.0. According to the documentation to finish and consolidate the > migration we should also update the config and the filesystems to the > latest version with the commands above. Our cluster is a single > cluster and all the nodes have the same version. My question is if we > can update safely with those commands without compromising the data > and metadata. > > Thanks Juanma > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- ------------------------------------------------------------------------ Sergi More Codina Operations - System administration Barcelona Supercomputing Center Centro Nacional de Supercomputacion WWW: http://www.bsc.es Tel: +34-93-405 42 27 e-mail: sergi.more at bsc.es Fax: +34-93-413 77 21 ------------------------------------------------------------------------ WARNING / LEGAL TEXT: This message is intended only for the use of the individual or entity to which it is addressed and may contain information which is privileged, confidential, proprietary, or exempt from disclosure under applicable law. If you are not the intended recipient or the person responsible for delivering the message to the intended recipient, you are strictly prohibited from disclosing, distributing, copying, or in any way using this message. If you have received this communication in error, please notify the sender and destroy and delete any copies you may have received. http://www.bsc.es/disclaimer
From Renar.Grunenberg at huk-coburg.de Tue Dec 10 12:21:37 2019 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Tue, 10 Dec 2019 12:21:37 +0000 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: <9b774f33494d42ae989e3ad61d359d8c at huk-coburg.de> Hallo Juanma, it is safe; the only change happens if you change the filesystem version with "mmchfs Device -V full". As a tip, you should update to 5.0.3.3; it is a very stable level for us. Regards Renar Renar Grunenberg Abteilung Informatik - Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer, Sarah Rössler, Daniel Thomas.
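Taken together, the advice above amounts to a short sequence. The following is a minimal sketch only, with gpfs01 standing in as a placeholder file system name (it is not a name from this thread). Check the current levels first, and note that raising the file system format version cannot be undone:
   # mmlsconfig minReleaseLevel
   # mmlsfs gpfs01 -V
Once every node in the cluster runs the new code, raise the committed cluster level:
   # mmchconfig release=LATEST
Then raise the file system format version so that new features can be enabled:
   # mmchfs gpfs01 -V full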
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von FUENTES DIAZ, JUAN MANUEL Gesendet: Dienstag, 10. Dezember 2019 10:45 An: gpfsug-discuss at spectrumscale.org Betreff: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full Hi, Recently our group have migrated the Spectrum Scale from 4.2.3.9 to 5.0.3.0. According to the documentation to finish and consolidate the migration we should also update the config and the filesystems to the latest version with the commands above. Our cluster is a single cluster and all the nodes have the same version. My question is if we can update safely with those commands without compromising the data and metadata. Thanks Juanma -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Tue Dec 10 14:48:35 2019 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Tue, 10 Dec 2019 14:48:35 +0000 Subject: [gpfsug-discuss] Scale Developer Edition free for non-production use now available Message-ID: <5582929B-4515-4FFE-87BA-7CC4B5E71920@us.ibm.com> In response to various questions? Yes, the wrong file was originally linked. It should be fixed now. Yes, you can definitely use this edition in your test labs. We want to make it as easy as possible for you to experiment with new features, config changes, and releases so that you can adopt them with confidence, and discover problems in the lab not production. No, we do not plan at this time to backport Developer Edition to earlier Scale releases. If you are having problems with access to the download, please use the Contact links on the Marketplace page, including this one for IBMid issues: https://www.ibm.com/ibmid/myibm/help/us/helpdesk.html. The Scale dev and offering management team don?t have any control over the website or download process (other than providing the file itself for download) or the authentication process, and we?re just going to contact the same people via the same links? Regards Carl Zetie Program Director Offering Management Spectrum Scale & Spectrum Discover ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1522411740] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69557 bytes Desc: image001.png URL: From jmanuel.fuentes at upf.edu Wed Dec 11 08:23:34 2019 From: jmanuel.fuentes at upf.edu (FUENTES DIAZ, JUAN MANUEL) Date: Wed, 11 Dec 2019 09:23:34 +0100 Subject: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V full In-Reply-To: References: Message-ID: Hi, Thanks Sergi and Renar for the clear explanation. Juanma El mar., 10 dic. 
2019 15:50, escribi?: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: mmchconfig release=LATEST mmchfs FileSystem -V full > (Grunenberg, Renar) > 2. Re: Scale Developer Edition free for non-production use now > available (Carl Zetie - carlz at us.ibm.com) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 10 Dec 2019 12:21:37 +0000 > From: "Grunenberg, Renar" > To: "gpfsug-discuss at spectrumscale.org" > > Subject: Re: [gpfsug-discuss] mmchconfig release=LATEST mmchfs > FileSystem -V full > Message-ID: <9b774f33494d42ae989e3ad61d359d8c at huk-coburg.de> > Content-Type: text/plain; charset="utf-8" > > Hallo Juanma, > ist save, the only change are only happen if you change the filesystem > version with mmcfs device ?V full. > As a tip you schould update to 5.0.3.3 ist a very stable Level for us. > Regards Renar > > > Renar Grunenberg > Abteilung Informatik - Betrieb > > HUK-COBURG > Bahnhofsplatz > 96444 Coburg > Telefon: 09561 96-44110 > Telefax: 09561 96-44104 > E-Mail: Renar.Grunenberg at huk-coburg.de > Internet: www.huk.de > ________________________________ > HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter > Deutschlands a. G. in Coburg > Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 > Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg > Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. > Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav > Her?y, Dr. J?rg Rheinl?nder, Sarah R?ssler, Daniel Thomas. > ________________________________ > Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte > Informationen. > Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich > erhalten haben, > informieren Sie bitte sofort den Absender und vernichten Sie diese > Nachricht. > Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht > ist nicht gestattet. > > This information may contain confidential and/or privileged information. > If you are not the intended recipient (or have received this information > in error) please notify the > sender immediately and destroy this information. > Any unauthorized copying, disclosure or distribution of the material in > this information is strictly forbidden. > ________________________________ > Von: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> Im Auftrag von FUENTES DIAZ, > JUAN MANUEL > Gesendet: Dienstag, 10. Dezember 2019 10:45 > An: gpfsug-discuss at spectrumscale.org > Betreff: [gpfsug-discuss] mmchconfig release=LATEST mmchfs FileSystem -V > full > > Hi, > > Recently our group have migrated the Spectrum Scale from 4.2.3.9 to > 5.0.3.0. According to the documentation to finish and consolidate the > migration we should also update the config and the filesystems to the > latest version with the commands above. Our cluster is a single cluster and > all the nodes have the same version. 
My question is if we can update safely > with those commands without compromising the data and metadata. > > Thanks Juanma > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/5a763fea/attachment-0001.html > > > > ------------------------------ > > Message: 2 > Date: Tue, 10 Dec 2019 14:48:35 +0000 > From: "Carl Zetie - carlz at us.ibm.com" > To: "gpfsug-discuss at spectrumscale.org" > > Subject: Re: [gpfsug-discuss] Scale Developer Edition free for > non-production use now available > Message-ID: <5582929B-4515-4FFE-87BA-7CC4B5E71920 at us.ibm.com> > Content-Type: text/plain; charset="utf-8" > > In response to various questions? > > > Yes, the wrong file was originally linked. It should be fixed now. > > Yes, you can definitely use this edition in your test labs. We want to > make it as easy as possible for you to experiment with new features, config > changes, and releases so that you can adopt them with confidence, and > discover problems in the lab not production. > > No, we do not plan at this time to backport Developer Edition to earlier > Scale releases. > > If you are having problems with access to the download, please use the > Contact links on the Marketplace page, including this one for IBMid issues: > https://www.ibm.com/ibmid/myibm/help/us/helpdesk.html. The Scale dev and > offering management team don?t have any control over the website or > download process (other than providing the file itself for download) or the > authentication process, and we?re just going to contact the same people via > the same links? > > > Regards > > > > > > Carl Zetie > Program Director > Offering Management > Spectrum Scale & Spectrum Discover > ---- > (919) 473 3318 ][ Research Triangle Park > carlz at us.ibm.com > > [signature_1522411740] > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/b732e2e2/attachment.html > > > -------------- next part -------------- > A non-text attachment was scrubbed... > Name: image001.png > Type: image/png > Size: 69557 bytes > Desc: image001.png > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20191210/b732e2e2/attachment.png > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 95, Issue 17 > ********************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From heinrich.billich at id.ethz.ch Thu Dec 12 14:26:31 2019 From: heinrich.billich at id.ethz.ch (Billich Heinrich Rainer (ID SD)) Date: Thu, 12 Dec 2019 14:26:31 +0000 Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? Message-ID: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> Hello, I remember that a GNR/ESS recovery group can hold up to 64 vdisks, but I can?t find a citation to proof it. Now I wonder if 64 is the actual limit? And where is it documented? And did the limit change with versions? Thank you. I did spend quite some time searching the documentation, no luck .. maybe you know. We run ESS 5.3.4.1 and the recovery groups have current/allowable format version 5.0.0.0 Thank you, Heiner --? 
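For anyone wanting to see how close an existing recovery group is to that limit, the standard GNR/ESS listing commands report the vdisk count. A minimal sketch, with rgL as a placeholder recovery group name (column layouts vary slightly between releases):
   # mmlsrecoverygroup
   # mmlsrecoverygroup rgL -L
   # mmlsvdisk
The first form summarizes each recovery group, the -L form lists every declustered array and vdisk inside one group, and mmlsvdisk lists all vdisks together with the recovery group they belong to.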
======================= Heinrich Billich ETH Z?rich Informatikdienste Tel.: +41 44 632 72 56 heinrich.billich at id.ethz.ch ======================== From stefan.dietrich at desy.de Fri Dec 13 07:19:42 2019 From: stefan.dietrich at desy.de (Dietrich, Stefan) Date: Fri, 13 Dec 2019 08:19:42 +0100 (CET) Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? In-Reply-To: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> References: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> Message-ID: <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Hello Heiner, the 64 vdisk limit per RG is still present in the latest ESS docs: https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.5/com.ibm.spectrum.scale.raid.v5r04.adm.doc/bl1adv_vdisks.htm For the other questions, no idea. Regards, Stefan ----- Original Message ----- > From: "Billich Heinrich Rainer (ID SD)" > To: "gpfsug main discussion list" > Sent: Thursday, December 12, 2019 3:26:31 PM > Subject: [gpfsug-discuss] Max number of vdisks in a recovery group - is it 64? > Hello, > > I remember that a GNR/ESS recovery group can hold up to 64 vdisks, but I can?t > find a citation to proof it. Now I wonder if 64 is the actual limit? And where > is it documented? And did the limit change with versions? Thank you. I did > spend quite some time searching the documentation, no luck .. maybe you know. > > We run ESS 5.3.4.1 and the recovery groups have current/allowable format > version 5.0.0.0 > > Thank you, > > Heiner > -- > ======================= > Heinrich Billich > ETH Z?rich > Informatikdienste > Tel.: +41 44 632 72 56 > heinrich.billich at id.ethz.ch > ======================== > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From olaf.weiser at de.ibm.com Fri Dec 13 12:20:15 2019 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Fri, 13 Dec 2019 07:20:15 -0500 Subject: [gpfsug-discuss] =?utf-8?q?Max_number_of_vdisks_in_a_recovery_gro?= =?utf-8?q?up_-_is_it=0964=3F?= In-Reply-To: <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> References: <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch> <68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Message-ID: An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Fri Dec 13 23:56:44 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Fri, 13 Dec 2019 23:56:44 +0000 Subject: [gpfsug-discuss] =?utf-8?q?Max_number_of_vdisks_in_a_recovery_gro?= =?utf-8?q?up_-_is_it=0964=3F?= In-Reply-To: References: , <2DE4658A-EDEB-4FBA-88B1-2B72A59DE50E@id.ethz.ch><68327965.755878.1576221582269.JavaMail.zimbra@desy.de> Message-ID: An HTML attachment was scrubbed... URL: From kkr at lbl.gov Mon Dec 16 19:05:02 2019 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Mon, 16 Dec 2019 11:05:02 -0800 Subject: [gpfsug-discuss] Planning US meeting for Spring 2020 Message-ID: <42F45E03-0AEC-422C-B3A9-4B5A21B1D8DF@lbl.gov> Hello, It?s time already to plan for the next US event. We have a quick, seriously, should take order of 2 minutes, survey to capture your thoughts on location and date. It would help us greatly if you can please fill it out. Best wishes to all in the new year. -Kristy Please give us 2 minutes of your time here: ?https://forms.gle/NFk5q4djJWvmDurW7 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arc at b4restore.com Wed Dec 18 09:30:48 2019 From: arc at b4restore.com (=?iso-8859-1?Q?Andi_N=F8r_Christiansen?=) Date: Wed, 18 Dec 2019 09:30:48 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Message-ID: Hi, We are currently building a 3-site Spectrum Scale solution where data is going to be generated at two different sites (Site A and Site B; Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much, so I was wondering if there is someone who knows how to configure AFM to have all data generated in a fileset automatically copied to an offsite location. GPFS AFM is not an option because of latency between sites, so NFS AFM is going to be tunneled between the sites via WAN. As of now we have tried to set up AFM, but it only transfers data from home to cache when a prefetch is manually started or a file is being opened; we need all files from home to go to cache as soon as they are generated, or at least after a little while. It does not need to be synchronous, it just needs to be automatic. I'm not sure if attachments will be available in this thread, but I have attached the concept of our design. Basically the setup is: Site A: Owns "fileset A1", which needs to be replicated to Site B "fileset A2", then from Site B to Site C "fileset A3". Site B: Owns "fileset B1", which needs to be replicated to Site C "fileset B2". Site C: Holds all data from Site A and B "fileset A3 & B2". We do not need any sites to have failover functionality, only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites, which will trigger GPFS to transfer the files to home as soon as possible, but as I have little to no experience with AFM I don't know what happens to the cache site over time: does the cache site empty itself after a while, or does data stay there until manually deleted? Thanks in advance! Best Regards Andi Nør Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Data migration and ILM blueprint - Andi V1.1.pdf Type: application/pdf Size: 236012 bytes Desc: Data migration and ILM blueprint - Andi V1.1.pdf URL: From jack at flametech.com.au Wed Dec 18 10:09:31 2019 From: jack at flametech.com.au (Jack Horrocks) Date: Wed, 18 Dec 2019 21:09:31 +1100 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: Hey Andi I'd be talking to the pixstor boys. Ngenea will do it for you without having to mess about too much. https://ww.pixitmedia.com They are down to earth and won't sell you stuff that doesn't work. Thanks Jack. On Wed, 18 Dec 2019 at 21:00, Andi N?r Christiansen wrote: > Hi, > > > > We are currently building a 3 site spectrum scale solution where data is > going to be generated at two different sites (Site A and Site B, Site C is > for archiving/backup) and then archived on site three. > > I have however not worked with AFM much so I was wondering if there is > someone who knows how to configure AFM to have all data generated in a > file-set automatically being copied to an offsite. > > GPFS AFM is not an option because of latency between sites so NFS AFM is > going to be tunneled between the site via WAN. > > > > As of now we have tried to set up AFM but it only transfers data from home > to cache when a prefetch is manually started or a file is being opened, we > need all files from home to go to cache as soon as it is generated or at > least after a little while. > > It does not need to be synchronous it just need to be automatic. > > > > I?m not sure if attachments will be available in this thread but I have > attached the concept of our design. > > > > Basically the setup is : > > > > Site A: > > Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the > from Site B to Site C ?fileset A3?. > > > > Site B: > > Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. > > > > Site C: > > Holds all data from Site A and B ?fileset A3 & B2?. > > > > We do not need any sites to have failover functionality only a copy of the > data from the two first sites. > > > > If anyone knows how to accomplish this I would be glad to know how! > > > > We have been looking into switching the home and cache site so that data > is generated at the cache sites which will trigger GPFS to transfer the > files to home as soon as possible but as I have little to no experience > with AFM I don?t know what happens to the cache site over time, does the > cache site empty itself after a while or does data stay there until > manually deleted? > > > > Thanks in advance! > > > > Best Regards > > > > > *Andi N?r Christiansen* > *IT Solution Specialist* > > Phone +45 87 81 37 39 > Mobile +45 23 89 59 75 > E-mail arc at b4restore.com > Web www.b4restore.com > > [image: B4Restore on LinkedIn] > [image: B4Restore on > Facebook] [image: B4Restore on Facebook] > [image: Sign up for our newsletter] > > > [image: Download Report] > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image002.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: not available URL: From TROPPENS at de.ibm.com Wed Dec 18 11:22:30 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 18 Dec 2019 12:22:30 +0100 Subject: [gpfsug-discuss] Chart decks of SC19 meeting Message-ID: Most chart decks of the SC19 meeting are now available: https://www.spectrumscale.org/presentations/ -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed Dec 18 12:04:11 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 12:04:11 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B58E.35AA89D0.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image003.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B58E.35AA89D0.png Type: image/png Size: 58433 bytes Desc: not available URL: From arc at b4restore.com Wed Dec 18 12:31:14 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 12:31:14 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: Hi Andrew, Alright, that partly confirms that there is no automatically sweep of data at cache site, right? I mean data will not be deleted automatically after a while in the cache fileset, where it is only metadata that stays? If data is kept until a manual deletion of data is requested on the cache site then this is the way to go for us..! Also, Site A has no connection to Site C so it needs to be connected as A to B and B to C.. 
This means: Site A holds live data from Site A, Site B holds live data from Site B and Replicated data from Site A, Site C holds replicated data from A and B. Does that make sense? The connection between A and B is LAN, about 500meters apart.. basically same site but different data centers and strictly separated because of security.. Site C is in another Country. Hence why we cant use GPFS AFM and also why we need to utilize WAN/NFS tunneled for AFM. Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Andrew Beattie Sendt: 18. december 2019 13:04 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, This is basic functionality that is part of Spectrum Scale there is no additional licensing or HSM costs required for this. Set Site C as your AFM Home, and have Site A and Site B both as Caches of Site C you can then Write Data in to Site A - have it stream to Site C, and call it on demand or Prefetch from Site C to Site B as required the Same is true of Site B, you can write Data into Site B, have it Stream to Site C, and call it on demand to site A if you want the data to be Multi Writer then you will need to make sure you use Independent writer as the AFM type https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM) Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi N?r Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 8:00 PM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! 
We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5A5.D4744A80] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From arc at b4restore.com Wed Dec 18 12:33:31 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 12:33:31 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: <8b0c31bf2c774ef7972a2f21f8b64e0a@B4RWEX01.internal.b4restore.com> Hi Jack, Thanks, but we are not looking to implement other products with spectrum scale. We are only searching for a solution to get Spectrum Scale to do the replication for us automatically. ? Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Jack Horrocks Sendt: 18. december 2019 11:10 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Hey Andi I'd be talking to the pixstor boys. Ngenea will do it for you without having to mess about too much. https://ww.pixitmedia.com They are down to earth and won't sell you stuff that doesn't work. Thanks Jack. On Wed, 18 Dec 2019 at 21:00, Andi N?r Christiansen > wrote: Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. 
As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5A7.BC39FB20] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From abeattie at au1.ibm.com Wed Dec 18 12:40:44 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 12:40:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B5A5.D4744A80.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image.image003.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B5A5.D4744A80.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B5A5.D4744A80.png Type: image/png Size: 58433 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Wed Dec 18 12:56:11 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 18 Dec 2019 12:56:11 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> On Wed, 2019-12-18 at 12:04 +0000, Andrew Beattie wrote: > Andi, > > This is basic functionality that is part of Spectrum Scale there is > no additional licensing or HSM costs required for this. > Noting only if you have the Extended Edition. Basic Spectrum Scale licensing does not include AFM :-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From arc at b4restore.com Wed Dec 18 12:59:21 2019 From: arc at b4restore.com (=?iso-8859-1?Q?Andi_N=F8r_Christiansen?=) Date: Wed, 18 Dec 2019 12:59:21 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> Message-ID: <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> To my knowledge basic AFM is part of all Spectrum scale licensing's but AFM-DR is only in Data Management and ECE? https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm /Andi -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Jonathan Buzzard Sendt: 18. december 2019 13:56 Til: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. On Wed, 2019-12-18 at 12:04 +0000, Andrew Beattie wrote: > Andi, > > This is basic functionality that is part of Spectrum Scale there is no > additional licensing or HSM costs required for this. > Noting only if you have the Extended Edition. Basic Spectrum Scale licensing does not include AFM :-) JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From arc at b4restore.com Wed Dec 18 13:00:24 2019 From: arc at b4restore.com (=?utf-8?B?QW5kaSBOw7hyIENocmlzdGlhbnNlbg==?=) Date: Wed, 18 Dec 2019 13:00:24 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: , Message-ID: Alright, I will have to dig a little deeper with this then..Thanks!? Best Regards Andi N?r Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af Andrew Beattie Sendt: 18. 
december 2019 13:41 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, Daisy-chained AFM caches are a bad idea -- while it might work, when things go wrong they go really badly wrong. Based on the scenario you're describing, what I think you're going to want to do is AFM-DR between Sites A and B, and then look at a policy-based copy (scripted rsync or something similar) from Site B to Site C. I don't believe at present we support an AFM-DR relationship between a cluster and a cache which is doing AFM to its home -- you could put in a request with IBM development to see if they would support such an architecture, but I'm not sure it's ever been tested. Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi Nør Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 10:31 PM Hi Andrew, Alright, that partly confirms that there is no automatic sweep of data at the cache site, right? I mean data will not be deleted automatically after a while in the cache fileset, where it is only metadata that stays? If data is kept until a manual deletion of data is requested on the cache site then this is the way to go for us..! Also, Site A has no connection to Site C so it needs to be connected as A to B and B to C.. This means: Site A holds live data from Site A, Site B holds live data from Site B and replicated data from Site A, Site C holds replicated data from A and B. Does that make sense? The connection between A and B is LAN, about 500 meters apart.. basically same site but different data centers and strictly separated because of security.. Site C is in another country. Hence why we can't use GPFS AFM and also why we need to utilize WAN/NFS tunneled for AFM. Best Regards Andi Nør Christiansen B4Restore A/S Phone +45 87 81 37 39 Mobile +45 23 89 59 75 Fra: gpfsug-discuss-bounces at spectrumscale.org > På vegne af Andrew Beattie Sendt: 18. december 2019 13:04 Til: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Emne: Re: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Andi, This is basic functionality that is part of Spectrum Scale; there are no additional licensing or HSM costs required for this. Set Site C as your AFM home, and have Site A and Site B both as caches of Site C. You can then write data into Site A, have it stream to Site C, and call it on demand or prefetch from Site C to Site B as required. The same is true of Site B: you can write data into Site B, have it stream to Site C, and call it on demand to Site A. If you want the data to be multi-writer then you will need to make sure you use independent writer as the AFM type: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Active%20File%20Management%20(AFM) Regards, Andrew Beattie File and Object Storage Technical Specialist - A/NZ IBM Systems - Storage Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Andi Nør Christiansen" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites.
Date: Wed, Dec 18, 2019 8:00 PM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards [cid:image001.jpg at 01D5B5AB.7DA09A50] Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com [B4Restore on LinkedIn] [B4Restore on Facebook] [B4Restore on Facebook] [Sign up for our newsletter] [Download Report] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 2875 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 2102 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 2263 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 2036 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 2198 bytes Desc: image005.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image006.png Type: image/png Size: 58433 bytes Desc: image006.png URL: From jonathan.buzzard at strath.ac.uk Wed Dec 18 13:03:48 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 18 Dec 2019 13:03:48 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> Message-ID: <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> On Wed, 2019-12-18 at 12:59 +0000, Andi N?r Christiansen wrote: > To my knowledge basic AFM is part of all Spectrum scale licensing's > but AFM-DR is only in Data Management and ECE? > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm > Gees I can't keep up. That didn't used to be the case and possibly not if you are still on Express edition which looks to have been canned. I was sure our DSS-G says Express edition on the license. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From abeattie at au1.ibm.com Wed Dec 18 13:50:26 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 18 Dec 2019 13:50:26 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk>, Message-ID: An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed Dec 18 13:50:47 2019 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 18 Dec 2019 08:50:47 -0500 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> References: <466c72d9df430364e38df11cdcf590d0b07331f2.camel@strath.ac.uk> <6ec1f43fbd6348faa64eda92d63da514@B4RWEX01.internal.b4restore.com> <0dabc7eccd020e31d80484fe99b36e692be47c00.camel@strath.ac.uk> Message-ID: I want to say that AFM was in GPFS before there were editions, and that everything that was pre-edition went into Standard Edition. That timing may not be exact, but Advanced edition has definitely never been required for ?regular? AFM. For the longest time the only ?Advanced? feature was encryption. Of course AFM-DR was eventually added to the Advanced Edition stream, which became DME with perTB licensing, which went to a GNR concert and spawned ECE from incessant complaining community feedback. :) I?m not aware that anyone ever *wanted* Express Edition, except the Linux on Z people, because that?s all they were allowed to have for a while. Liberty, ? Stephen > On Dec 18, 2019, at 8:03 AM, Jonathan Buzzard wrote: > > On Wed, 2019-12-18 at 12:59 +0000, Andi N?r Christiansen wrote: >> To my knowledge basic AFM is part of all Spectrum scale licensing's >> but AFM-DR is only in Data Management and ECE? >> >> https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.4/com.ibm.spectrum.scale.v5r04.doc/bl1ins_prodstruct.htm >> > > Gees I can't keep up. That didn't used to be the case and possibly not > if you are still on Express edition which looks to have been canned. I > was sure our DSS-G says Express edition on the license. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. 
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Wed Dec 18 14:33:45 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Wed, 18 Dec 2019 14:33:45 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.jpg at 01D5B58E.35AA89D0.jpg Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image003.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image004.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image005.png at 01D5B58E.35AA89D0.png Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image006.png at 01D5B58E.35AA89D0.png Type: image/png Size: 58433 bytes Desc: not available URL: From vpuvvada at in.ibm.com Thu Dec 19 13:40:31 2019 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Thu, 19 Dec 2019 13:40:31 +0000 Subject: [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. In-Reply-To: References: Message-ID: >Site A: >Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. a. Is this required because A cannot directly talk to C ? b. Is this network restriction ? c. Where is the data generated ? At filesetA1 or filesetA2 or filesetA3 or all the places ? >Site B: >Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. > >Site C: >Holds all data from Site A and B ?fileset A3 & B2?. Same as above, where is the data generated ? >We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to >the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? AFM single writer mode or independent-writer mode can be used to replicate the data from the cache to home automatically. a. Approximately how many files/data can each cache(filesetA1, filesetA2 and fileesetB1) hold ? b. After the archival at the site C, will the data get deleted from the filesets at C? ~Venkat (vpuvvada at in.ibm.com) From: Lyle Gayne/Poughkeepsie/IBM To: gpfsug-discuss at spectrumscale.org, Venkateswara R Puvvada/India/IBM at IBMIN Date: 12/18/2019 08:03 PM Subject: Re: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Adding Venkat so he can chime in. 
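To make the independent-writer suggestion concrete, a minimal sketch of a single cache-to-home pairing over NFS; fs1, filesetA1, sitec-nfs and the export path are illustrative placeholders, not names taken from this thread. On the home cluster (Site C), prepare the NFS-exported directory for AFM:
   # mmafmconfig enable /gpfs/fs1/siteA_home
On the cache cluster (Site A), create and link an independent-writer fileset whose local changes are queued and replayed to home automatically:
   # mmcrfileset fs1 filesetA1 --inode-space=new -p afmTarget=nfs://sitec-nfs/gpfs/fs1/siteA_home -p afmMode=independent-writer
   # mmlinkfileset fs1 filesetA1 -J /gpfs/fs1/filesetA1
The queue and connection state can then be checked from the cache side:
   # mmafmctl fs1 getstate -j filesetA1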
Lyle ----- Original message ----- From: "Andi N?r Christiansen" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale mirrored filesets across sites. Date: Wed, Dec 18, 2019 5:24 AM Hi, We are currently building a 3 site spectrum scale solution where data is going to be generated at two different sites (Site A and Site B, Site C is for archiving/backup) and then archived on site three. I have however not worked with AFM much so I was wondering if there is someone who knows how to configure AFM to have all data generated in a file-set automatically being copied to an offsite. GPFS AFM is not an option because of latency between sites so NFS AFM is going to be tunneled between the site via WAN. As of now we have tried to set up AFM but it only transfers data from home to cache when a prefetch is manually started or a file is being opened, we need all files from home to go to cache as soon as it is generated or at least after a little while. It does not need to be synchronous it just need to be automatic. I?m not sure if attachments will be available in this thread but I have attached the concept of our design. Basically the setup is : Site A: Owns ?fileset A1? which needs to be replicated to Site B ?fileset A2? the from Site B to Site C ?fileset A3?. Site B: Owns ?fileset B1? which needs to be replicated to Site C ?fileset B2?. Site C: Holds all data from Site A and B ?fileset A3 & B2?. We do not need any sites to have failover functionality only a copy of the data from the two first sites. If anyone knows how to accomplish this I would be glad to know how! We have been looking into switching the home and cache site so that data is generated at the cache sites which will trigger GPFS to transfer the files to home as soon as possible but as I have little to no experience with AFM I don?t know what happens to the cache site over time, does the cache site empty itself after a while or does data stay there until manually deleted? Thanks in advance! Best Regards Andi N?r Christiansen IT Solution Specialist Phone +45 87 81 37 39 Mobile +45 23 89 59 75 E-mail arc at b4restore.com Web www.b4restore.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=eqWwibkj7RzAd4hcjuMXLC8a3bAQwHQNAlIm-a5WEOo&s=dWoFLlPqh2RDoLkJVIY0tM-wTVCtrhCqT0oZL4UkmZ8&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 2875 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2102 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2263 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2036 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 2198 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From rp2927 at gsb.columbia.edu Thu Dec 19 17:22:20 2019
From: rp2927 at gsb.columbia.edu (Popescu, Razvan)
Date: Thu, 19 Dec 2019 17:22:20 +0000
Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default
Message-ID: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu>

Hi, I'd like to revert a user's quota to the fileset's default, but "mmedquota -d -u " fails because I have not set a filesystem-level default:

[root at xxx]# mmedquota -d -u user
gsb USR default quota is off

(Spectrum Scale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures.
A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=Podv2DTbd8lR1FO2ZYZ8x8zq9iYA04zPm4GJnVZqlOw&s=1H_Rhmne_XoS3KS5pOD1RiBL8FQBXV4VdCkEL4KD11E&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 19:18:36 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 19:18:36 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> Message-ID: <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. 
fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: From kywang at us.ibm.com Thu Dec 19 19:25:01 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 14:25:01 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Message-ID: >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). 
I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? 
Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=Nbr-ds_gTHq88IqMt3BvuP7-CagDQwEk2Bax6qK4iZo&s=D1aDuwRRm4mrIjdMBLSYo28KEflXV7WLywFw7puhlFU&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16683622.gif Type: image/gif Size: 106 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 19:28:33 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 19:28:33 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> Message-ID: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho]"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? 
option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 107 bytes Desc: image002.gif URL: From kywang at us.ibm.com Thu Dec 19 20:56:05 2019 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Thu, 19 Dec 2019 15:56:05 -0500 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu><794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: Razvan, mmedquota -d -u fs:fset: -d Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command. This option will assign the default quota to the user. The quota entry type will change from "e" to "d_fset". You may need to play a little bit with your system to get the result as you can have default quota per file system set and default quota per fileset enabled. An exemple to illustrate User pfs004 in filesystem fs9 and fileset fset7 has explicit quota set: # mmrepquota -u -v fs9 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none e <=== explicit # mmlsquota -d fs9:fset7 Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType fs9 fset7 USR 102400 1048576 | 10000 0 default on <=== default quota limits for fs9:fset7, the default fs9 fset7 GRP 0 0 | 0 0 i # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none <=== explicit # mmedquota -d -u pfs004 fs9:fset7 <=== run mmedquota -d -u to get default limits # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none <=== takes the default value # mmrepquota -u -v fs9:fset7 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none d_fset <=== now user pfs004 in fset7 takes the default limits # ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:28 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. 
Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? 
From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=m2_UDb09pxCtr3QQCy-6gDUzpw-o_zJQig_xI3C2_1c&m=ztpfU2VfH5aJ9mmrGarTov3Rf4RZyt417t0UZAdESOg&s=AY4A_7BxD_jvDV7p9tmwCj6wTIZrD9R6ZEXTOLgZDDI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16898169.gif Type: image/gif Size: 106 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16513130.gif Type: image/gif Size: 107 bytes Desc: not available URL: From rp2927 at gsb.columbia.edu Thu Dec 19 21:47:21 2019 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 19 Dec 2019 21:47:21 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: Many thanks ? that?s exactly what I?m looking for. Unfortunately I have an error when attempting to run command : First the background: [root at storinator ~]# mmrepquota -u -v --block-size auto gsb:home |grep rp2927 rp2927 home USR 8.934G 10G 20G 0 none | 86355 1048576 3145728 0 none e [root at storinator ~]# mmlsquota -d --block-size auto gsb:home Default Block Limits | Default File Limits Filesystem Fileset type quota limit | quota limit entryType gsb home USR 20G 30G | 1048576 3145728 default on gsb home GRP 0 0 | 0 0 i And now the most interesting part: [root at storinator ~]# mmedquota -d -u rp2927 gsb:home gsb USR default quota is off Attention: In file system gsb (fileset home), block soft limit (10485760) for user rp2927 is too small. Suggest setting it higher than 26214400. 
Attention: In file system gsb (fileset home), block hard limit (20971520) for user rp2927 is too small. Suggest setting it higher than 26214400. gsb:home is not valid user A little bit more background, maybe of help? [root at storinator ~]# mmlsquota -d gsb Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType gsb root USR 0 0 | 0 0 i gsb root GRP 0 0 | 0 0 i gsb work USR 0 0 | 0 0 i gsb work GRP 0 0 | 0 0 i gsb misc USR 0 0 | 0 0 i gsb misc GRP 0 0 | 0 0 i gsb home USR 20971520 31457280 | 1048576 3145728 default on gsb home GRP 0 0 | 0 0 i gsb shared USR 0 0 | 0 0 i gsb shared GRP 20971520 31457280 | 1048576 3145728 default on [root at storinator ~]# mmlsfs gsb flag value description ------------------- ------------------------ ----------------------------------- -f 8192 Minimum fragment (subblock) size in bytes -i 4096 Inode size in bytes -I 32768 Indirect block size in bytes -m 2 Default number of metadata replicas -M 3 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j scatter Block allocation type -D nfs4 File locking semantics in effect -k nfs4 ACL semantics in effect -n 100 Estimated number of nodes that will mount file system -B 1048576 Block size -Q user;group;fileset Quotas accounting enabled user;group;fileset Quotas enforced none Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement --filesetdf Yes Fileset df enabled? -V 21.00 (5.0.3.0) File system version --create-time Fri Aug 30 16:25:29 2019 File system creation time -z No Is DMAPI enabled? -L 33554432 Logfile size -E Yes Exact mtime mount option -S relatime Suppress atime mount option -K whenpossible Strict replica allocation option --fastea Yes Fast external attributes enabled? --encryption No Encryption enabled? --inode-limit 105906176 Maximum number of inodes in all inode spaces --log-replicas 0 Number of log replicas --is4KAligned Yes is4KAligned? --rapid-repair Yes rapidRepair enabled? --write-cache-threshold 0 HAWC Threshold (max 65536) --subblocks-per-full-block 128 Number of subblocks per full block -P system;Main01 Disk storage pools in file system --file-audit-log No File Audit Logging enabled? --maintenance-mode No Maintenance Mode enabled? -d meta_01;meta_02;meta_03;data_1A;data_1B;data_2A;data_2B;data_3A;data_3B Disks in file system -A yes Automatic mount option -o none Additional mount options -T /gpfs/cesRoot/gsb Default mount point --mount-priority 2 Mount priority Any ideas? Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 3:56 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Razvan, mmedquota -d -u fs:fset: -d Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command. This option will assign the default quota to the user. The quota entry type will change from "e" to "d_fset". You may need to play a little bit with your system to get the result as you can have default quota per file system set and default quota per fileset enabled. 
An exemple to illustrate User pfs004 in filesystem fs9 and fileset fset7 has explicit quota set: # mmrepquota -u -v fs9 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none e <=== explicit # mmlsquota -d fs9:fset7 Default Block Limits(KB) | Default File Limits Filesystem Fileset type quota limit | quota limit entryType fs9 fset7 USR 102400 1048576 | 10000 0 default on <=== default quota limits for fs9:fset7, the default fs9 fset7 GRP 0 0 | 0 0 i # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 33333 0 none <=== explicit # mmedquota -d -u pfs004 fs9:fset7 <=== run mmedquota -d -u to get default limits # mmlsquota -u pfs004 fs9:fset7 Block Limits | File Limits Filesystem Fileset type KB quota limit in_doubt grace | files quota limit in_doubt grace Remarks fs9 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none <=== takes the default value # mmrepquota -u -v fs9:fset7 | grep pfs004 pfs004 fset7 USR 1088 102400 1048576 0 none | 13 10000 0 0 none d_fset <=== now user pfs004 in fset7 takes the default limits # ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:28:51 PM---I see. May I ask one follow-up question, please: what]"Popescu, Razvan" ---12/19/2019 02:28:51 PM---I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:28 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I see. May I ask one follow-up question, please: what is ?mmedquota -d -u ? supposed to do in this case? Really appreciate your assistance. Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:25 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default >> To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) Currently there is no function to revert an explicit quota entry (e) to initial (i) entry. Kuei ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different tho]"Popescu, Razvan" ---12/19/2019 02:18:54 PM---Thanks for your kind reply. My problem is different though. From: "Popescu, Razvan" To: gpfsug main discussion list Date: 12/19/2019 02:18 PM Subject: [EXTERNAL] Re: [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks for your kind reply. My problem is different though. I have set a fileset default quota (doing all the steps you recommended) and all was Ok. During operations I have edited *individual* quotas, for example to increase certain user?s allocations. Now, I want to *revert* (change back) one of these users to the (fileset) default quota ! 
For example, I have used one user account to test the mmedquota command setting his limits to a certain value (just testing). I?d like now to make that user?s quota be the default fileset quota, and not just numerically, but have his quota record follow the changes in fileset default quota limits. To make it more technical ?. This fellow?s quota entryType is now ?e? . I want to change it back to entryType ?I?. (I hope I?m not talking nonsense here) mmedquota?s ?-d? option is supposed to reinstate the defaults, but it doesn?t seem to work for fileset based quotas ? !?! Razvan -- From: on behalf of Kuei-Yu Wang-Knop Reply-To: gpfsug main discussion list Date: Thursday, December 19, 2019 at 2:06 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Quota: revert user quota to FILESET default It sounds like you would like to have default perfileset quota enabled. Have you tried to enable the default quota on the filesets and then set the default quota limits for those filesets? For example, in a filesystem fs9 and fileset fset9. File system fs9 has default quota on and --perfileset-quota enabled. # mmlsfs fs9 -Q --perfileset-quota flag value description ------------------- ------------------------ ----------------------------------- -Q user;group;fileset Quotas accounting enabled user;fileset Quotas enforced user;group;fileset Default quotas enabled --perfileset-quota Yes Per-fileset quota enforcement # Enable default user quota for fileset fset9, if not enabled yet, e.g. "mmdefquotaon -u fs9:fset9" Then set the default quota for this fileset using mmdefedquota" # mmdefedquota -u fs9:fset9 .. *** Edit quota limits for USR DEFAULT entry for fileset fset9 NOTE: block limits will be rounded up to the next multiple of the block size. block units may be: K, M, G, T or P, inode units may be: K, M or G. fs9: blocks in use: 0K, limits (soft = 102400K, hard = 1048576K) inodes in use: 0, limits (soft = 10000, hard = 22222) ... Hope that this helps. ------------------------------------ Kuei-Yu Wang-Knop IBM Scalable I/O development (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com [Inactive hide details for "Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset]"Popescu, Razvan" ---12/19/2019 12:22:34 PM---Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? From: "Popescu, Razvan" To: "gpfsug-discuss at spectrumscale.org" Date: 12/19/2019 12:22 PM Subject: [EXTERNAL] [gpfsug-discuss] Quota: revert user quota to FILESET default Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi, I?d like to revert a user?s quota to the fileset?s default, but ?mmedquota -d -u ? fails because I do have not set a filesystem default?. [root at xxx]# mmedquota -d -u user gsb USR default quota is off (SpectrumScale 5.0.3 Standard Ed. on RHEL7 x86) Is this a limitation of the current mmedquota implementation, or of something more profound?... I have several filesets within this filesystem, each with various quota structures. A filesystem-wide default quota didn?t seem useful so I never defined one; however I do have multiple fileset-level default quotas, and this is the level at which I?d like to be able to handle this matter? Have I hit a limitation of the implementation? Any workaround, if that?s the case? 
Many thanks, Razvan Popescu Columbia Business School _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 106 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 107 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 108 bytes Desc: image003.gif URL: From jonathan.buzzard at strath.ac.uk Thu Dec 19 21:56:28 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 19 Dec 2019 21:56:28 +0000 Subject: [gpfsug-discuss] Quota: revert user quota to FILESET default In-Reply-To: <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> References: <4DA994EE-3452-4783-B376-54ED17F56966@gsb.columbia.edu> <794A8C83-B179-4A7B-85F2-DC2EA97EFCDD@gsb.columbia.edu> <746C7F06-1F3A-4C5C-A2AA-BC0B2C52A0F9@gsb.columbia.edu> Message-ID: <5ffb8059-bd51-29a5-78c5-19c86dcb6dc7@strath.ac.uk> On 19/12/2019 19:28, Popescu, Razvan wrote: > I see. > > May I ask one follow-up question, please:?? what is? ?mmedquota -d -u > ?? ?supposed to do in this case? > > Really appreciate your assistance. In the past (last time I did this was on version 3.2 or 3.3) if you used mmsetquota and set a users quota to 0 then as far as GPFS was concerned it was like you had never set a quota. This was notionally before per fileset quotas where a thing. In reality on my test cluster you could enable them and set them and they seemed to work as would be expected when I tested it. Never used it in production on those versions because well that would be dumb, and never had to remove a quota completely since. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From lavila at illinois.edu Fri Dec 20 15:32:54 2019 From: lavila at illinois.edu (Avila, Leandro) Date: Fri, 20 Dec 2019 15:32:54 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 Message-ID: Good morning, I am looking for additional information related to CVE-2019-4715 to try to determine the applicability and impact of this vulnerability in our environment. https://exchange.xforce.ibmcloud.com/vulnerabilities/172093 and https://www.ibm.com/support/pages/node/1118913 For the documents above it is not very clear if the issue affects mmfsd or just one of the protocol components (NFS,SMB). Thank you very much for your attention and help -- ==================== Leandro Avila | NCSA From Stephan.Peinkofer at lrz.de Fri Dec 20 15:58:12 2019 From: Stephan.Peinkofer at lrz.de (Peinkofer, Stephan) Date: Fri, 20 Dec 2019 15:58:12 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 In-Reply-To: References: Message-ID: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> Dear Leonardo, I had the same issue as you today. 
After some time (after I already opened a case for this) I noticed that they referenced the APAR numbers in the second link you posted. A google search for this apar numbers gives this here https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901 So seems to be SMB related. Best, Stephan Peinkofer Von meinem iPhone gesendet Am 20.12.2019 um 16:33 schrieb Avila, Leandro : ?Good morning, I am looking for additional information related to CVE-2019-4715 to try to determine the applicability and impact of this vulnerability in our environment. https://exchange.xforce.ibmcloud.com/vulnerabilities/172093 and https://www.ibm.com/support/pages/node/1118913 For the documents above it is not very clear if the issue affects mmfsd or just one of the protocol components (NFS,SMB). Thank you very much for your attention and help -- ==================== Leandro Avila | NCSA _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From lavila at illinois.edu Fri Dec 20 17:14:35 2019 From: lavila at illinois.edu (Avila, Leandro) Date: Fri, 20 Dec 2019 17:14:35 +0000 Subject: [gpfsug-discuss] More information about CVE-2019-4715 In-Reply-To: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> References: <663A46F4-E170-4C7E-ABDC-E0CE7488C25D@lrz.de> Message-ID: <7efe86e566f610a31e178e0333b65144e5734bc3.camel@illinois.edu> On Fri, 2019-12-20 at 15:58 +0000, Peinkofer, Stephan wrote: > Dear Leonardo, > > I had the same issue as you today. After some time (after I already > opened a case for this) I noticed that they referenced the APAR > numbers in the second link you posted. > > A google search for this apar numbers gives this here > https://www-01.ibm.com/support/docview.wss?uid=isg1IJ20901 > > So seems to be SMB related. > > Best, > Stephan Peinkofer > Stephan, Thank you very much for pointing me in the right direction. I appreciate it. From kevin.doyle at manchester.ac.uk Fri Dec 27 11:45:14 2019 From: kevin.doyle at manchester.ac.uk (Kevin Doyle) Date: Fri, 27 Dec 2019 11:45:14 +0000 Subject: [gpfsug-discuss] Question about Policies Message-ID: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 16051 bytes Desc: image001.png URL: From YARD at il.ibm.com Fri Dec 27 12:55:06 2019 From: YARD at il.ibm.com (Yaron Daniel) Date: Fri, 27 Dec 2019 14:55:06 +0200 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=Wg3EAA9O8sH3c_zHS2h8miVpSosqtXulMRqXMRwSMe0&s=TdemXXkFD1mjpxNFg7Y_DYYPpJXZk7BmQcW9hWQDLso&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 4338 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 16051 bytes Desc: not available URL: From kevin.doyle at manchester.ac.uk Fri Dec 27 13:56:29 2019 From: kevin.doyle at manchester.ac.uk (Kevin Doyle) Date: Fri, 27 Dec 2019 13:56:29 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1131538866] From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:_1_10392F3C103929880046F589C22584DD] Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel [cid:_2_103C9B0C103C96FC0046F589C22584DD] [cid:_2_103C9D14103C96FC0046F589C22584DD] [cid:_2_103C9F1C103C96FC0046F589C22584DD] [cid:_2_103CA124103C96FC0046F589C22584DD] [cid:_2_103CA32C103C96FC0046F589C22584DD] [cid:_2_103CA534103C96FC0046F589C22584DD] [cid:_2_103CA73C103C96FC0046F589C22584DD] [cid:_2_103CA944103C96FC0046F589C22584DD] From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? 
Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk [/Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16051 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 1115 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 3848 bytes Desc: image003.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 4267 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 3748 bytes Desc: image005.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 3794 bytes Desc: image006.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.jpg Type: image/jpeg Size: 4302 bytes Desc: image007.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 3740 bytes Desc: image008.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image009.jpg Type: image/jpeg Size: 3856 bytes Desc: image009.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image010.jpg Type: image/jpeg Size: 4339 bytes Desc: image010.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image011.png Type: image/png Size: 16052 bytes Desc: image011.png URL: From YARD at il.ibm.com Fri Dec 27 14:11:40 2019 From: YARD at il.ibm.com (Yaron Daniel) Date: Fri, 27 Dec 2019 14:11:40 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> Message-ID: Hi As you said it migrate between different pools (ILM/External - Tape) - so in case you need to move directory to different location - you will have to use the OS mv command. From what i remember there is no directory policy for the same pool. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: gpfsug main discussion list Date: 27/12/2019 15:57 Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? 
Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=26aKLyF8ZP9iUfCT0RV9tvO89IrBmJUY3xt0AJrp--E&s=beWwNqFpTlTds5Dir2ZVmRiNt9kLQkFZC70Mp7UqFRY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: 

From makaplan at us.ibm.com Fri Dec 27 14:19:43 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Fri, 27 Dec 2019 09:19:43 -0500
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
References: <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
Message-ID: 
The MIGRATE rule is for moving files from one pool to another without changing the pathname or any other attribute; only the storage devices holding the file's data blocks change. It can also be used with "external" pools to migrate data to an HSM system. "Moving" a file from one folder to another is a different concept.
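(Purely as a sketch of what MIGRATE is actually for: the device name gpfs0 and the pool names 'system' and 'capacity' below are assumptions for illustration, not anything from this thread.)

    # write a pool-to-pool rule and test it first; -I test reports what would
    # happen without moving any data, -I yes performs the migration
    cat > /tmp/migrate30.pol <<'EOF'
    RULE 'old2capacity' MIGRATE
      FROM POOL 'system'
      TO POOL 'capacity'
      WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(MODIFICATION_TIME)) > 30
    EOF
    mmapplypolicy gpfs0 -P /tmp/migrate30.pol -I test

The file keeps its pathname throughout; only the pool holding its data blocks changes.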
The mmapplypolicy LIST and EXTERNAL LIST rules can be used to find files older than 30 days and then do any operations you like on them, but you have to write a script to do those operations. See also -- the "Information Lifecycle Management" (ILM) chapter of the SS Admin Guide AND/OR for an easy to use parallel function equivalent to the classic Unix pipline `find ... | xargs ... ` Try the `mmfind ... -xargs ... ` from the samples/ilm directory. [root@~/.../samples/ilm]$./mmfind Usage: ./mmfind [mmfind args] { | -inputFileList f -policyFile f } mmfind args: [-polFlags 'flag 1 flag 2 ...'] [-logLvl {0|1|2}] [-logFile f] [-saveTmpFiles] [-fs fsName] [-mmapplypolicyOutputFile f] find invocation -- logic: ! ( ) -a -o /path1 [/path2 ...] [expression] -atime N -ctime N -mtime N -true -false -perm mode -iname PATTERN -name PATTERN -path PATTERN -ipath PATTERN -uid N -user NAME -gid N -group NAME -nouser -nogroup -newer FILE -older FILE -mindepth LEVEL -maxdepth LEVEL -links N -size N -empty -type [bcdpflsD] -inum N -exec COMMAND -execdir COMMAND -ea NAME -eaWithValue NAME===VALUE -setEA NAME[===VALUE] -deleteEA NAME -gpfsImmut -gpfsAppOnly -gpfsEnc -gpfsPool POOL_NAME -gpfsMigrate poolFrom,poolTo -gpfsSetPool poolTo -gpfsCompress -gpfsUncompress -gpfsSetRep m,r -gpfsWeight NumericExpr -ls -fls -print -fprint -print0 -fprint0 -exclude PATH -xargs [-L maxlines] [-I rplstr] COMMAND Give -h for a more verbose usage message From: Kevin Doyle To: gpfsug main discussion list Date: 12/27/2019 08:57 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool How do I specify the folder to move it to which needs to be different from the current location. Thanks Kevin RULE ['RuleName'] [WHEN TimeBooleanExpression] MIGRATE [COMPRESS ({'yes' | 'no' | 'lz4' | 'z'})] [FROM POOL 'FromPoolName'] [THRESHOLD (HighPercentage[,LowPercentage[,PremigratePercentage]])] [WEIGHT (WeightExpression)] TO POOL 'ToPoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [SHOW (['String'] SqlExpression)] [SIZE (numeric-sql-expression)] [ACTION (SqlExpression)] [WHERE SqlExpression] Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk /Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1131538866 From: on behalf of Yaron Daniel Reply-To: gpfsug main discussion list Date: Friday, 27 December 2019 at 12:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Question about Policies Hi U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. Regards Yaron Daniel 94 Em Ha'Moshavot Rd cid:_1_10392F3C103929880046F589C22584DD Storage Architect ? 
IL Lab Petach Tiqva, 49527 Services (Storage) IBM Global Markets, Systems HW Israel Sales Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel cid:_2_103C9B0C103C96FC0046F589C22584DD cid:_2_103C9D14103C96FC0046F589C22584DD cid:_2_103C9F1C103C96FC0046F589C22584DD cid:_2_103CA124103C96FC0046F589C22584DD cid:_2_103CA32C103C96FC0046F589C22584DD cid:_2_103CA534103C96FC0046F589C22584DD cid:_2_103CA73C103C96FC0046F589C22584DD cid:_2_103CA944103C96FC0046F589C22584DD From: Kevin Doyle To: "gpfsug-discuss at spectrumscale.org" Date: 27/12/2019 13:45 Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? Many thanks Kevin Kevin Doyle | Linux Administrator, Scientific Computing Cancer Research UK, Manchester Institute The University of Manchester Room 13G40, Alderley Park, Macclesfield SK10 4TG Mobile: 07554 223480 Email: Kevin.Doyle at manchester.ac.uk /Users/kdoyle/Library/Containers/com.microsoft.Outlook/Data/Library/Caches/Signatures/signature_1799188038 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=w3zKI5uOkIxqfgnHm53Al4Q3apC0htUiiuFcMnh2U9s&s=rkD5iWzjhbTA_9kEHL9Laggb4NGjiYS4qoM8yXbAoyM&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16547711.gif Type: image/gif Size: 16051 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16942257.gif Type: image/gif Size: 1115 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16264175.jpg Type: image/jpeg Size: 3848 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16010102.jpg Type: image/jpeg Size: 4267 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16098719.jpg Type: image/jpeg Size: 3748 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16043707.jpg Type: image/jpeg Size: 3794 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 16546771.jpg Type: image/jpeg Size: 4302 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16875824.jpg Type: image/jpeg Size: 3740 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16069185.jpg Type: image/jpeg Size: 3856 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16639470.jpg Type: image/jpeg Size: 4339 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 16809363.gif Type: image/gif Size: 16052 bytes Desc: not available URL: From david_johnson at brown.edu Fri Dec 27 14:20:13 2019 From: david_johnson at brown.edu (david_johnson at brown.edu) Date: Fri, 27 Dec 2019 09:20:13 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: Message-ID: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> You would want to look for examples of external scripts that work on the result of running the policy engine in listing mode. The one issue that might need some attention is the way that gpfs quotes unprintable characters in the pathname. So the policy engine generates the list and your external script does the moving. -- ddj Dave Johnson > On Dec 27, 2019, at 9:11 AM, Yaron Daniel wrote: > > ?Hi > > As you said it migrate between different pools (ILM/External - Tape) - so in case you need to move directory to different location - you will have to use the OS mv command. > From what i remember there is no directory policy for the same pool. > > > > Regards > > > > > Yaron Daniel 94 Em Ha'Moshavot Rd > > Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 > IBM Global Markets, Systems HW Sales Israel > > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > Webex: https://ibm.webex.com/meet/yard > IBM Israel > > > > > > > > > > > > > > > > > From: Kevin Doyle > To: gpfsug main discussion list > Date: 27/12/2019 15:57 > Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > Hi > > Thanks for your reply, once I have a set of files that are older than 30 days would I then use the migration rule to move them ? > > Looking at the migration Rule syntax implies a ?From? pool and a ?To? pool, I only have a single pool so would I use the same pool name for From and To ? if it is the same pool > How do I specify the folder to move it to which needs to be different from the current location. 
> > Thanks > Kevin > > RULE['RuleName'] [WHENTimeBooleanExpression] > MIGRATE [COMPRESS({'yes' | 'no' | 'lz4' | 'z'})] > [FROM POOL'FromPoolName'] > [THRESHOLD(HighPercentage[,LowPercentage[,PremigratePercentage]])] > [WEIGHT(WeightExpression)] > TO POOL'ToPoolName' > [LIMIT(OccupancyPercentage)] > [REPLICATE(DataReplication)] > [FOR FILESET('FilesetName'[,'FilesetName']...)] > [SHOW(['String'] SqlExpression)] > [SIZE(numeric-sql-expression)] > [ACTION(SqlExpression)] > [WHERESqlExpression] > > > Kevin Doyle | Linux Administrator, Scientific Computing > Cancer Research UK, Manchester Institute > The University of Manchester > Room 13G40, Alderley Park, Macclesfield SK10 4TG > Mobile: 07554 223480 > Email: Kevin.Doyle at manchester.ac.uk > > > > > > From: on behalf of Yaron Daniel > Reply-To: gpfsug main discussion list > Date: Friday, 27 December 2019 at 12:55 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Question about Policies > > Hi > > U can create list of diretories in output file which were not modify in the last 30 days, and than second script will move this directories to the new location that u want. > > > > Regards > > > > Yaron Daniel 94 Em Ha'Moshavot Rd > > Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 > IBM Global Markets, Systems HW Sales Israel > > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > Webex: https://ibm.webex.com/meet/yard > IBM Israel > > > > > > > > > > > > > > > > > From: Kevin Doyle > To: "gpfsug-discuss at spectrumscale.org" > Date: 27/12/2019 13:45 > Subject: [EXTERNAL] [gpfsug-discuss] Question about Policies > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > Hi > > I work for Cancer Research UK MI in Manchester UK. I am new to GPFS and have been tasked with creating a policy that will > Move files older that 30 days to a new folder within the same pool. There are a lot of files so using a policy based move will be faster. > I have read about the migration Rule but it states a source and destination pool, we only have one pool. Will it work if I define the same source and destination pool ? > > Many thanks > Kevin > > > Kevin Doyle | Linux Administrator, Scientific Computing > Cancer Research UK, Manchester Institute > The University of Manchester > Room 13G40, Alderley Park, Macclesfield SK10 4TG > Mobile: 07554 223480 > Email: Kevin.Doyle at manchester.ac.uk > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri Dec 27 14:27:43 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 27 Dec 2019 14:27:43 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk> References: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>, <300DA165-5671-469D-A30C-0FAD60B6FE13@contoso.com> Message-ID: An HTML attachment was scrubbed... 
URL: 

From daniel.kidger at uk.ibm.com Fri Dec 27 14:30:46 2019
From: daniel.kidger at uk.ibm.com (Daniel Kidger)
Date: Fri, 27 Dec 2019 14:30:46 +0000
Subject: [gpfsug-discuss] Question about Policies
In-Reply-To: <9125CC21-B742-497D-9659-89B17A0575F7@manchester.ac.uk>
Message-ID: 
An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Image.image011.png at 01D5BCBD.7015DEE0.png Type: image/png Size: 16052 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Sat Dec 28 15:17:05 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Sat, 28 Dec 2019 15:17:05 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: On 27/12/2019 14:20, david_johnson at brown.edu wrote: > You would want to look for examples of external scripts that work on the > result of running the policy engine in listing mode. ?The one issue that > might need some attention is the way that gpfs quotes unprintable > characters in the pathname. So the policy engine generates the list and > your external script does the moving. > In my experience a good starting point would be to scan the list of files from the policy engine and separate the files out into "normal"; that is files using basic ASCII and no special characters and the rest also known as the "wacky pile". Given that you are UK based it is not unreasonable to expect all path and file names to be in English. There might (and if not probably should) be an institutional policy mandating it. Not much use if a researcher saves everything in Greek then gets knocked over by a bus and person picking up the work is Spanish for example. Hopefully the "wacky pile" is small, however expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters to name but a few. Depending on the amount of data in the "wacky" pile you might just want to forget about moving them, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time trying to deal with them in scripts. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From Paul.Sanchez at deshaw.com Sat Dec 28 17:07:15 2019 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Sat, 28 Dec 2019 17:07:15 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: <9ce3971faea5493daa133b08e4a0113e@deshaw.com> If you needed to preserve the "wackiness" of the original file and pathnames (and I'm assuming you need to preserve the pathnames in order to avoid collisions between migrated files from different directories which have the same basename, and to allow the files to found/recovered again later, etc) then you can use Marc's `mmfind` suggestion, coupled with the -print0 argument to produce a null-delimited file list which could be coupled with an "xargs -0" pipeline or "rsync -0" to do most of the work. Test everything with a "dry-run" mode which reported what it would do, but without doing it, and one which copied without deleting, to help expose bugs in the process before destroying your data. If the migration doesn't cross between independent filesets, then file migrations could be performed using "mv" without any actual data copying. (For that matter, it could also be done in two stages by hard-linking, then unlinking.) But I think that there are other potential problems involved, even before considering things like path escaping or fileset boundaries... 
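(To give that pipeline a concrete shape, a rough dry-run sketch: the source and target paths below are invented, and it assumes the mmfind sample, typically under /usr/lpp/mmfs/samples/ilm, has been built.)

    cd /usr/lpp/mmfs/samples/ilm
    # print the mv commands rather than executing them; drop the "echo"
    # only once the output looks right
    ./mmfind /gpfs/fs1/lab -type f -mtime +30 -print0 \
        | xargs -0 -I{} echo mv -- {} /gpfs/fs1/archive/

Note that pushing everything into a single target directory flattens the tree and will collide on duplicate basenames, which is one of the concerns raised above.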
If everything is predicated on the age of a file, you will need to create the missing directory hierarchy in the target dir structure for files which need to be "migrated". If files in a directory vary in age, you may move some files but leave others alone (until they become old enough to migrate) creating incomplete and probably unusable versions at both the source and target. What if a user recreates the missing files as they disappear? As they later age, do you overwrite the files on the target? What if a directory name is later changed to a filename or vice-versa? Will you ever need to "restore" these structures? If so, will you merge these back in to the original source if both non-empty source and target dirs exist? Should we wait for an entire dir hierarchy to age out and then archive it atomically? (We would want a way to know where project dir boundaries are.) I would urge you to think about how complex this might actually get before start performing surgery within data sets. I would be inclined to challenge the original requirements to ensure that what you are able to accomplish matches up with the real goals without creating a raft of new operational problems or loss of work product. Depending on the original goal, it may be possible to do this (more safely) with snapshots or tarballs. -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: Saturday, December 28, 2019 10:17 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Question about Policies This message was sent by an external party. On 27/12/2019 14:20, david_johnson at brown.edu wrote: > You would want to look for examples of external scripts that work on > the result of running the policy engine in listing mode. The one > issue that might need some attention is the way that gpfs quotes > unprintable characters in the pathname. So the policy engine generates > the list and your external script does the moving. > In my experience a good starting point would be to scan the list of files from the policy engine and separate the files out into "normal"; that is files using basic ASCII and no special characters and the rest also known as the "wacky pile". Given that you are UK based it is not unreasonable to expect all path and file names to be in English. There might (and if not probably should) be an institutional policy mandating it. Not much use if a researcher saves everything in Greek then gets knocked over by a bus and person picking up the work is Spanish for example. Hopefully the "wacky pile" is small, however expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters to name but a few. Depending on the amount of data in the "wacky" pile you might just want to forget about moving them, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time trying to deal with them in scripts. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Sat Dec 28 19:49:01 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Sat, 28 Dec 2019 14:49:01 -0500 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file lists perfectly. No need to worry about whitespaces and so forth. Give it a look-see and a try -- marc of GPFS - From: Jonathan Buzzard To: "gpfsug-discuss at spectrumscale.org" Date: 12/28/2019 10:17 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org On 27/12/2019 14:20, david_johnson at brown.edu wrote: > You would want to look for examples of external scripts that work on the > result of running the policy engine in listing mode. ?The one issue that > might need some attention is the way that gpfs quotes unprintable > characters in the pathname. So the policy engine generates the list and > your external script does the moving. > In my experience a good starting point would be to scan the list of files from the policy engine and separate the files out into "normal"; that is files using basic ASCII and no special characters and the rest also known as the "wacky pile". Given that you are UK based it is not unreasonable to expect all path and file names to be in English. There might (and if not probably should) be an institutional policy mandating it. Not much use if a researcher saves everything in Greek then gets knocked over by a bus and person picking up the work is Spanish for example. Hopefully the "wacky pile" is small, however expect to find all sorts of bizarre file and path names in it. We are talking wildcards, back ticks, even newline characters to name but a few. Depending on the amount of data in the "wacky" pile you might just want to forget about moving them, as they are orders of magnitude more difficult to deal with than files with "sane" path and file names and can rapidly soak up large chunks of time trying to deal with them in scripts. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=ndS4tGx_CLuYWNl3PoYZUZGMwTDw0IFQAVCovuw2qbc&s=VLuDBejMqsG2ggu2YNluBW2c_g-bpbNluifBXQNHRM4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: 

From jonathan.buzzard at strath.ac.uk Sun Dec 29 10:01:16 2019
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Sun, 29 Dec 2019 10:01:16 +0000
Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
In-Reply-To: 
References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu>
Message-ID: 
On 28/12/2019 19:49, Marc A Kaplan wrote:
> The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file
> lists perfectly. No need to worry about whitespaces and so forth.
> Give it a look-see and a try
>

Indeed, but I get the feeling from the original post that you will need to mung the path/file names to produce a new directory path that the files are to be moved to.
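(As a very rough sketch of that munging step, a stub EXEC script for an EXTERNAL LIST rule; the script path, the old_data/ convention and the naive parsing are all invented for illustration, and decoding GPFS's escaping of awkward characters properly is exactly what bin/mmxargs demonstrates.)

    #!/bin/bash
    # paired with policy rules along the lines of:
    #   RULE EXTERNAL LIST 'old30' EXEC '/usr/local/sbin/move_old.sh'
    #   RULE 'find-old' LIST 'old30'
    #     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(MODIFICATION_TIME)) > 30
    # mmapplypolicy first probes the script with TEST, then calls it with
    # LIST and the name of a file list
    op=$1; filelist=$2
    [ "$op" = "TEST" ] && exit 0
    [ "$op" = "LIST" ] || exit 0
    # each record is roughly "inode gen snapid -- pathname", with unusual
    # characters escaped; mmfs/bin/mmxargs shows how to decode them safely
    while IFS= read -r rec; do
        path=${rec#* -- }              # naive split, fine only for sane names
        dir=$(dirname "$path")
        mkdir -p "$dir/old_data" && mv -- "$path" "$dir/old_data/"
    done < "$filelist"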
At this point the whole issue of "wacky" directory and file names will rear its ugly head. So for example

/gpfs/users/joeblogs/experiment`1234?/results *-12-2019.txt

would need moving to something like

/gpfs/users/joeblogs/experiment`1234?/old_data/results *-12-2019.txt

That is a pit of woe unless you are confident that users are being sensible, or you just forget about wackily named files.

In a similar vein, in the past I have zipped up results coming off a piece of experimental equipment every 30 days. Each run on the equipment and its results go in a different directory. So for example the directory /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01/ would be zipped up to /gpfs/users/joeblogs/nmr_spectroscopy/2019/results-1229-01.zip and the original directory removed. This works well because both Windows Explorer and Finder will allow you to click into the zip files to see the contents. However the script that did this worked on the principle of a very strict naming convention; if that was not adhered to, the folders were not zipped up.

Given the original poster's institution, a good guess is that something like this is what is wanted.

JAB.

-- 
Jonathan A. Buzzard    Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From makaplan at us.ibm.com Sun Dec 29 14:24:28 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Sun, 29 Dec 2019 09:24:28 -0500
Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
In-Reply-To: 
References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu>
Message-ID: 
Correct, you may need to use similar parsing/quoting techniques in your renaming scripts.

Just remember, in Unix/Posix/Linux the only 2 special characters/codes in path names are '/' and \0. The former delimits directories and the latter marks the end of the string. And technically the latter isn't ever in a path name; it's only used by system APIs to mark the end of a string that is the pathname argument.

Happy New Year,

From: Jonathan Buzzard 
To: "gpfsug-discuss at spectrumscale.org" 
Date: 12/29/2019 05:01 AM
Subject: [EXTERNAL] Re: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs
Sent by: gpfsug-discuss-bounces at spectrumscale.org

On 28/12/2019 19:49, Marc A Kaplan wrote:
> The script in mmfs/bin/mmxargs handles mmapplypolicy EXTERNAL LIST file
> lists perfectly. No need to worry about whitespaces and so forth.
> Give it a look-see and a try
>

Indeed, but I get the feeling from the original post that you will need to mung the path/file names to produce a new directory path that the files are to be moved to.
URL: From makaplan at us.ibm.com Mon Dec 30 16:29:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 11:29:52 -0500 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: Also see if your distribution includes samples/ilm/mmxcp which, if you are determined to cp or mv from one path to another, shows a way to do that easily in perl, using code similar to the aforementions bin/mmxargs Here is the path changing part... ... $src =~ s/'/'\\''/g; # any ' within the name like x'y become x'\''y then we quote all names passed to commands my @src = split('/',$src); my $sra = join('/', @src[$strip+1..$#src-1]); $newtarg = "'" . $target . '/' . $sra . "'"; ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Mon Dec 30 21:48:00 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 21:48:00 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: On 30/12/2019 16:20, Marc A Kaplan wrote: > Now apart from the mechanics of handling and manipulating pathnames ... > > the idea to manage storage by "mv"ing instead of MIGRATEing (GPFS-wise) > may be ill-advised. > > I suspect this is a hold-over or leftover from the old days -- when a > filesystem was comprised of just a few storage devices (disk drives) and > the only way available to manage space was to mv files to another > filesystem or archive to tape or whatnot.. > I suspect based on the OP is from (a cancer research institute which is basically life sciences) that this is an incorrect assumption. I would guess this is about "archiving" results coming off experimental equipment. I use the term "archiving" in the same way that various email programs try and "archive" my old emails. That is to prevent the output directory of the equipment filling up with many thousands of files and/or directories I want to automate the placement in a directory hierarchy of old results. Imagine a piece of equipment that does 50 different analysis's a day every working day. That's a 1000 a month or ~50,000 a year. It's about logically moving stuff to keep ones working directory manageable but making finding an old analysis easy to find. I would also note that some experimental equipment would do many more than 50 different analysis's a day. It's a common requirement in any sort of research facility, especially when they have central facilities for doing analysis on equipment that would be too expensive for an individual group or where it makes sense to "outsource" repetitive basics analysis to lower paid staff. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Mon Dec 30 22:14:18 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 22:14:18 +0000 Subject: [gpfsug-discuss] Question about Policies - using mmapplypolicy/EXTERNAL LIST/mmxargs In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> Message-ID: <3127843a-403f-d360-4b6c-9b410c9ef39d@strath.ac.uk> On 29/12/2019 14:24, Marc A Kaplan wrote: > Correct, you may need to use similar parsing/quoting techniques in your > renaming scripts. 
> 0 > Just remember, in Unix/Posix/Linux the only 2 special characters/codes > in path names are '/' and \0. The former delimits directories and the > latter marks the end of the string. > And technically the latter isn't ever in a path name, it's only used by > system APIs to mark the end of a string that is the pathname argument. >i I am not sure even that is entirely true. Certainly MacOS X in the past would allow '/' in file names. You find this out when a MacOS user tries to migrate their files to a SMB based file server and the process trips up because they have named a whole bunch of files in the format "My Results 30/12/2019.txt" At this juncture I note that MacOS is certified Unix :-) I think it is more a file system limitation than anything else. I wonder what happens when you mount a HFS+ file system with such named files on Linux... I would at this point note that the vast majority of "wacky" file names originate from MacOS (both Classic and X) users. Also while you are otherwise technically correct about what is allowed in a file name just try creating a file name with a newline character in it using either a GUI tool or the command line. You have to be really determined to achieve it. I have also seen \007 in a file name, I mean really. Our training for new HPC users has a section covering file names which includes advising users not to use "wacky" characters in them as we don't guarantee their continued survival. That is if we do something on the file system and they get "lost" as a result it's your own fault. In my view restricting yourself to the following is entirely sensible https://docs.microsoft.com/en-us/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata Also while Unix is generally case sensitive creating files that would clash if accessed case insensitive is really dumb and should be avoided. Again, if it causes you problems in future, it sucks to be you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From makaplan at us.ibm.com Mon Dec 30 23:35:02 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 18:35:02 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu><9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: Yes, that is entirely true, if not then basic Posix calls like open(2) are broken. https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Mon Dec 30 23:40:37 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Dec 2019 18:40:37 -0500 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu><9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: As I said :"MAY be ill-advised". If you have a good reason to use "mv" then certainly, use it! But there are plenty of good naming conventions for the scenario you give... Like, start a new directory of results every day, week or month... /fs/experiments/y2019/m12/d30/fileX.ZZZ ... OF course, if you want or need to mv, or cp and/or rm the metadata out of the filesystem, then eventually you do so! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Mon Dec 30 23:55:17 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 30 Dec 2019 23:55:17 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: <09180fd7-8121-02d6-6384-8ef4b9c7decd@strath.ac.uk> On 30/12/2019 23:40, Marc A Kaplan wrote: > As I said :"MAY be ill-advised". > > If you have a good reason to use "mv" then certainly, use it! > > But there are plenty of good naming conventions for the scenario you > give... > Like, start a new directory of results every day, week or month... > > > /fs/experiments/y2019/m12/d30/fileX.ZZZ ... > > OF course, if you want or need to mv, or cp and/or rm the metadata out > of the filesystem, then eventually you do so! > Possibly, but often (in fact sensibly) the results are saved in the first instance to the local machine because any network issue and boom your results are gone as doing the analysis destroys the sample. That in life sciences can easily mean several days and $1000. The results are then uploaded automatically to the file server. That gets a whole bunch more complicated. Honest you simply don't want to go there getting it to be done different. It would be less painful to have a tooth extracted without anesthetic. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Tue Dec 31 00:00:06 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 31 Dec 2019 00:00:06 +0000 Subject: [gpfsug-discuss] Question about Policies In-Reply-To: References: <0C26DC6B-A78A-469B-AD2B-218FABBFE3FB@brown.edu> <9ce3971faea5493daa133b08e4a0113e@deshaw.com> Message-ID: On 30/12/2019 23:35, Marc A Kaplan wrote: > Yes, that is entirely true, if not then basic Posix calls like open(2) > are broken. > > _https://stackoverflow.com/questions/9847288/is-it-possible-to-use-in-a-filename_ > > That's for Linux and possibly Posix. Like I said on the certified *Unix* that is macOS it's perfectly fine. I have bumped into it more times that I care to recall. Try moving a MacOS AFP server to a different OS and then get back to me... JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG