From dieter.gorecki at atos.net Fri Jun 2 11:12:25 2023 From: dieter.gorecki at atos.net (DIETER GORECKI) Date: Fri, 2 Jun 2023 10:12:25 +0000 Subject: [gpfsug-discuss] AFM synced directory size Message-ID: Hi, I am currently doing an AFM-based synchronization between 2 GPFS filesystems using a multicluster connection. It works quite well apart from the fact that on the cache FS we noticed directories take 4x the size they have on the home FS: [root at node ~]# stat /newfs/fileset/dir File: /newfs/fileset/dir Size: 16384 Blocks: 32 IO Block: 262144 directory Device: 2eh/46d Inode: 14893057 Links: 25 Access: (2775/drwxrwsr-x) Uid: ( xxxx/ UNKNOWN) Gid: ( yyyy/ UNKNOWN) Access: 2023-06-02 08:09:25.659095673 +0000 Modify: 2023-01-27 08:56:09.636343000 +0000 Change: 2023-06-01 13:22:08.972571000 +0000 Birth: - [root at node ~]# stat /oldFS/fileset/dir File: /oldFS/fileset/dir Size: 4096 Blocks: 1 IO Block: 131072 directory Device: 32h/50d Inode: 8590516352 Links: 25 Access: (2775/drwxrwsr-x) Uid: ( xxxx/ UNKNOWN) Gid: ( yyyy/ UNKNOWN) Access: 2023-06-02 09:09:40.483041330 +0000 Modify: 2023-01-27 08:56:09.636343000 +0000 Change: 2023-01-27 08:56:09.644167000 +0000 Birth: - I saw somewhere that AFM extended attributes should take around 200 bytes, so I am a bit puzzled about why there is such a large difference here. I disabled the AFM relationship between the synced filesets but the size stays the same. If I create a directory manually on the new filesystem, its size is 4k as expected. Any idea why we get this behaviour? GPFS version is 5.1.6.1 on the new cluster, 5.1.2.8 on the old cluster. Thanks, Dieter -------------- next part -------------- An HTML attachment was scrubbed... URL: From prasad.surampudi at theatsgroup.com Mon Jun 5 14:24:01 2023 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Mon, 5 Jun 2023 13:24:01 +0000 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet Message-ID: Does anyone know if we can connect an IBM ESS 3200 to a customer 10 Gb ethernet network? If so, what cables do we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 Gb fanout connections? Prasad Surampudi | Sr. Systems Engineer prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Mon Jun 5 16:52:11 2023 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Mon, 5 Jun 2023 15:52:11 +0000 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: Prasad, You would be better off looking at the following: https://www.nvidia.com/en-au/networking/ethernet/cable-accessories/ This will allow you to adapt from the QSFP+ 100Gb transceiver down to an SFP+ 10Gb transceiver and then use standard OM4 cable and Cisco 10Gb transceivers at the other end. However I will also comment that using a 10Gb network for an all-flash ESS is a bit like buying an F1 racing car and never taking it out of first gear. You would honestly be better off investing in a single 100Gb switch to at least build the ESS cluster at 100Gbit and get the full potential bandwidth from your investment in NVMe. Otherwise buy an FS5200 + 2 x86 servers and some Scale licensing, because you are down-rating the ESS3200 to less than 25% of its potential performance. Regards, Andrew Beattie Technical Sales Specialist - Storage for Big Data & AI IBM Australia and New Zealand P. +61 421 337 927 E.
abeattie at au1.ibm.com Twitter: AndrewJBeattie LinkedIn: ________________________________ From: gpfsug-discuss on behalf of Prasad Surampudi Sent: Monday, June 5, 2023 11:24:01 PM To: gpfsug-discuss at gpfsug.org Subject: [EXTERNAL] [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb ethernet network? If so what cables we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? Prasad Surampudi | Sr. Systems ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. ZjQcmQRYFpfptBannerEnd Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb ethernet network? If so what cables we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? Prasad Surampudi | Sr. Systems Enginee prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.ward at nhm.ac.uk Mon Jun 5 17:24:18 2023 From: p.ward at nhm.ac.uk (Paul Ward) Date: Mon, 5 Jun 2023 16:24:18 +0000 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: We were supplied with additional 10Gbps cards in our ESS. We didn?t realise there were two identical 10gbps cards in the chassis which lead to some confusion? I finally twigged and realised the cables were plugged into one, and the configuration was applied to the other ? Sorry network team it wasn?t a problem with the network! Kindest regards, Paul Paul Ward TS Infrastructure Architect Natural History Museum T: 02079426450 E: p.ward at nhm.ac.uk [A picture containing drawing Description automatically generated] From: gpfsug-discuss On Behalf Of Andrew Beattie Sent: Monday, June 5, 2023 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet Prasad, You would be better off looking at the following: https://www.nvidia.com/en-au/networking/ethernet/cable-accessories/ This will allow you to adapt from the QSFP+ 100GB transceiver down to a SFP+ 10GB transceiver and then use standard OM4 cable and Cisco 10GB transceivers at the other end. However I will also comment that using 10GB network for an all flash ESS is a bit like buying an F1 racing car and never taking it out of first gear. You would honestly be better investing in a single 100GB switch to at least build the ESS cluster at 100Gbit and getting the full potential bandwidth from your investment in NVMe Otherwise buy a FS5200 + 2 x86 servers and some scale licensing, because you are down rating the ESS3200 to less than 25% of its potential performance Regards, Andrew Beattie Technical Sales Specialist - Storage for Big Data & AI IBM Australia and New Zealand P. +61 421 337 927 E. abeattie at au1.ibm.com Twitter: AndrewJBeattie LinkedIn: ________________________________ From: gpfsug-discuss > on behalf of Prasad Surampudi > Sent: Monday, June 5, 2023 11:24:01 PM To: gpfsug-discuss at gpfsug.org > Subject: [EXTERNAL] [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb ethernet network? If so what cables we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? Prasad Surampudi | Sr. 
Systems ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. ZjQcmQRYFpfptBannerEnd Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb ethernet network? If so what cables we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? Prasad Surampudi | Sr. Systems Enginee prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 5356 bytes Desc: image001.jpg URL: From jonathan.buzzard at strath.ac.uk Mon Jun 5 17:27:05 2023 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 5 Jun 2023 17:27:05 +0100 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: <5d68faec-1229-ed22-d814-84fa306a683c@strath.ac.uk> On 05/06/2023 14:24, Prasad Surampudi wrote: > > Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb > ethernet network? If so what cables ?we need to connect it to a > CiscoC93180YC-EXswitch? Does it support 100 Gb to 4x10 GB fanout > connections? > From the connecting devices perspective the 4x10Gbps fanout *should* be just another 10Gbps connection. As far as cables go, you could always play it safe and use vendor supplied SR transceivers either end. However why 10Gbps? The switch in question has 48 10/25Gbps ports and six 40/100Gbps ports. NVMe flash at 10Gbps is as daft as a brush IMHO. That said none of that is a guarantee as we currently have Lenovo servers with Lenovo SR transceivers in that are refusing to talk to our HPE SN3700M (a rebadged Mellanox SN3700M) on a fanout with both QSFP+ and QSFP28 transceivers. Crazy thing is I can pull out the Lenovo branded transceiver from the Lenovo server and plug into a Cisco X710 card in a Dell server and get link with the same fibre connection. Then again the latest firmware for the X722 cards in these Lenovo servers still won't work with a mix of DAC and transceiver plugged in, they just shut down both ports. Reported that to Lenovo last year, a couple of new firmware releases this year still no fix and no warning either of the world of pain you could be in for with a firmware upgrade. I give Lenovo 0/10 for still not having a huge warning in the changelog about that. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From anacreo at gmail.com Mon Jun 5 17:41:22 2023 From: anacreo at gmail.com (Alec) Date: Mon, 5 Jun 2023 12:41:22 -0400 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: Many network storage engineers simply don't understand bandwidth (or the importance of port groups)... With an ESS you are talking GIGA*BYTES* per second and storage and networking architects simply see 10Gbe and assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this until you're blue in the face and they won't get it. You need to divide Gbe by 8 to get about the GB/s throughput. Explain to them that a USB-C interface is capable of 10Gbe... So you're throttling millions of dollars in technology to the speed of a consumer grade USB-C interface. 
When infact an ESS can drive at least 2x100Gbe to saturation. I don't have an ESS but a classic SAN array and Spectrum Scale and I can saturate 32*8Gbe fiber connections. Alec On Mon, Jun 5, 2023, 8:56 AM Andrew Beattie wrote: > Prasad, > > You would be better off looking at the following: > > https://www.nvidia.com/en-au/networking/ethernet/cable-accessories/ > > This will allow you to adapt from the QSFP+ 100GB transceiver down to a > SFP+ 10GB transceiver and then use standard OM4 cable and Cisco 10GB > transceivers at the other end. > > However I will also comment that using 10GB network for an all flash ESS > is a bit like buying an F1 racing car and never taking it out of first > gear. > > You would honestly be better investing in a single 100GB switch to at > least build the ESS cluster at 100Gbit and getting the full potential > bandwidth from your investment in NVMe > > Otherwise buy a FS5200 + 2 x86 servers and some scale licensing, because > you are down rating the ESS3200 to less than 25% of its potential > performance > > > Regards, > > Andrew Beattie > Technical Sales Specialist - Storage for Big Data & AI > IBM Australia and New Zealand > P. +61 421 337 927 > E. abeattie at au1.ibm.com > Twitter: AndrewJBeattie > LinkedIn: > ------------------------------ > *From:* gpfsug-discuss on behalf of > Prasad Surampudi > *Sent:* Monday, June 5, 2023 11:24:01 PM > *To:* gpfsug-discuss at gpfsug.org > *Subject:* [EXTERNAL] [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb > Ethernet > > Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb > ethernet network? If so what cables we need to connect it to a Cisco > C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? > Prasad Surampudi | Sr. Systems > ZjQcmQRYFpfptBannerStart > This Message Is From an External Sender > This message came from outside your organization. > > ZjQcmQRYFpfptBannerEnd > > > > Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb > ethernet network? If so what cables we need to connect it to a Cisco > C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? > > Prasad Surampudi | Sr. Systems Enginee > prasad.surampudi at theatsgr oup.com > | 302.419.5833 > > Innovative IT consulting & modern infrastructure solutions > www.theatsgroup.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From LJHenson at mdanderson.org Mon Jun 5 17:56:22 2023 From: LJHenson at mdanderson.org (Henson Jr.,Larry J) Date: Mon, 5 Jun 2023 16:56:22 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: We replaced two FS900 flash units with 8x8Gb FC on each unit with commodity Dell sever with two ESS3200 using 2x100GbE on each unit/controller and our metadata is 40% faster with 1.8 billion files in the file system. So it pays to upgrade the speed to as fast as you can afford. Best Regards, Larry Henson IT Engineering Storage Team Office (832) 750-1403 Cell (713) 702-4896 [cid:image001.png at 01D997A4.BC4C2D80] From: gpfsug-discuss On Behalf Of Alec Sent: Monday, June 5, 2023 11:41 AM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet THIS EMAIL IS A PHISHING RISK Do you trust the sender? 
The email address is: gpfsug-discuss-bounces at gpfsug.org While this email has passed our filters, we need you to review with caution before taking any action. If the email looks at all suspicious, click the Report a Phish button. Many network storage engineers simply don't understand bandwidth (or the importance of port groups)... With an ESS you are talking GIGA*BYTES* per second and storage and networking architects simply see 10Gbe and assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this until you're blue in the face and they won't get it. You need to divide Gbe by 8 to get about the GB/s throughput. Explain to them that a USB-C interface is capable of 10Gbe... So you're throttling millions of dollars in technology to the speed of a consumer grade USB-C interface. When infact an ESS can drive at least 2x100Gbe to saturation. I don't have an ESS but a classic SAN array and Spectrum Scale and I can saturate 32*8Gbe fiber connections. Alec On Mon, Jun 5, 2023, 8:56 AM Andrew Beattie > wrote: Prasad, You would be better off looking at the following: https://www.nvidia.com/en-au/networking/ethernet/cable-accessories/ This will allow you to adapt from the QSFP+ 100GB transceiver down to a SFP+ 10GB transceiver and then use standard OM4 cable and Cisco 10GB transceivers at the other end. However I will also comment that using 10GB network for an all flash ESS is a bit like buying an F1 racing car and never taking it out of first gear. You would honestly be better investing in a single 100GB switch to at least build the ESS cluster at 100Gbit and getting the full potential bandwidth from your investment in NVMe Otherwise buy a FS5200 + 2 x86 servers and some scale licensing, because you are down rating the ESS3200 to less than 25% of its potential performance Regards, Andrew Beattie Technical Sales Specialist - Storage for Big Data & AI IBM Australia and New Zealand P. +61 421 337 927 E. abeattie at au1.ibm.com Twitter: AndrewJBeattie LinkedIn: ________________________________ From: gpfsug-discuss > on behalf of Prasad Surampudi > Sent: Monday, June 5, 2023 11:24:01 PM To: gpfsug-discuss at gpfsug.org > Subject: [EXTERNAL] [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb ethernet network? If so what cables we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? Prasad Surampudi | Sr. Systems ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. ZjQcmQRYFpfptBannerEnd Does anyone know if we can connect IBM ESS 3200 to a customer 10 Gb ethernet network? If so what cables we need to connect it to a Cisco C93180YC-EX switch? Does it support 100 Gb to 4x10 GB fanout connections? Prasad Surampudi | Sr. Systems Enginee prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. 
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 19425 bytes Desc: image001.png URL: From jonathan.buzzard at strath.ac.uk Mon Jun 5 18:03:16 2023 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 5 Jun 2023 18:03:16 +0100 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: On 05/06/2023 17:41, Alec wrote: > > Many network storage engineers simply don't understand bandwidth (or the > importance of port groups)... With an ESS you are talking GIGA*BYTES* > per second and storage and networking architects simply see 10Gbe and > assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. > 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this > until you're blue in the face and they won't get it. You need to divide > Gbe by 8 to get about the GB/s throughput. Explain to them that a USB-C > interface is capable of 10Gbe... So you're throttling millions of > dollars in technology to the speed of a consumer grade USB-C interface. > When infact an ESS can drive at least 2x100Gbe to saturation. > > I don't have an ESS but a classic SAN array and Spectrum Scale and I can > saturate 32*8Gbe fiber connections. > You're kidding, right? That's basic competency for the job!!! JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From anacreo at gmail.com Mon Jun 5 18:14:42 2023 From: anacreo at gmail.com (Alec) Date: Mon, 5 Jun 2023 13:14:42 -0400 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: Sadly no, I'm not kidding... Network engineers tend to be more focused on availability and general needs, not the special needs of a high-speed data environment, and so I give them a pass. As I used to say, SAN engineers were Unix engineers who couldn't do Unix.. so they are what they are. I can't tell you how many times I've seen millions of dollars of hardware underperforming at a tiny fraction of its potential because someone didn't cable it with enough cables to do the job. Like a constrained ISL or something... Or they'll just discount the two storage ports that are RED with saturation and say 99% of ports are green... So no problem here. Alec On Mon, Jun 5, 2023, 10:05 AM Jonathan Buzzard < jonathan.buzzard at strath.ac.uk> wrote: > On 05/06/2023 17:41, Alec wrote: > > > > Many network storage engineers simply don't understand bandwidth (or the > > importance of port groups)... With an ESS you are talking GIGA*BYTES* > > per second and storage and networking architects simply see 10Gbe and > > assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. > > 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this > > until you're blue in the face and they won't get it. You need to divide > > Gbe by 8 to get about the GB/s throughput.
Explain to them that a USB-C > > interface is capable of 10Gbe... So you're throttling millions of > > dollars in technology to the speed of a consumer grade USB-C interface. > > When infact an ESS can drive at least 2x100Gbe to saturation. > > > > I don't have an ESS but a classic SAN array and Spectrum Scale and I can > > saturate 32*8Gbe fiber connections. > > > > Your kidding right? That's basic competency for the job!!! > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Mon Jun 5 18:44:15 2023 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Mon, 5 Jun 2023 17:44:15 +0000 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: In my experience, most people just don?t need the bandwidth, unless you are working in the HPC area. Even then, in many cases. We have our system hooked up correctly, and can get something like 20 GB per second with multinode jobs. But almost nobody does that. In reality, it prevents lots of small I/O from saturating the thing. I?m not saying that?s a good reason not to care about this, but it explains why it might not be obvious to non-HBC people. Back before we had robust monitoring (you know, get it running yesterday) to make sure that nodes didn?t come up without RDMA enabled, we ran for quite some time with it accidentally disabled without anyone reporting anything until a job with bug I/O finally came along and clobbered everything. I would?ve thought that would have been immediately apparent. Not so, at least in our environment. Sent from my iPhone On Jun 5, 2023, at 13:17, Alec wrote: ? Sadly no I'm not kidding... Network engineers tend to be more focused on availability and general needs not special needs of a high speed data environment and so I give them a pass. As I used to say SAN engineers were Unix engineers who couldn't do Unix.. so they are what they are. I can't tell you how many times I've seen millions of dollars or hardware under perming to a tiny fraction because someone didn't cable it with enough cables to do the job. Like constrained ISL or something... Or they'll just discount the two storage ports that are RED with saturation and say 99% of ports are green... So no problem here. Alec Alec On Mon, Jun 5, 2023, 10:05 AM Jonathan Buzzard > wrote: On 05/06/2023 17:41, Alec wrote: > > Many network storage engineers simply don't understand bandwidth (or the > importance of port groups)... With an ESS you are talking GIGA*BYTES* > per second and storage and networking architects simply see 10Gbe and > assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. > 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this > until you're blue in the face and they won't get it. You need to divide > Gbe by 8 to get about the GB/s throughput. Explain to them that a USB-C > interface is capable of 10Gbe... So you're throttling millions of > dollars in technology to the speed of a consumer grade USB-C interface. > When infact an ESS can drive at least 2x100Gbe to saturation. 
> > I don't have an ESS but a classic SAN array and Spectrum Scale and I can > saturate 32*8Gbe fiber connections. > Your kidding right? That's basic competency for the job!!! JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From sathyasrrperumal at gmail.com Mon Jun 5 19:25:49 2023 From: sathyasrrperumal at gmail.com (Sathya S R R Perumal) Date: Mon, 5 Jun 2023 23:55:49 +0530 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: It's often the case that vendors sell more than what is needed for profit maximizing, unless there is a genuine high speed requirement. Such a way that is so used to the general requirement that not to bother the basics. /Sathya On Mon, 5 Jun 2023, 22:48 Alec, wrote: > Sadly no I'm not kidding... Network engineers tend to be more focused on > availability and general needs not special needs of a high speed data > environment and so I give them a pass. > > As I used to say SAN engineers were Unix engineers who couldn't do Unix.. > so they are what they are. > > I can't tell you how many times I've seen millions of dollars or hardware > under perming to a tiny fraction because someone didn't cable it with > enough cables to do the job. Like constrained ISL or something... Or > they'll just discount the two storage ports that are RED with saturation > and say 99% of ports are green... So no problem here. > > Alec > > Alec > > On Mon, Jun 5, 2023, 10:05 AM Jonathan Buzzard < > jonathan.buzzard at strath.ac.uk> wrote: > >> On 05/06/2023 17:41, Alec wrote: >> > >> > Many network storage engineers simply don't understand bandwidth (or >> the >> > importance of port groups)... With an ESS you are talking GIGA*BYTES* >> > per second and storage and networking architects simply see 10Gbe and >> > assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. >> > 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this >> > until you're blue in the face and they won't get it. You need to >> divide >> > Gbe by 8 to get about the GB/s throughput. Explain to them that a >> USB-C >> > interface is capable of 10Gbe... So you're throttling millions of >> > dollars in technology to the speed of a consumer grade USB-C >> interface. >> > When infact an ESS can drive at least 2x100Gbe to saturation. >> > >> > I don't have an ESS but a classic SAN array and Spectrum Scale and I >> can >> > saturate 32*8Gbe fiber connections. >> > >> >> Your kidding right? That's basic competency for the job!!! >> >> >> JAB. >> >> -- >> Jonathan A. Buzzard Tel: +44141-5483420 >> HPC System Administrator, ARCHIE-WeSt. >> University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org >> > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From enrico.tagliavini at fmi.ch Tue Jun 6 08:48:59 2023 From: enrico.tagliavini at fmi.ch (Tagliavini, Enrico) Date: Tue, 6 Jun 2023 07:48:59 +0000 Subject: [gpfsug-discuss] Connecting IBM ESS 3200 to 10 Gb Ethernet In-Reply-To: References: Message-ID: <4743051d42674fbc2444ed917b5eafa80ee1d479.camel@fmi.ch> Just a gentle reminder: please always be respectful. People might have reasons you didn't think of, or might simply make a mistake. That is not a good reason to be disrespectful. This list is supposed to be a place for open communication and help between users. Thank you. Kind regards. -- Enrico Tagliavini Systems / Software Engineer enrico.tagliavini at fmi.ch Friedrich Miescher Institute for Biomedical Research Informatics Maulbeerstrasse 66 4058 Basel Switzerland On Mon, 2023-06-05 at 18:03 +0100, Jonathan Buzzard wrote: > On 05/06/2023 17:41, Alec wrote: > > > > Many network storage engineers simply don't understand bandwidth (or the > > importance of port groups)... With an ESS you are talking GIGA*BYTES* > > per second and storage and networking architects simply see 10Gbe and > > assume that's good enough. A 10Gbe connection can do about 1.4GB/s.. > > 100Gbe can do 12.5GB/s. To the wrong engineer you can explain this > > until you're blue in the face and they won't get it. You need to divide > > Gbe by 8 to get about the GB/s throughput. Explain to them that a USB-C > > interface is capable of 10Gbe... So you're throttling millions of > > dollars in technology to the speed of a consumer grade USB-C interface. > > When infact an ESS can drive at least 2x100Gbe to saturation. > > > > I don't have an ESS but a classic SAN array and Spectrum Scale and I can > > saturate 32*8Gbe fiber connections. > > > > Your kidding right? That's basic competency for the job!!! > > > JAB. > From jonathan.buzzard at strath.ac.uk Fri Jun 9 11:56:57 2023 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 9 Jun 2023 11:56:57 +0100 Subject: [gpfsug-discuss] Lenovo downloads Message-ID: Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? They have version 5.1.7, but nothing after, and, well, 5.1.7 does not support RHEL 8.8, or at least building the kernel module fails. Seems rather behind the times to only have 5.1.7 when my inbox was plastered yesterday with warnings about AFM not working on 5.1.11 and RHEL 8.8 (anyone know why I get two copies of everything from IBM My Notifications?). We don't use AFM so meh, but I am stymied at the moment on upgrading the cluster to 8.8 :-( Yes the DSS-G nodes are still running 8.4 EUS but we don't run EUS on our compute nodes so we need to move forward. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From gretchen at princeton.edu Fri Jun 9 13:37:00 2023 From: gretchen at princeton.edu (Gretchen L. Thiele) Date: Fri, 9 Jun 2023 12:37:00 +0000 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: It's under a new name probably.
In Fix Central, I had to change the product selector to IBM Storage Scale. Version 5.1.8.0 is there. Regards, Gretchen Thiele HPC Storage Administrator Princeton University > On Jun 9, 2023, at 6:56 AM, Jonathan Buzzard wrote: > > > Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? > > They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. > > Seems rather behind the times to only have 5.1.7 when my inbox was plastered yesterday with warnings about AFM not working on 5.1.11 and RHEL 8.8 (anyone know why I get two copies of everything from IBM My Notifications?). We don't use AFM so meh but I am stymied at the moment on upgrading the cluster to 8.8 :-( > > Yes the DSS-G nodes are still running 8.4 EUS but we don't run EUS on our compute nodes so we need to move forward. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From jonathan.buzzard at strath.ac.uk Fri Jun 9 16:13:17 2023 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 9 Jun 2023 16:13:17 +0100 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: On 09/06/2023 13:37, Gretchen L. Thiele wrote: > > It?s under a new name probably. In Fix Central, I had to change the product > selector to IBM Storage Scale. Version 5.1.8.0 is there. > Yeah I don't have access to Fix Central only having DSS-G systems. I have to access through Lenovo's Service Connect and clearly Lenovo are behind uploading 5.1.8, grrr JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From gretchen at princeton.edu Fri Jun 9 16:27:56 2023 From: gretchen at princeton.edu (Gretchen L. Thiele) Date: Fri, 9 Jun 2023 15:27:56 +0000 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: It was published to Fix Central on June 3rd. I can confirm that it works with RHEL 8.8. > On Jun 9, 2023, at 11:13 AM, Jonathan Buzzard wrote: > > On 09/06/2023 13:37, Gretchen L. Thiele wrote: >> It?s under a new name probably. In Fix Central, I had to change the product >> selector to IBM Storage Scale. Version 5.1.8.0 is there. > > Yeah I don't have access to Fix Central only having DSS-G systems. I have to access through Lenovo's Service Connect and clearly Lenovo are behind uploading 5.1.8, grrr From robert.horton at icr.ac.uk Fri Jun 9 16:34:35 2023 From: robert.horton at icr.ac.uk (Robert Horton) Date: Fri, 9 Jun 2023 15:34:35 +0000 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: I've no idea about the Lenovo downloads but glad it's not just me who gets 2 (or sometimes 3) copies of the notifications ? Rob -----Original Message----- From: gpfsug-discuss On Behalf Of Jonathan Buzzard Sent: 09 June 2023 11:57 To: gpfsug main discussion list Subject: [gpfsug-discuss] Lenovo downloads CAUTION: This email originated from outside of the ICR. Do not click links or open attachments unless you recognize the sender's email address and know the content is safe. Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? 
They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. Seems rather behind the times to only have 5.1.7 when my inbox was plastered yesterday with warnings about AFM not working on 5.1.11 and RHEL 8.8 (anyone know why I get two copies of everything from IBM My Notifications?). We don't use AFM so meh but I am stymied at the moment on upgrading the cluster to 8.8 :-( Yes the DSS-G nodes are still running 8.4 EUS but we don't run EUS on our compute nodes so we need to move forward. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org The Institute of Cancer Research: Royal Cancer Hospital, a charitable Company Limited by Guarantee, Registered in England under Company No. 534147 with its Registered Office at 123 Old Brompton Road, London SW7 3RP. This e-mail message is confidential and for use by the addressee only. If the message is received by anyone other than the addressee, please return the message to the sender by replying to it and then delete the message from your computer and network. From ulmer at ulmer.org Fri Jun 9 17:45:42 2023 From: ulmer at ulmer.org (Stephen Ulmer) Date: Fri, 9 Jun 2023 12:45:42 -0400 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: <5CDD6DF5-3CA8-4076-8C97-4123255F5921@ulmer.org> Be aware that RHEL 8.8 breaks AFM, if that is a concern. -- Stephen Ulmer Sent from a mobile device; please excuse auto-correct silliness. > On Jun 9, 2023, at 06:18, Jonathan Buzzard wrote: > > ? > Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? > > They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. > > Seems rather behind the times to only have 5.1.7 when my inbox was plastered yesterday with warnings about AFM not working on 5.1.11 and RHEL 8.8 (anyone know why I get two copies of everything from IBM My Notifications?). We don't use AFM so meh but I am stymied at the moment on upgrading the cluster to 8.8 :-( > > Yes the DSS-G nodes are still running 8.4 EUS but we don't run EUS on our compute nodes so we need to move forward. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From novosirj at rutgers.edu Fri Jun 9 18:01:55 2023 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Fri, 9 Jun 2023 17:01:55 +0000 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: I?ve reached out to Lenovo to ask them to upload the newer version. -- #BlackLivesMatter ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB A555B, Newark `' On Jun 9, 2023, at 06:56, Jonathan Buzzard wrote: Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. Seems rather behind the times to only have 5.1.7 when my inbox was plastered yesterday with warnings about AFM not working on 5.1.11 and RHEL 8.8 (anyone know why I get two copies of everything from IBM My Notifications?). We don't use AFM so meh but I am stymied at the moment on upgrading the cluster to 8.8 :-( Yes the DSS-G nodes are still running 8.4 EUS but we don't run EUS on our compute nodes so we need to move forward. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Fri Jun 9 18:08:13 2023 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Fri, 9 Jun 2023 17:08:13 +0000 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: Message-ID: <1962089E-3E52-4F08-A5E3-57C9C5D48BB8@rutgers.edu> On Jun 9, 2023, at 06:56, Jonathan Buzzard wrote: Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. BTW, you may well know this, but this shouldn?t be how you are determining whether it?s supported or not. The support matrix is here: https://www.ibm.com/docs/en/storage-scale?topic=STXKQY/gpfsclustersfaq.html#fsi 5.1.7 indeed does not support RHEL 8.8, and 5.1.8 only supports up to 4.18.0-477.13.1.el8_8. These do sometimes move, even for the same point release of Storage Scale, as more testing is done or new kernels come out, so it?s not a given that, say, 5.1.7.1, will remain at 3.10.0- 1160.90.1.el7 for a highest supported kernel release for RHEL 7.9. -- #BlackLivesMatter ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB A555B, Newark `' -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at hpe.com Mon Jun 12 08:57:34 2023 From: daniel.kidger at hpe.com (Kidger, Daniel) Date: Mon, 12 Jun 2023 07:57:34 +0000 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: <1962089E-3E52-4F08-A5E3-57C9C5D48BB8@rutgers.edu> References: <1962089E-3E52-4F08-A5E3-57C9C5D48BB8@rutgers.edu> Message-ID: Remember that all OEMs need to fully validate any new release of Storage Scale before making it available to their end customers. Not least because 1st and 2nd line support comes from the OEM itself, not IBM. Hopefully this validation process takes only a few months. 
Daniel Daniel Kidger HPC Storage Solutions Architect, EMEA daniel.kidger at hpe.com +44 (0)7818 522266 hpe.com [cid:9d997382-129c-4fb4-9800-76ebfff6b392] ________________________________ From: gpfsug-discuss on behalf of Ryan Novosielski Sent: 09 June 2023 18:08 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Lenovo downloads On Jun 9, 2023, at 06:56, Jonathan Buzzard wrote: Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. BTW, you may well know this, but this shouldn?t be how you are determining whether it?s supported or not. The support matrix is here: https://www.ibm.com/docs/en/storage-scale?topic=STXKQY/gpfsclustersfaq.html#fsi 5.1.7 indeed does not support RHEL 8.8, and 5.1.8 only supports up to 4.18.0-477.13.1.el8_8. These do sometimes move, even for the same point release of Storage Scale, as more testing is done or new kernels come out, so it?s not a given that, say, 5.1.7.1, will remain at 3.10.0- 1160.90.1.el7 for a highest supported kernel release for RHEL 7.9. -- #BlackLivesMatter ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB A555B, Newark `' -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Outlook-r5x3xjjm.png Type: image/png Size: 4185 bytes Desc: Outlook-r5x3xjjm.png URL: From ncalimet at lenovo.com Mon Jun 12 11:46:22 2023 From: ncalimet at lenovo.com (Nicolas CALIMET) Date: Mon, 12 Jun 2023 10:46:22 +0000 Subject: [gpfsug-discuss] [External] Lenovo downloads In-Reply-To: References: Message-ID: Hi, The GPFS / Spectrum Scale / Storage Scale packages for the 5.1.8-0 level have been published under their new name on the Lenovo ESD website last Thursday (2023-06-08). They should be found by highlighting the "Product" tab of the ESD top menu, select "Search Product", type in "storage scale" in the "Product Name" text field, then hit the "Search" button at the bottom. The following products should then be listed: Storage Scale Data Management Edition Storage Scale Advanced Edition Storage Scale Data Access Edition Storage Scale Erasure Code Storage Scale Standard Edition As I reminder, these packages are the "base" levels that are suited to non-GNR head nodes, i.e. they do not apply to and are not supported on Lenovo DSS-G at this time. Hope this helps, Regards -- Nicolas Calimet, PhD | HPC System Architect | Lenovo ISG | Meitnerstrasse 9, D-70563 Stuttgart, Germany | +49 71165690146 | https://www.lenovo.com/dssg -----Original Message----- From: gpfsug-discuss On Behalf Of Jonathan Buzzard Sent: Friday, June 9, 2023 12:57 To: gpfsug main discussion list Subject: [External] [gpfsug-discuss] Lenovo downloads Does anyone know what the situation is with getting access to the latest GPFS downloads from Lenovo? They have version 5.1.7, but nothing after, and well 5.1.7 does not support RHEL 8.8 or at least building the kernel module fails. 
Seems rather behind the times to only have 5.1.7 when my inbox was plastered yesterday with warnings about AFM not working on 5.1.11 and RHEL 8.8 (anyone know why I get two copies of everything from IBM My Notifications?). We don't use AFM so meh but I am stymied at the moment on upgrading the cluster to 8.8 :-( Yes the DSS-G nodes are still running 8.4 EUS but we don't run EUS on our compute nodes so we need to move forward. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From jonathan.buzzard at strath.ac.uk Mon Jun 12 12:27:30 2023 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 12 Jun 2023 12:27:30 +0100 Subject: [gpfsug-discuss] Lenovo downloads In-Reply-To: References: <1962089E-3E52-4F08-A5E3-57C9C5D48BB8@rutgers.edu> Message-ID: <364331c1-c8ef-ef7a-5948-3a359630244c@strath.ac.uk> On 12/06/2023 08:57, Kidger, Daniel wrote: > Remember that all OEMs need to fully validate any new release of Storage > Scale before making it available to their end customers. Not least > because 1^st ?and 2^nd ?line support comes from the OEM itself, not IBM. > > Hopefully this validation process takes only a few months. > No they don't. You need to draw a distinction between what I run on my DSS-G nodes and what I run on my hundreds of compute nodes, protocol nodes etc. The former does indeed need to be fully validated by the OEM, will only be the version from the DSS-G release bundle and run on genuine RHEL with extended update support. What I run on everything else needs doesn't need validation from the OEM IMHO. It is not running on their hardware is often not running on genuine RHEL, is certainly not running on an EUS version and is unlikely to be the same version as running on the DSS-G nodes. From a customer perspective with today's cyber threat landscape it needs releasing promptly, because running EUS on all my compute nodes is not economically viable. Anyway it looks like 5.1.8 was uploaded over the weekend so all is good. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From dominic.horn at atempo.com Mon Jun 12 15:24:53 2023 From: dominic.horn at atempo.com (Dominic Horn) Date: Mon, 12 Jun 2023 14:24:53 +0000 Subject: [gpfsug-discuss] Quick hello Message-ID: Greetings, everyone! I wanted to say hello and introduce myself. Our company, Atempo, is headquartered in France, near Paris, but we have branches worldwide. I'm based in Germany, near Hamburg. We use our data management software, Atempo Miria, for instance to help upgrade GPFS filesystems to their next version. Our market sector encompasses a wide range, from small businesses to enterprise customers, universities, and institutes. I'm thrilled to be a part of this group! Thanks, Dominic Mit freundlichem Gru? [Logo ATEMPO] [https://storage.letsignit.com/5f759fa4c789fb0015291bef/___Feuille_Facebook.jpg] [https://storage.letsignit.com/5f759fa4c789fb0015291bef/___Feuille_Linkedin.jpg] [https://storage.letsignit.com/5f759fa4c789fb0015291bef/___Feuille_TWitter.jpg] Dominic Horn Presales Engineer +49 152 15197409 Atempo GmbH | Savignystr. 
43, 60325 Frankfurt am Main | POWERFUL DATA PROTECTION AND DATA MANAGEMENT SOLUTIONS ATEMPO.COM [https://storage.letsignit.com/5f759fa4c789fb0015291bef/132331010941249332430851027281332880910_60ace8771d9671e22589e1b3_c3c8c823be5c36e7b46eb6fc22319299.png] -------------- next part -------------- An HTML attachment was scrubbed... URL: From uwe.falke at kit.edu Tue Jun 13 15:46:55 2023 From: uwe.falke at kit.edu (Uwe Falke) Date: Tue, 13 Jun 2023 16:46:55 +0200 Subject: [gpfsug-discuss] CCR database inconsistent Message-ID: <71f405f4-8194-7acb-c564-2d7b308ef5f7@kit.edu> Dear all. it seems we have a (minor?) inconsistency in the CCR database of one of our clusters. A quorum node was set to nonquorum. Be it node A. Another node was designated as quorum node. Be it Node N. The other two quorum nodes, B and C, were left untouched. Then,? node A has been removed (which was not reachable by ssh at that time). Now, on three of the remaining 6 nodes, the removed node A is still contained in the `mmccr dump` output along with nodes B and C while on the other 3 nodes that command lists the three current quorum nodes B, C, and N properly. mmccr lsnodes returns the proper list (B, C , N) on all 6 remaining nodes. Any idea how to get this straight? The nodes still thinking node A is part of the CCR node collective try to connect to the gone node A periodically for mmhealth stuff. Many thanks in advance Uwe -- Karlsruhe Institute of Technology (KIT) Steinbuch Centre for Computing (SCC) Scientific Data Management (SDM) Uwe Falke Hermann-von-Helmholtz-Platz 1, Building 442, Room 187 D-76344 Eggenstein-Leopoldshafen Tel: +49 721 608 28024 Email: uwe.falke at kit.edu www.scc.kit.edu Registered office: Kaiserstra?e 12, 76131 Karlsruhe, Germany KIT ? The Research University in the Helmholtz Association -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5814 bytes Desc: S/MIME Cryptographic Signature URL: From NSCHULD at de.ibm.com Tue Jun 13 17:27:55 2023 From: NSCHULD at de.ibm.com (Norbert Schuld) Date: Tue, 13 Jun 2023 16:27:55 +0000 Subject: [gpfsug-discuss] RHEL7 support In-Reply-To: <2c4ce476-4da4-6c8d-5314-41fe869a3d89@strath.ac.uk> References: <2c4ce476-4da4-6c8d-5314-41fe869a3d89@strath.ac.uk> Message-ID: Hello Jonathan, the plan as of today is to support RHEL 7 until it hits end of "Maintenance 2 support" in End of June next year. (see https://access.redhat.com/support/policy/updates/errata/#Extended_Life_Cycle_Support) Furthermore the next release, which will be EUS, is likely to be the last one to support RHEL 7 and starting next year Storage Scale may require RHEL 8 or newer. Mit freundlichen Gr??en / Kind regards Norbert Schuld Software Engineer, Release Architect IBM Storage Scale IBM Systems / 00M925 Wilhelm-Fay-Str. 32 65936 Frankfurt Phone: +49-160-7070335 E-Mail: nschuld at de.ibm.com IBM Data Privacy Statement IBM Deutschland Research & Development GmbH / Vorsitzender des Aufsichtsrats: Gregor Pillen Gesch?ftsf?hrung: David Faller Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -----Original Message----- From: gpfsug-discuss On Behalf Of Jonathan Buzzard Sent: Sunday, May 21, 2023 10:53 PM To: gpfsug-discuss at spectrumscale.org Subject: [EXTERNAL] [gpfsug-discuss] RHEL7 support I am looking at the tasks for the next year in removing our last RHEL7/CentOS7 installs and am wondering if it is planned for GPFS support to continue to the end of RHEL7 support? 
I have a recollection that GPFS support for RHEL6 stopped before the end of support for RHEL6, and if the same is going to happen with RHEL7 I would help to know for scheduling things over the next 12 months. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From novosirj at rutgers.edu Tue Jun 13 18:09:13 2023 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Tue, 13 Jun 2023 17:09:13 +0000 Subject: [gpfsug-discuss] RHEL7 support In-Reply-To: References: <2c4ce476-4da4-6c8d-5314-41fe869a3d89@strath.ac.uk> Message-ID: If anyone knows what Lenovo is going to do regarding this and gen 1 DSS systems that are back on DSS-G 2.x, I?d be interested to know to plan. I will still have one of those next year theoretically. Sent from my iPhone > On Jun 13, 2023, at 12:32, Norbert Schuld wrote: > > ?Hello Jonathan, > > the plan as of today is to support RHEL 7 until it hits end of "Maintenance 2 support" in End of June next year. > (see https://access.redhat.com/support/policy/updates/errata/#Extended_Life_Cycle_Support) > > Furthermore the next release, which will be EUS, is likely to be the last one to support RHEL 7 and starting next year Storage Scale may require RHEL 8 or newer. > > Mit freundlichen Gr??en / Kind regards > Norbert Schuld > > Software Engineer, Release Architect IBM Storage Scale > IBM Systems / 00M925 > > Wilhelm-Fay-Str. 32 > 65936 Frankfurt > Phone: +49-160-7070335 > E-Mail: nschuld at de.ibm.com > > IBM Data Privacy Statement > IBM Deutschland Research & Development GmbH / Vorsitzender des Aufsichtsrats: Gregor Pillen > Gesch?ftsf?hrung: David Faller > Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > -----Original Message----- > From: gpfsug-discuss On Behalf Of Jonathan Buzzard > Sent: Sunday, May 21, 2023 10:53 PM > To: gpfsug-discuss at spectrumscale.org > Subject: [EXTERNAL] [gpfsug-discuss] RHEL7 support > > > I am looking at the tasks for the next year in removing our last > RHEL7/CentOS7 installs and am wondering if it is planned for GPFS support to continue to the end of RHEL7 support? > > I have a recollection that GPFS support for RHEL6 stopped before the end of support for RHEL6, and if the same is going to happen with RHEL7 I would help to know for scheduling things over the next 12 months. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From NSCHULD at de.ibm.com Tue Jun 13 18:38:06 2023 From: NSCHULD at de.ibm.com (Norbert Schuld) Date: Tue, 13 Jun 2023 17:38:06 +0000 Subject: [gpfsug-discuss] RHEL7 support In-Reply-To: References: <2c4ce476-4da4-6c8d-5314-41fe869a3d89@strath.ac.uk> Message-ID: The statement I made is only for Storage Scale Software, Storage Scale Systems (aka DSS, ESS) will be supported till their EOL with the OS they ship. 
Mit freundlichen Gr??en / Kind regards Norbert Schuld -----Original Message----- From: gpfsug-discuss On Behalf Of Ryan Novosielski Sent: Tuesday, June 13, 2023 7:09 PM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] RHEL7 support If anyone knows what Lenovo is going to do regarding this and gen 1 DSS systems that are back on DSS-G 2.x, I?d be interested to know to plan. I will still have one of those next year theoretically. Sent from my iPhone > On Jun 13, 2023, at 12:32, Norbert Schuld wrote: > > ?Hello Jonathan, > > the plan as of today is to support RHEL 7 until it hits end of "Maintenance 2 support" in End of June next year. > (see > INVALID URI REMOVED > _support_policy_updates_errata_-23Extended-5FLife-5FCycle-5FSupport&d= > DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=i4V0h7L9ElftZNfcuPIXmAHN2jl5TLcuyFLq > tinu4j8&m=HyhwZ7Qwv-wXnpjTqlvY4Z5Hh_dbPZ8K3X0eWxOvYkbQdx893yh137hWgwet > ofrs&s=r4vUjF5GMou3nCm5mBRj8QmCDLtkJo4pQAt7XTPpgXc&e= ) > > Furthermore the next release, which will be EUS, is likely to be the last one to support RHEL 7 and starting next year Storage Scale may require RHEL 8 or newer. > > Mit freundlichen Gr??en / Kind regards Norbert Schuld > > Software Engineer, Release Architect IBM Storage Scale IBM Systems / > 00M925 > > Wilhelm-Fay-Str. 32 > 65936 Frankfurt > Phone: +49-160-7070335 > E-Mail: nschuld at de.ibm.com > > IBM Data Privacy Statement > IBM Deutschland Research & Development GmbH / Vorsitzender des > Aufsichtsrats: Gregor Pillen > Gesch?ftsf?hrung: David Faller > Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht > Stuttgart, HRB 243294 > > > -----Original Message----- > From: gpfsug-discuss On Behalf Of > Jonathan Buzzard > Sent: Sunday, May 21, 2023 10:53 PM > To: gpfsug-discuss at spectrumscale.org > Subject: [EXTERNAL] [gpfsug-discuss] RHEL7 support > > > I am looking at the tasks for the next year in removing our last > RHEL7/CentOS7 installs and am wondering if it is planned for GPFS support to continue to the end of RHEL7 support? > > I have a recollection that GPFS support for RHEL6 stopped before the end of support for RHEL6, and if the same is going to happen with RHEL7 I would help to know for scheduling things over the next 12 months. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > INVALID URI REMOVED > _listinfo_gpfsug-2Ddiscuss-5Fgpfsug.org&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1 > ZOg&r=i4V0h7L9ElftZNfcuPIXmAHN2jl5TLcuyFLqtinu4j8&m=HyhwZ7Qwv-wXnpjTql > vY4Z5Hh_dbPZ8K3X0eWxOvYkbQdx893yh137hWgwetofrs&s=bn_IEdvs_jZFUmPuS0IGT > x8B_1PfdsqbC3gLeyn2U3c&e= > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > INVALID URI REMOVED > _listinfo_gpfsug-2Ddiscuss-5Fgpfsug.org&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1 > ZOg&r=i4V0h7L9ElftZNfcuPIXmAHN2jl5TLcuyFLqtinu4j8&m=HyhwZ7Qwv-wXnpjTql > vY4Z5Hh_dbPZ8K3X0eWxOvYkbQdx893yh137hWgwetofrs&s=bn_IEdvs_jZFUmPuS0IGT > x8B_1PfdsqbC3gLeyn2U3c&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org From chair at gpfsug.org Wed Jun 14 17:19:55 2023 From: chair at gpfsug.org (chair at gpfsug.org) Date: Wed, 14 Jun 2023 17:19:55 +0100 Subject: [gpfsug-discuss] Spectrum Scale User Group, Tuesday 27th June - Wednesday 28th June @ IBM York Road Message-ID: <5e003bf1a018a297d3d50ca033fb4547@gpfsug.org> Hi all, Just a reminder that the next UK User Group meeting will be taking place in London (IBM York Road) on Tuesday 27th and Wednesday 28th June The Agenda is available on the Spectrum Scale website, under events and we have an evening event at Etc Venues, Prospero House If you wish to attend but have not yet registered, please do so. Thanks Paul From chair at gpfsug.org Wed Jun 14 17:21:03 2023 From: chair at gpfsug.org (chair at gpfsug.org) Date: Wed, 14 Jun 2023 17:21:03 +0100 Subject: [gpfsug-discuss] GPFS UK Chair Message-ID: All Hi, I will be stepping down as the Chair after the June event, so if you are interested in taking over and running the group, please let me know ! Regards Paul From rp2927 at gsb.columbia.edu Wed Jun 14 20:40:17 2023 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Wed, 14 Jun 2023 19:40:17 +0000 Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation Message-ID: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> Hello, Does anyone know if GPFS allows the increase in capacity of already existing nsd/disks, in order to add capacity to a storage pool? I need to increase the space on an existing filesystem, and wonder if rather than having to add new nds/disks to the storage pool, I could leverage the ability of the backend block storage array (NetApp E5600) to increase the capacity of existing raid volumes, and propagate the increase in space all the way through nsd/disk/storage pool. Sounds somewhat close to handling thin disks ? yes, not quite the same, but I found no reference in the manuals either way. Many thanks! Razvan Columbia Business School At the Very Center of Business -------------- next part -------------- An HTML attachment was scrubbed... URL: From anacreo at gmail.com Wed Jun 14 21:00:32 2023 From: anacreo at gmail.com (Alec) Date: Wed, 14 Jun 2023 13:00:32 -0700 Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation In-Reply-To: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> References: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> Message-ID: perhaps mmnsddiscover will do it for you, I guess not though... You should be able at the worst case increase the disk size, then do a mmdeldisk and an mmadddisk to reimport each disk at the new size. 
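For anyone following along, that delete / resize / re-add cycle would look roughly like the sketch below. It is only an outline -- the filesystem, NSD, node class and stanza file names are made up, it handles one NSD at a time, and you would want to confirm the pool has enough free space before draining a disk:

    # Illustrative names: filesystem "gpfs01", NSD "nsd_data01"
    mmdeldisk gpfs01 nsd_data01             # migrate data off the disk and drop it from the filesystem
    mmdelnsd nsd_data01                     # remove the NSD definition
    # ...grow the LUN on the storage array and rescan it on the NSD servers...
    mmnsddiscover -a -N nsdNodes            # refresh the NSD servers' view of the disks (class name illustrative)
    mmcrnsd -F nsd_data01.stanza            # recreate the NSD from a stanza file
    mmadddisk gpfs01 -F nsd_data01.stanza   # add it back to the storage pool
    mmrestripefs gpfs01 -b                  # rebalance once all disks have been cycled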
If memory serves though the meta size is based on the size of the first disk that is imported and won't increase, which affects how large your pool can actually grow to. I'm sure there are workarounds to that though. Alec On Wed, Jun 14, 2023, 12:42 PM Popescu, Razvan wrote: > Hello, > > > > Does anyone know if GPFS allows the increase in capacity of already > existing nsd/disks, in order to add capacity to a storage pool? > > > > I need to increase the space on an existing filesystem, and wonder if > rather than having to add new nds/disks to the storage pool, I could > leverage the ability of the backend block storage array (NetApp E5600) to > increase the capacity of existing raid volumes, and propagate the increase > in space all the way through nsd/disk/storage pool. Sounds somewhat close > to handling thin disks ? yes, not quite the same, but I found no reference > in the manuals either way. > > > > Many thanks! > > Razvan > > > > > > *Columbia Business School* > > *At the Very Center of Business* > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Jun 14 20:59:15 2023 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 14 Jun 2023 19:59:15 +0000 Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation In-Reply-To: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> References: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> Message-ID: To my knowledge there isn?t a way today to do this today. Thus you would need to delete an NSD from the FS, delete the NSD, increase the backend disk, recreate the NSD, then add it back to the FS. Cheers, -Bryan From: gpfsug-discuss On Behalf Of Popescu, Razvan Sent: Wednesday, June 14, 2023 2:40 PM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation This message has originated from an EXTERNAL SENDER Hello, Does anyone know if GPFS allows the increase in capacity of already existing nsd/disks, in order to add capacity to a storage pool? I need to increase the space on an existing filesystem, and wonder if rather than having to add new nds/disks to the storage pool, I could leverage the ability of the backend block storage array (NetApp E5600) to increase the capacity of existing raid volumes, and propagate the increase in space all the way through nsd/disk/storage pool. Sounds somewhat close to handling thin disks ? yes, not quite the same, but I found no reference in the manuals either way. Many thanks! Razvan Columbia Business School At the Very Center of Business ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential, or privileged information and/or personal data. If you are not the intended recipient, you are hereby notified that any review, dissemination, or copying of this email is strictly prohibited, and requested to notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request, or solicitation of any kind to buy, sell, subscribe, redeem, or perform any type of transaction of a financial product. Personal data, as defined by applicable data protection and privacy laws, contained in this email may be processed by the Company, and any of its affiliated or related companies, for legal, compliance, and/or business-related purposes. You may have rights regarding your personal data; for information on exercising these rights or the Company?s treatment of personal data, please email datarequests at jumptrading.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed Jun 14 22:53:37 2023 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 14 Jun 2023 21:53:37 +0000 Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation In-Reply-To: References: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> Message-ID: Razvan, We do not support increasing the size of an existing NSD. As mentioned earlier you need to add new NSD, of the new size you require, and then remove the old NSD resize to the new size and then add the new NSD back to the filesystem. When you have added all the new NSD to the filesystem (you need to make sure they are all the same size - or you will end up with performance degradation) then you need to run a mmrestripe accross thr filesystem to rebalance the data and metadata appropriately. Regards, Andrew Beattie Technical Sales Specialist - Storage for Big Data & AI IBM Australia and New Zealand P. +61 421 337 927 E. abeattie at au1.ibm.com Twitter: AndrewJBeattie LinkedIn: ________________________________ From: gpfsug-discuss on behalf of Alec Sent: Thursday, June 15, 2023 6:00:32 AM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] Increase volume size after nsd/disk creation perhaps mmnsddiscover will do it for you, I guess not though.?.?. You should be able at the worst case increase the disk size, then do a mmdeldisk and an mmadddisk to reimport each disk at the new size. If memory serves though the meta size is ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd perhaps mmnsddiscover will do it for you, I guess not though... You should be able at the worst case increase the disk size, then do a mmdeldisk and an mmadddisk to reimport each disk at the new size. If memory serves though the meta size is based on the size of the first disk that is imported and won't increase, which affects how large your pool can actually grow to. I'm sure there are workarounds to that though. Alec On Wed, Jun 14, 2023, 12:42 PM Popescu, Razvan > wrote: Hello, Does anyone know if GPFS allows the increase in capacity of already existing nsd/disks, in order to add capacity to a storage pool? I need to increase the space on an existing filesystem, and wonder if rather than having to add new nds/disks to the storage pool, I could leverage the ability of the backend block storage array (NetApp E5600) to increase the capacity of existing raid volumes, and propagate the increase in space all the way through nsd/disk/storage pool. Sounds somewhat close to handling thin disks ? yes, not quite the same, but I found no reference in the manuals either way. Many thanks! 
Razvan Columbia Business School At the Very Center of Business _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From jjdoherty at yahoo.com Thu Jun 15 00:59:58 2023 From: jjdoherty at yahoo.com (Jim Doherty) Date: Wed, 14 Jun 2023 23:59:58 +0000 (UTC) Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation In-Reply-To: References: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> Message-ID: <1813202200.89070.1686787198502@mail.yahoo.com> Just? create/add another nsd. On Wednesday, June 14, 2023 at 04:03:59 PM EDT, Alec wrote: perhaps mmnsddiscover will do it for you, I guess not though... You should be able at the worst case increase the disk size, then do a mmdeldisk and an mmadddisk to reimport each disk at the new size.? ?If memory serves though the meta size is based on the size of the first disk that is imported and won't increase, which affects how large your pool can actually grow to.? I'm sure there are workarounds to that though. Alec On Wed, Jun 14, 2023, 12:42 PM Popescu, Razvan wrote: Hello, ? Does anyone know if GPFS allows the increase in capacity of already existing nsd/disks, in order to add capacity to a storage pool?? ? I need to increase the space on an existing filesystem, and wonder if rather than having to add new nds/disks to the storage pool, I could leverage the ability of the backend block storage array (NetApp E5600) to increase the capacity of existing raid volumes, and propagate the increase in space all the way through nsd/disk/storage pool.? Sounds somewhat close to handling thin disks ? yes, not quite the same, but I found no reference in the manuals either way. ? Many thanks! Razvan ? ? Columbia Business School At the Very Center of Business ? _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Wed Jun 21 10:48:08 2023 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 21 Jun 2023 09:48:08 +0000 Subject: [gpfsug-discuss] Informal Social Gathering - Monday June 26th, 2023 Message-ID: Greetings, like in previous years we will meet the evening before the UK User Group Meeting for an informal social gathering (=bring your own money): Monday June 26th, 2023 - 6.30pm - 9:00pm The Mulberry Bush, 89 Upper Ground, London https://www.mulberrybushpub.co.uk/ I have booked a table for 20. Please drop a note to me, if you plan to attend. Looking forward to see many of you there! Best, Ulf Ulf Troppens Senior Technical Staff Member Spectrum Scale Development IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Gregor Pillen / Gesch?ftsf?hrung: David Faller Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rp2927 at gsb.columbia.edu Thu Jun 22 18:37:25 2023 From: rp2927 at gsb.columbia.edu (Popescu, Razvan) Date: Thu, 22 Jun 2023 17:37:25 +0000 Subject: [gpfsug-discuss] Increase volume size after nsd/disk creation In-Reply-To: References: <95C630BA-E78D-45CC-8F45-76CEB7BF8AD2@contoso.com> Message-ID: Thank you all for your kind advice. -- Razvan N. Popescu Research Computing Director Office: (212) 851-9298 razvan.popescu at columbia.edu Columbia Business School At the Very Center of Business From: gpfsug-discuss on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, June 14, 2023 at 5:57 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Increase volume size after nsd/disk creation Razvan, We do not support increasing the size of an existing NSD. As mentioned earlier you need to add new NSD, of the new size you require, and then remove the old NSD resize to the new size and then add the new NSD back to the filesystem. When you have added all the new NSD to the filesystem (you need to make sure they are all the same size - or you will end up with performance degradation) then you need to run a mmrestripe accross thr filesystem to rebalance the data and metadata appropriately. Regards, Andrew Beattie Technical Sales Specialist - Storage for Big Data & AI IBM Australia and New Zealand P. +61 421 337 927 E. abeattie at au1.ibm.com Twitter: AndrewJBeattie LinkedIn: ________________________________ From: gpfsug-discuss on behalf of Alec Sent: Thursday, June 15, 2023 6:00:32 AM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] Increase volume size after nsd/disk creation perhaps mmnsddiscover will do it for you, I guess not though.?.?. You should be able at the worst case increase the disk size, then do a mmdeldisk and an mmadddisk to reimport each disk at the new size. If memory serves though the meta size is ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd perhaps mmnsddiscover will do it for you, I guess not though... You should be able at the worst case increase the disk size, then do a mmdeldisk and an mmadddisk to reimport each disk at the new size. If memory serves though the meta size is based on the size of the first disk that is imported and won't increase, which affects how large your pool can actually grow to. I'm sure there are workarounds to that though. Alec On Wed, Jun 14, 2023, 12:42 PM Popescu, Razvan > wrote: Hello, Does anyone know if GPFS allows the increase in capacity of already existing nsd/disks, in order to add capacity to a storage pool? I need to increase the space on an existing filesystem, and wonder if rather than having to add new nds/disks to the storage pool, I could leverage the ability of the backend block storage array (NetApp E5600) to increase the capacity of existing raid volumes, and propagate the increase in space all the way through nsd/disk/storage pool. Sounds somewhat close to handling thin disks ? yes, not quite the same, but I found no reference in the manuals either way. Many thanks! Razvan Columbia Business School At the Very Center of Business _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prasad.surampudi at theatsgroup.com Fri Jun 23 21:59:12 2023 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Fri, 23 Jun 2023 20:59:12 +0000 Subject: [gpfsug-discuss] AFM-DR Control file error Message-ID: We are trying to configure AFM-DR to convert exising filesets to Primary and Secondary. But the fileset state show up as Unmounted with error message ?Cant find control file?. Anyone seen this message before? We are running 5.1.6.1 version. [Logo Description automatically generated] Prasad Surampudi | Sr. Systems Engineer prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 6597 bytes Desc: image001.png URL: From scale at us.ibm.com Fri Jun 23 22:32:11 2023 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 23 Jun 2023 21:32:11 +0000 Subject: [gpfsug-discuss] AFM-DR Control file error In-Reply-To: References: Message-ID: If you have the commands you have executed to convert the filesets that would be helpful, along with their output. And is this Scale 5.1.6.1 at both the AFM-DR primary and secondary clusters? From: gpfsug-discuss on behalf of Prasad Surampudi Date: Friday, June 23, 2023 at 5:02 PM To: gpfsug-discuss at gpfsug.org Subject: [EXTERNAL] [gpfsug-discuss] AFM-DR Control file error We are trying to configure AFM-DR to convert exising filesets to Primary and Secondary. But the fileset state show up as Unmounted with error message ?Cant find control file?. Anyone seen this message before? We are running 5.?1.?6.?1 version.? ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ? ZjQcmQRYFpfptBannerEnd We are trying to configure AFM-DR to convert exising filesets to Primary and Secondary. But the fileset state show up as Unmounted with error message ?Cant find control file?. Anyone seen this message before? We are running 5.1.6.1 version. [Logo Description automatically generated] Prasad Surampudi | Sr. Systems Engineer prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 6597 bytes Desc: image001.png URL: From chair at gpfsug.org Mon Jun 26 13:37:23 2023 From: chair at gpfsug.org (chair at gpfsug.org) Date: Mon, 26 Jun 2023 13:37:23 +0100 Subject: [gpfsug-discuss] GPFS UK Meeting Tuesday 27th June - Wednesday 28th June 2023 Message-ID: Dear All, If you attending our meeting commencing tomorrow, please bring Photo ID as this will be required when you checkin at the IBM Front Desk. Just mention you are part of the User Group meeting. See you all tomorrow Regards Paul From leonardo.sala at psi.ch Tue Jun 27 15:17:41 2023 From: leonardo.sala at psi.ch (Leonardo Sala) Date: Tue, 27 Jun 2023 16:17:41 +0200 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Message-ID: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. 
To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such a default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesystem_Id parameter from gpfs.ganesha.exports.conf? * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, e.g. using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From juergen.hannappel at desy.de Tue Jun 27 17:06:58 2023 From: juergen.hannappel at desy.de (Hannappel, Juergen) Date: Tue, 27 Jun 2023 18:06:58 +0200 (CEST) Subject: [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like Message-ID: <1925528444.11292003.1687882018833.JavaMail.zimbra@desy.de> Moin, when e.g. doing mmchfirmware there is a cluster-wide lock preventing me from running mmchfirmware on several building blocks at once, while I would assume that only within one building block a lock is needed. Why is that so? Can that be changed in a future release? Also, some apparently cluster-wide locks create false alarms when a check of the recovery group status on one building block is blocked by actions on another one... -- Dr. Jürgen Hannappel DESY/IT Tel. : +49 40 8998-4616 From chair at gpfsug.org Tue Jun 27 17:19:37 2023 From: chair at gpfsug.org (Spectrum Scale UG) Date: Tue, 27 Jun 2023 17:19:37 +0100 Subject: [gpfsug-discuss] Address for evening event Message-ID: <5BA096D2-B963-DF4D-BCD1-5CC2B75F97B8@hxcore.ol> An HTML attachment was scrubbed... URL: From luis.bolinches at fi.ibm.com Tue Jun 27 17:26:39 2023 From: luis.bolinches at fi.ibm.com (Luis Bolinches) Date: Tue, 27 Jun 2023 16:26:39 +0000 Subject: [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like In-Reply-To: <1925528444.11292003.1687882018833.JavaMail.zimbra@desy.de> References: <1925528444.11292003.1687882018833.JavaMail.zimbra@desy.de> Message-ID: Hi, if you are doing it offline (which is the way to go for bigger setups) and pass a node class or a comma-separated list of nodes, it is done in parallel on all nodes.
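For reference, that kind of invocation looks roughly like the sketch below. The node class name is made up and the exact option set depends on the ESS/DSS-G level, so check the mmchfirmware man page for your release before copying anything:

    # "bb1_io_nodes" is an illustrative node class holding the I/O nodes of one building block
    mmchfirmware --type storage-enclosure -N bb1_io_nodes
    mmchfirmware --type drive -N bb1_io_nodes
    # (offline drive updates need GPFS down on those nodes; see the man page for the offline options)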
For your request I think there is a RFE (not sure public or not) already created, but I don?t disagree would be nice improvement to lock at the single BB -- Yst?v?llisin terveisin/Regards/Saludos/Salutations/Salutacions Luis Bolinches Executive IT Specialist IBM Storage Scale development Phone: +358503112585 Ab IBM Finland Oy Toinen linja 7 00530 Helsinki Uusimaa - Finland Visitors entrance: Siltasaarenkatu 22 "If you always give you will always have" -- Anonymous https://www.credly.com/users/luis-bolinches/badges -----Original Message----- From: gpfsug-discuss On Behalf Of Hannappel, Juergen Sent: Tuesday, 27 June 2023 19.07 To: gpfsug main discussion list Subject: [EXTERNAL] [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like Moin, when e.g doing mmchfirmware there is a cluster-wide lock preventing me from running mmchfirmware on several building blocks at once, while I would assume that only within one building block a lock is needed. Why is that so? Can that be changed in a future release? Also some apparently cluster wide locks create false alarms when checking for the recovery group status on one building block is blocked by some actions on another one... -- Dr. J?rgen Hannappel DESY/IT Tel. : +49 40 8998-4616 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org Unless otherwise stated above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland From luis.bolinches at fi.ibm.com Tue Jun 27 17:26:39 2023 From: luis.bolinches at fi.ibm.com (Luis Bolinches) Date: Tue, 27 Jun 2023 16:26:39 +0000 Subject: [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like In-Reply-To: <1925528444.11292003.1687882018833.JavaMail.zimbra@desy.de> References: <1925528444.11292003.1687882018833.JavaMail.zimbra@desy.de> Message-ID: Hi If you are doing it offline (which for bigger setups) and pass the class or CSV of nodes, it is done in parallel in all nodes. For your request I think there is a RFE (not sure public or not) already created, but I don?t disagree would be nice improvement to lock at the single BB -- Yst?v?llisin terveisin/Regards/Saludos/Salutations/Salutacions Luis Bolinches Executive IT Specialist IBM Storage Scale development Phone: +358503112585 Ab IBM Finland Oy Toinen linja 7 00530 Helsinki Uusimaa - Finland Visitors entrance: Siltasaarenkatu 22 "If you always give you will always have" -- Anonymous https://www.credly.com/users/luis-bolinches/badges -----Original Message----- From: gpfsug-discuss On Behalf Of Hannappel, Juergen Sent: Tuesday, 27 June 2023 19.07 To: gpfsug main discussion list Subject: [EXTERNAL] [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like Moin, when e.g doing mmchfirmware there is a cluster-wide lock preventing me from running mmchfirmware on several building blocks at once, while I would assume that only within one building block a lock is needed. Why is that so? Can that be changed in a future release? Also some apparently cluster wide locks create false alarms when checking for the recovery group status on one building block is blocked by some actions on another one... -- Dr. J?rgen Hannappel DESY/IT Tel. 
: +49 40 8998-4616 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org Unless otherwise stated above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland From ewahl at osc.edu Tue Jun 27 18:41:28 2023 From: ewahl at osc.edu (Wahl, Edward) Date: Tue, 27 Jun 2023 17:41:28 +0000 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> Message-ID: I vaguely recall seeing this and testing it. My notes to myself say: "As long as the export_id is unique, you are fine." See the manuals, ganesha loves Camel Case so it's more than likely actually "Export_Id" or some such.
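If it helps while checking this, the export IDs and any Filesystem_Id settings are easy to inspect on a protocol node. A rough sketch -- the config path below is where recent CES levels keep the generated Ganesha config locally, so adjust it to your release:

    # List the NFS exports known to CES
    mmnfs export list

    # Inspect the generated export stanzas for Export_Id / Filesystem_Id / Path
    grep -E 'Export_Id|Filesystem_Id|Path' /var/mmfs/ces/nfs-config/gpfs.ganesha.exports.conf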
Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf? * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From leonardo.sala at psi.ch Wed Jun 28 07:53:35 2023 From: leonardo.sala at psi.ch (Leonardo Sala) Date: Wed, 28 Jun 2023 08:53:35 +0200 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> Message-ID: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: > > I vaguely recall seeing this and testing it.? My notes to myself say: > ?As long as the export_id is unique, you are fine.??? 
See the manuals, > ganesha loves Camel Case so it?s more than likely actually ?Export_Id? > or some such. > > Ed Wahl > > Ohio Supercomputer Center > > *From:*gpfsug-discuss *On Behalf > Of *Leonardo Sala > *Sent:* Tuesday, June 27, 2023 10:18 AM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* [gpfsug-discuss] CES, Ganesha, and Filesystem_id > > Hallo, we are checking our current CES configuration, and we noticed > that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter > which Export_Id value the export has. To my understanding (which is > poor!), this means that all clients > > Hallo, > > we are checking our current CES configuration, and we noticed that by > default GPFS puts always Filesystem_Id=666.666 [*], no matter which > Export_Id value the export has. To my understanding (which is poor!), > this means that all clients will see all our exports (~20) with the > same device number, creating various possible issues (e.g. file state > handles). Questions: > > * is there a reason for such default value? If we change it, are there > unpleasant effects we could see? > > * what would be a reasonable value? Looking around I saw that > Filesystem_Id = Export_Id.Export_Id is quite common, with the possible > issue of using the forbidden 152.152 [**] > > * what happens if we actually remove the Filesytem_Id parameter from > gpfs.ganesha.exports.conf? > > * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf > without editing the file, eg using mmnfs commands (seems not, but I > might be mistaken)? > > Thanks a lot! > > cheers > > leo > > [*] > https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs > > > [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 > > > -- > Paul Scherrer Institut > Dr. Leonardo Sala > Group Leader Data Analysis and Research Infrastructure > Deputy Department Head a.i Science IT Infrastructure and Services department > Science IT Infrastructure and Services department (AWI) > WHGA/036 > Forschungstrasse 111 > 5232 Villigen PSI > Switzerland > > Phone: +41 56 310 3369 > leonardo.sala at psi.ch > www.psi.ch > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From leonardo.sala at psi.ch Wed Jun 28 07:53:35 2023 From: leonardo.sala at psi.ch (Leonardo Sala) Date: Wed, 28 Jun 2023 08:53:35 +0200 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> Message-ID: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. 
Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: > > I vaguely recall seeing this and testing it.? My notes to myself say: > ?As long as the export_id is unique, you are fine.??? See the manuals, > ganesha loves Camel Case so it?s more than likely actually ?Export_Id? > or some such. > > Ed Wahl > > Ohio Supercomputer Center > > *From:*gpfsug-discuss *On Behalf > Of *Leonardo Sala > *Sent:* Tuesday, June 27, 2023 10:18 AM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* [gpfsug-discuss] CES, Ganesha, and Filesystem_id > > Hallo, we are checking our current CES configuration, and we noticed > that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter > which Export_Id value the export has. To my understanding (which is > poor!), this means that all clients > > Hallo, > > we are checking our current CES configuration, and we noticed that by > default GPFS puts always Filesystem_Id=666.666 [*], no matter which > Export_Id value the export has. To my understanding (which is poor!), > this means that all clients will see all our exports (~20) with the > same device number, creating various possible issues (e.g. file state > handles). Questions: > > * is there a reason for such default value? If we change it, are there > unpleasant effects we could see? > > * what would be a reasonable value? Looking around I saw that > Filesystem_Id = Export_Id.Export_Id is quite common, with the possible > issue of using the forbidden 152.152 [**] > > * what happens if we actually remove the Filesytem_Id parameter from > gpfs.ganesha.exports.conf? > > * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf > without editing the file, eg using mmnfs commands (seems not, but I > might be mistaken)? > > Thanks a lot! > > cheers > > leo > > [*] > https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs > > > [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 > > > -- > Paul Scherrer Institut > Dr. Leonardo Sala > Group Leader Data Analysis and Research Infrastructure > Deputy Department Head a.i Science IT Infrastructure and Services department > Science IT Infrastructure and Services department (AWI) > WHGA/036 > Forschungstrasse 111 > 5232 Villigen PSI > Switzerland > > Phone: +41 56 310 3369 > leonardo.sala at psi.ch > www.psi.ch > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Wed Jun 28 13:22:26 2023 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Wed, 28 Jun 2023 12:22:26 +0000 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> Message-ID: <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> The "FileSystem_Id" is a unique identifier for the file system. The technical background is that Ganesha asks the file system for a file handle, but that is only unique within the file system. 
If there are NFS exports on different file systems, there needs to be a way to make the file handles unique across multiple file systems. So if there are NFS exports on different file systems, this parameter should be set with a unique value for each file system. If there is only one file system with NFS exports, then this should not be necessary. Regards, Christof On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: I vaguely recall seeing this and testing it. My notes to myself say: ?As long as the export_id is unique, you are fine.? See the manuals, ganesha loves Camel Case so it?s more than likely actually ?Export_Id? or some such. Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf? 
* is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Wed Jun 28 13:22:26 2023 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Wed, 28 Jun 2023 12:22:26 +0000 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> Message-ID: <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> The "FileSystem_Id" is a unique identifier for the file system. The technical background is that Ganesha asks the file system for a file handle, but that is only unique within the file system. If there are NFS exports on different file systems, there needs to be a way to make the file handles unique across multiple file systems. So if there are NFS exports on different file systems, this parameter should be set with a unique value for each file system. If there is only one file system with NFS exports, then this should not be necessary. Regards, Christof On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. 
Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: I vaguely recall seeing this and testing it. My notes to myself say: ?As long as the export_id is unique, you are fine.? See the manuals, ganesha loves Camel Case so it?s more than likely actually ?Export_Id? or some such. Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf? * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From leonardo.sala at psi.ch Wed Jun 28 13:33:00 2023 From: leonardo.sala at psi.ch (Leonardo Sala) Date: Wed, 28 Jun 2023 14:33:00 +0200 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> Message-ID: <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch> Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? 
What would happen in case we just remove the Fileset_id parameter, as suggested by the ganesha docs? Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 14:22, Christof Schmitt wrote: > The "FileSystem_Id" is a unique identifier for the file system. The > technical background is that Ganesha asks the file system for a file > handle, but that is only unique within the file system. If there are > NFS exports on different file systems, there needs to be a way to make > the file handles unique across multiple file systems. So if there are > NFS exports on different file systems, this parameter should be set > with a unique value for each file system. If there is only one file > system with NFS exports, then this should not be necessary. > > Regards, > > Christof > > On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: >> Hi Ed, thanks! In our case we do have unique export ids, but the same >> fsid, and this seems to create issues. Also, reading Ganesha docs, I >> can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config >> option, FileSystem_ID. This really >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> Report?Suspicious >> >> ZjQcmQRYFpfptBannerEnd >> >> Hi Ed, >> >> thanks! In our case we do have unique export ids, but the same fsid, >> and this seems to create issues. Also, reading Ganesha docs, I can >> see [*]: >> >> >> FileSystem_ID EXPORT Option >> >> There is an EXPORT config option, FileSystem_ID. This really should >> not be used, all it does it designate an fsid to be used with the >> attributes of all objects in the export. It will be folded to fit >> into NFSv3. Because it applies to the entire export, it prevents >> exporting multiple file systems since there will likely be issues >> with collision of inode numbers on the client. >> >> so before touching the defaults in GPFS CES configuration I would >> like some guidance or experiences from this mlist :) >> >> cheers >> >> leo >> >> [*] >> https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option >> >> >> Paul Scherrer Institut >> Dr. Leonardo Sala >> Group Leader Data Analysis and Research Infrastructure >> Deputy Department Head a.i Science IT Infrastructure and Services department >> Science IT Infrastructure and Services department (AWI) >> WHGA/036 >> Forschungstrasse 111 >> 5232 Villigen PSI >> Switzerland >> Phone: +41 56 310 3369 >> leonardo.sala at psi.ch >> www.psi.ch >> On 6/27/23 19:41, Wahl, Edward wrote: >>> >>> I vaguely recall seeing this and testing it.? My notes to myself >>> say: ?As long as the export_id is unique, you are fine.??? See the >>> manuals, ganesha loves Camel Case so it?s more than likely actually >>> ?Export_Id? or some such. 
>>>
>>> Ed Wahl
>>>
>>> Ohio Supercomputer Center
>>>
>>> From: gpfsug-discuss On Behalf Of Leonardo Sala
>>> Sent: Tuesday, June 27, 2023 10:18 AM
>>> To: gpfsug-discuss at spectrumscale.org
>>> Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id
>>>
>>> Hallo,
>>>
>>> we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions:
>>>
>>> * is there a reason for such default value? If we change it, are there unpleasant effects we could see?
>>>
>>> * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**]
>>>
>>> * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf?
>>>
>>> * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)?
>>>
>>> Thanks a lot!
>>>
>>> cheers
>>>
>>> leo
>>>
>>> [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs
>>> [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615
>>>
>>> --
>>> Paul Scherrer Institut
>>> Dr. Leonardo Sala
>>> Group Leader Data Analysis and Research Infrastructure
>>> Deputy Department Head a.i Science IT Infrastructure and Services department
>>> Science IT Infrastructure and Services department (AWI)
>>> WHGA/036
>>> Forschungstrasse 111
>>> 5232 Villigen PSI
>>> Switzerland
>>>
>>> Phone: +41 56 310 3369
>>> leonardo.sala at psi.ch
>>> www.psi.ch
>>>
>>> _______________________________________________
>>> gpfsug-discuss mailing list
>>> gpfsug-discuss at gpfsug.org
>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at gpfsug.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From christof.schmitt at us.ibm.com  Wed Jun 28 14:22:13 2023
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Wed, 28 Jun 2023 13:22:13 +0000
Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id
In-Reply-To: <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch>
References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch>
Message-ID: <46f2c6856498222769ed523483a14c04ffb6a858.camel@us.ibm.com>

After another discussion it turns out that this parameter is not required. While my previous comment is correct, that there is the need to have unique handles across file systems, GPFS already provides that information and Ganesha handles that correctly. So there is no need to set the parameter in the Ganesha config.

Regards,

Christof

On Wed, 2023-06-28 at 14:33 +0200, Leonardo Sala wrote:
Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset?
What would happen in case we just remove the Fileset_id parameter, as suggested by the ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? What would happen in case we just remove the Fileset_id parameter, as suggested by the ganesha docs? Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 14:22, Christof Schmitt wrote: The "FileSystem_Id" is a unique identifier for the file system. The technical background is that Ganesha asks the file system for a file handle, but that is only unique within the file system. If there are NFS exports on different file systems, there needs to be a way to make the file handles unique across multiple file systems. So if there are NFS exports on different file systems, this parameter should be set with a unique value for each file system. If there is only one file system with NFS exports, then this should not be necessary. Regards, Christof On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: I vaguely recall seeing this and testing it. My notes to myself say: ?As long as the export_id is unique, you are fine.? See the manuals, ganesha loves Camel Case so it?s more than likely actually ?Export_Id? or some such. 
Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf? * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From leonardo.sala at psi.ch Wed Jun 28 14:49:54 2023 From: leonardo.sala at psi.ch (Leonardo Sala) Date: Wed, 28 Jun 2023 15:49:54 +0200 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: <46f2c6856498222769ed523483a14c04ffb6a858.camel@us.ibm.com> References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch> <46f2c6856498222769ed523483a14c04ffb6a858.camel@us.ibm.com> Message-ID: Hi Christof, thanks! So the preferred way should be not to have Filesystem_Id. If I have understood correctly, this is anyhow set up by default in CES to 666.666, so the suggested procedure during the CES setup should be to manually modify gpfs.ganesha.exports.conf and remove this parameter from all the exports, is that correct? Is there an easier way, or is there a plan to remove the 666.666 default value? 
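(For orientation, the stanzas under discussion in gpfs.ganesha.exports.conf follow the usual Ganesha EXPORT syntax. The sketch below is illustrative only: Export_Id, Path, Pseudo and the client range are made-up placeholders; only the Filesystem_Id = 666.666 line reflects the CES default questioned in this thread.)

    EXPORT {
        Export_Id = 20;                     # unique per export (placeholder value)
        Path = "/gpfs/fs1/fileset01";       # exported path (placeholder)
        Pseudo = "/gpfs/fs1/fileset01";
        Filesystem_Id = 666.666;            # the CES default being discussed
        FSAL {
            Name = GPFS;
        }
        CLIENT {
            Clients = 10.0.0.0/24;          # placeholder client range
            Access_Type = RW;
            Squash = Root_Squash;
        }
    }

Dropping the Filesystem_Id line, or giving each file system its own value, is exactly the change being weighed in the rest of the thread.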
In our case, we do rely on two separated CES clusters (one in prod and one in stand by, so that we can perform upgrades with no downtime by migrating the IPs from one to the other), so it might be safer to explicilty set Filesystem_Id, to ensure consistency among the cluster - would that make sense? Thanks again! Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 15:22, Christof Schmitt wrote: > After another discussion it turns out that this parameter is not > required. While my previous comment is correct, that there is the need > to have unique handles across file systems, GPFS already provides that > information and Ganesha handles that correctly. So there is no need to > set the parameter in the Ganesha config. > Regards, > > Christof > > On Wed, 2023-06-28 at 14:33 +0200, Leonardo Sala wrote: >> Hi Christof, thanks a lot! In our case we are exporting multiple >> filesets from 2 filesystems, I guess we should fix unique Fileset_IDs >> for each fileset? What would happen in case we just remove the >> Fileset_id parameter, as suggested by the >> ZjQcmQRYFpfptBannerStart >> This Message Is From an External Sender >> This message came from outside your organization. >> Report?Suspicious >> >> ZjQcmQRYFpfptBannerEnd >> >> Hi Christof, >> >> thanks a lot! In our case we are exporting multiple filesets from 2 >> filesystems, I guess we should fix unique Fileset_IDs for each >> fileset? What would happen in case we just remove the Fileset_id >> parameter, as suggested by the ganesha docs? >> >> Regards >> >> leo >> >> Paul Scherrer Institut >> Dr. Leonardo Sala >> Group Leader Data Analysis and Research Infrastructure >> Deputy Department Head a.i Science IT Infrastructure and Services department >> Science IT Infrastructure and Services department (AWI) >> WHGA/036 >> Forschungstrasse 111 >> 5232 Villigen PSI >> Switzerland >> Phone: +41 56 310 3369 >> leonardo.sala at psi.ch >> www.psi.ch >> On 6/28/23 14:22, Christof Schmitt wrote: >>> The "FileSystem_Id" is a unique identifier for the file system. The >>> technical background is that Ganesha asks the file system for a file >>> handle, but that is only unique within the file system. If there are >>> NFS exports on different file systems, there needs to be a way to >>> make the file handles unique across multiple file systems. So if >>> there are NFS exports on different file systems, this parameter >>> should be set with a unique value for each file system. If there is >>> only one file system with NFS exports, then this should not be >>> necessary. >>> >>> Regards, >>> >>> Christof >>> >>> On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: >>>> Hi Ed, thanks! In our case we do have unique export ids, but the >>>> same fsid, and this seems to create issues. Also, reading Ganesha >>>> docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT >>>> config option, FileSystem_ID. This really >>>> ZjQcmQRYFpfptBannerStart >>>> This Message Is From an External Sender >>>> This message came from outside your organization. >>>> Report?Suspicious >>>> >>>> ZjQcmQRYFpfptBannerEnd >>>> >>>> Hi Ed, >>>> >>>> thanks! In our case we do have unique export ids, but the same >>>> fsid, and this seems to create issues. 
Also, reading Ganesha docs, >>>> I can see [*]: >>>> >>>> >>>> FileSystem_ID EXPORT Option >>>> >>>> There is an EXPORT config option, FileSystem_ID. This really should >>>> not be used, all it does it designate an fsid to be used with the >>>> attributes of all objects in the export. It will be folded to fit >>>> into NFSv3. Because it applies to the entire export, it prevents >>>> exporting multiple file systems since there will likely be issues >>>> with collision of inode numbers on the client. >>>> >>>> so before touching the defaults in GPFS CES configuration I would >>>> like some guidance or experiences from this mlist :) >>>> >>>> cheers >>>> >>>> leo >>>> >>>> [*] >>>> https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option >>>> >>>> >>>> Paul Scherrer Institut >>>> Dr. Leonardo Sala >>>> Group Leader Data Analysis and Research Infrastructure >>>> Deputy Department Head a.i Science IT Infrastructure and Services department >>>> Science IT Infrastructure and Services department (AWI) >>>> WHGA/036 >>>> Forschungstrasse 111 >>>> 5232 Villigen PSI >>>> Switzerland >>>> Phone: +41 56 310 3369 >>>> leonardo.sala at psi.ch >>>> www.psi.ch >>>> On 6/27/23 19:41, Wahl, Edward wrote: >>>>> >>>>> I vaguely recall seeing this and testing it.? My notes to myself >>>>> say: ?As long as the export_id is unique, you are fine.??? See the >>>>> manuals, ganesha loves Camel Case so it?s more than likely >>>>> actually ?Export_Id? or some such. >>>>> >>>>> Ed Wahl >>>>> >>>>> Ohio Supercomputer Center >>>>> >>>>> *From:*gpfsug-discuss *On >>>>> Behalf Of *Leonardo Sala >>>>> *Sent:* Tuesday, June 27, 2023 10:18 AM >>>>> *To:* gpfsug-discuss at spectrumscale.org >>>>> *Subject:* [gpfsug-discuss] CES, Ganesha, and Filesystem_id >>>>> >>>>> Hallo, we are checking our current CES configuration, and we >>>>> noticed that by default GPFS puts always Filesystem_Id=666.?666 >>>>> [*], no matter which Export_Id value the export has. To my >>>>> understanding (which is poor!), this means that all clients >>>>> >>>>> Hallo, >>>>> >>>>> we are checking our current CES configuration, and we noticed that >>>>> by default GPFS puts always Filesystem_Id=666.666 [*], no matter >>>>> which Export_Id value the export has. To my understanding (which >>>>> is poor!), this means that all clients will see all our exports >>>>> (~20) with the same device number, creating various possible >>>>> issues (e.g. file state handles). Questions: >>>>> >>>>> * is there a reason for such default value? If we change it, are >>>>> there unpleasant effects we could see? >>>>> >>>>> * what would be a reasonable value? Looking around I saw that >>>>> Filesystem_Id = Export_Id.Export_Id is quite common, with the >>>>> possible issue of using the forbidden 152.152 [**] >>>>> >>>>> * what happens if we actually remove the Filesytem_Id parameter >>>>> from gpfs.ganesha.exports.conf? >>>>> >>>>> * is there a way to modify Filesystem_Id in >>>>> gpfs.ganesha.exports.conf without editing the file, eg using mmnfs >>>>> commands (seems not, but I might be mistaken)? >>>>> >>>>> Thanks a lot! >>>>> >>>>> cheers >>>>> >>>>> leo >>>>> >>>>> [*] >>>>> https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs >>>>> >>>>> [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 >>>>> >>>>> -- >>>>> Paul Scherrer Institut >>>>> Dr. 
Leonardo Sala
>>>>> Group Leader Data Analysis and Research Infrastructure
>>>>> Deputy Department Head a.i Science IT Infrastructure and Services department
>>>>> Science IT Infrastructure and Services department (AWI)
>>>>> WHGA/036
>>>>> Forschungstrasse 111
>>>>> 5232 Villigen PSI
>>>>> Switzerland
>>>>>
>>>>> Phone: +41 56 310 3369
>>>>> leonardo.sala at psi.ch
>>>>> www.psi.ch
>>>>>
>>>>> _______________________________________________
>>>>> gpfsug-discuss mailing list
>>>>> gpfsug-discuss at gpfsug.org
>>>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>>>> _______________________________________________
>>>> gpfsug-discuss mailing list
>>>> gpfsug-discuss at gpfsug.org
>>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>>>
>>> _______________________________________________
>>> gpfsug-discuss mailing list
>>> gpfsug-discuss at gpfsug.org
>>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From christof.schmitt at us.ibm.com  Wed Jun 28 15:20:41 2023
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Wed, 28 Jun 2023 14:20:41 +0000
Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id
In-Reply-To:
References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch> <46f2c6856498222769ed523483a14c04ffb6a858.camel@us.ibm.com>
Message-ID: <8b12725800afff70b8d0ce76c1f313fb75e480d9.camel@us.ibm.com>

Is the parameter set in your config? If so, then yes, remove it.

Regards,

Christof

On Wed, 2023-06-28 at 15:49 +0200, Leonardo Sala wrote:
Hi Christof,

thanks! So the preferred way should be not to have Filesystem_Id. If I have understood correctly, this is anyhow set up by default in CES to 666.666, so the suggested procedure during the CES setup should be to manually modify gpfs.ganesha.exports.conf and remove this parameter from all the exports, is that correct? Is there an easier way, or is there a plan to remove the 666.666 default value?

In our case, we do rely on two separated CES clusters (one in prod and one in stand by, so that we can perform upgrades with no downtime by migrating the IPs from one to the other), so it might be safer to explicitly set Filesystem_Id, to ensure consistency among the clusters - would that make sense?

Thanks again!

Regards

leo

Paul Scherrer Institut
Dr. Leonardo Sala
Group Leader Data Analysis and Research Infrastructure
Deputy Department Head a.i Science IT Infrastructure and Services department
Science IT Infrastructure and Services department (AWI)
WHGA/036
Forschungstrasse 111
5232 Villigen PSI
Switzerland
Phone: +41 56 310 3369
leonardo.sala at psi.ch
www.psi.ch

On 6/28/23 15:22, Christof Schmitt wrote:
After another discussion it turns out that this parameter is not required. While my previous comment is correct, that there is the need to have unique handles across file systems, GPFS already provides that information and Ganesha handles that correctly. So there is no need to set the parameter in the Ganesha config.

Regards,

Christof

On Wed, 2023-06-28 at 14:33 +0200, Leonardo Sala wrote:
Hi Christof, thanks a lot!
In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? What would happen in case we just remove the Fileset_id parameter, as suggested by the ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? What would happen in case we just remove the Fileset_id parameter, as suggested by the ganesha docs? Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 14:22, Christof Schmitt wrote: The "FileSystem_Id" is a unique identifier for the file system. The technical background is that Ganesha asks the file system for a file handle, but that is only unique within the file system. If there are NFS exports on different file systems, there needs to be a way to make the file handles unique across multiple file systems. So if there are NFS exports on different file systems, this parameter should be set with a unique value for each file system. If there is only one file system with NFS exports, then this should not be necessary. Regards, Christof On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: I vaguely recall seeing this and testing it. My notes to myself say: ?As long as the export_id is unique, you are fine.? See the manuals, ganesha loves Camel Case so it?s more than likely actually ?Export_Id? or some such. 
Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.?666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf? * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Wed Jun 28 15:48:03 2023 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Wed, 28 Jun 2023 14:48:03 +0000 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: <8b12725800afff70b8d0ce76c1f313fb75e480d9.camel@us.ibm.com> References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch> <46f2c6856498222769ed523483a14c04ffb6a858.camel@us.ibm.com> <8b12725800afff70b8d0ce76c1f313fb75e480d9.camel@us.ibm.com> Message-ID: <6a4b6e6e05ee71c6393320117c0998121330f606.camel@us.ibm.com> Apologies for the back and forth. Please keep the parameter for now. It should be ok for most cases. If there is a problem, please open a support ticket for further debugging. Regards, Christof On Wed, 2023-06-28 at 14:20 +0000, Christof Schmitt wrote: Is the parameter set in your config? If so, then yes, remove it. Regards, Christof On Wed, 2023-06-28 at 15:?49 +0200, Leonardo Sala wrote: Hi Christof, thanks! 
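(Side note on how the shared fsid shows up in practice: on an NFS client with two different exports mounted, comparing the device number that stat reports for each mount is one way to see the symptom described earlier in the thread, i.e. the same device number for different exports. The mount points below are invented for illustration.)

    # on an NFS client, with two CES exports mounted (paths are placeholders)
    stat -c '%d  %n' /mnt/exportA /mnt/exportB
    # identical device numbers for different exports indicate they share one fsid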
So the preferred way should be not to have Filesystem_Id. If I have understood correctly, ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Is the parameter set in your config? If so, then yes, remove it. Regards, Christof On Wed, 2023-06-28 at 15:49 +0200, Leonardo Sala wrote: Hi Christof, thanks! So the preferred way should be not to have Filesystem_Id. If I have understood correctly, this is anyhow set up by default in CES to 666.?666, so the suggested procedure during the CES setup should be to manually modify ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Christof, thanks! So the preferred way should be not to have Filesystem_Id. If I have understood correctly, this is anyhow set up by default in CES to 666.666, so the suggested procedure during the CES setup should be to manually modify gpfs.ganesha.exports.conf and remove this parameter from all the exports, is that correct? Is there an easier way, or is there a plan to remove the 666.666 default value? In our case, we do rely on two separated CES clusters (one in prod and one in stand by, so that we can perform upgrades with no downtime by migrating the IPs from one to the other), so it might be safer to explicilty set Filesystem_Id, to ensure consistency among the cluster - would that make sense? Thanks again! Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 15:22, Christof Schmitt wrote: After another discussion it turns out that this parameter is not required. While my previous comment is correct, that there is the need to have unique handles across file systems, GPFS already provides that information and Ganesha handles that correctly. So there is no need to set the parameter in the Ganesha config. Regards, Christof On Wed, 2023-06-28 at 14:33 +0200, Leonardo Sala wrote: Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? What would happen in case we just remove the Fileset_id parameter, as suggested by the ZjQcmQRYFpfptBannerStart This Message Is From an External Sender This message came from outside your organization. Report Suspicious ZjQcmQRYFpfptBannerEnd Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? What would happen in case we just remove the Fileset_id parameter, as suggested by the ganesha docs? Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 14:22, Christof Schmitt wrote: The "FileSystem_Id" is a unique identifier for the file system. The technical background is that Ganesha asks the file system for a file handle, but that is only unique within the file system. 
If there are NFS exports on different file systems, there needs to be a way to make the file handles unique across multiple file systems. So if there are NFS exports on different file systems, this parameter should be set with a unique value for each file system. If there is only one file system with NFS exports, then this should not be necessary. Regards, Christof On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export. It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: I vaguely recall seeing this and testing it. My notes to myself say: "As long as the export_id is unique, you are fine." See the manuals, ganesha loves Camel Case so it's more than likely actually "Export_Id" or some such. Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf?
* is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL:
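For readers skimming the thread, the parameter under discussion sits in the per-export block of gpfs.ganesha.exports.conf. The stanza below is only an illustrative sketch (the path, pseudo path and client options are invented placeholders, not values from this thread); the point is simply that Export_Id has to be unique per export, while Filesystem_Id is the NFS fsid that CES fills in with the 666.666 default mentioned above:

    EXPORT {
        Export_Id = 101;                 # must be unique for every export
        Path = /gpfs/fs1/fileset1;       # placeholder path
        Pseudo = /gpfs/fs1/fileset1;     # placeholder pseudo path
        Filesystem_Id = 666.666;         # the CES default being discussed
        FSAL { Name = GPFS; }
        CLIENT { Clients = *; Access_Type = RO; }
    }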
From ewahl at osc.edu Wed Jun 28 15:55:21 2023 From: ewahl at osc.edu (Wahl, Edward) Date: Wed, 28 Jun 2023 14:55:21 +0000 Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id In-Reply-To: <6a4b6e6e05ee71c6393320117c0998121330f606.camel@us.ibm.com> References: <0e269a0c-1b34-31cf-73f2-bfbcba8d5717@psi.ch> <558222eeef7cd5169214368c9a04af813509b086.camel@us.ibm.com> <6302ae47-72bf-0a8c-fca2-87e4e8499b67@psi.ch> <46f2c6856498222769ed523483a14c04ffb6a858.camel@us.ibm.com> <8b12725800afff70b8d0ce76c1f313fb75e480d9.camel@us.ibm.com> <6a4b6e6e05ee71c6393320117c0998121330f606.camel@us.ibm.com> Message-ID: I'll just chime in here that we export multiple file systems on a singular FileSystem_ID (666.666, the default) but with different Export_id's, and have no stale file handle issues on the nfs clients. I'd recommend opening up a case if you do have issues. Ed Wahl OSC From: Christof Schmitt Sent: Wednesday, June 28, 2023 10:48 AM To: gpfsug-discuss at gpfsug.org; leonardo.sala at psi.ch; gpfsug-discuss at spectrumscale.org; Wahl, Edward Subject: RE: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Apologies for the back and forth. Please keep the parameter for now. It should be ok for most cases. If there is a problem, please open a support ticket for further debugging. Regards, Christof On Wed, 2023-06-28 at 14:20 +0000, Christof Schmitt wrote: Is the parameter set in your config? If so, then yes, remove it. Regards, Christof On Wed, 2023-06-28 at 15:49 +0200, Leonardo Sala wrote: Hi Christof, thanks! So the preferred way should be not to have Filesystem_Id.
If I have understood correctly, this is anyhow set up by default in CES to 666.666, so the suggested procedure during the CES setup should be to manually modify gpfs.ganesha.exports.conf and remove this parameter from all the exports, is that correct? Is there an easier way, or is there a plan to remove the 666.666 default value? In our case, we do rely on two separated CES clusters (one in prod and one in stand by, so that we can perform upgrades with no downtime by migrating the IPs from one to the other), so it might be safer to explicilty set Filesystem_Id, to ensure consistency among the cluster - would that make sense? Thanks again! Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 15:22, Christof Schmitt wrote: After another discussion it turns out that this parameter is not required. While my previous comment is correct, that there is the need to have unique handles across file systems, GPFS already provides that information and Ganesha handles that correctly. So there is no need to set the parameter in the Ganesha config. Regards, Christof On Wed, 2023-06-28 at 14:33 +0200, Leonardo Sala wrote: Hi Christof, thanks a lot! In our case we are exporting multiple filesets from 2 filesystems, I guess we should fix unique Fileset_IDs for each fileset? What would happen in case we just remove the Fileset_id parameter, as suggested by the ganesha docs? Regards leo Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/28/23 14:22, Christof Schmitt wrote: The "FileSystem_Id" is a unique identifier for the file system. The technical background is that Ganesha asks the file system for a file handle, but that is only unique within the file system. If there are NFS exports on different file systems, there needs to be a way to make the file handles unique across multiple file systems. So if there are NFS exports on different file systems, this parameter should be set with a unique value for each file system. If there is only one file system with NFS exports, then this should not be necessary. Regards, Christof On Wed, 2023-06-28 at 08:53 +0200, Leonardo Sala wrote: Hi Ed, thanks! In our case we do have unique export ids, but the same fsid, and this seems to create issues. Also, reading Ganesha docs, I can see [*]: FileSystem_ID EXPORT Option There is an EXPORT config option, FileSystem_ID. This really should not be used, all it does it designate an fsid to be used with the attributes of all objects in the export.
It will be folded to fit into NFSv3. Because it applies to the entire export, it prevents exporting multiple file systems since there will likely be issues with collision of inode numbers on the client. so before touching the defaults in GPFS CES configuration I would like some guidance or experiences from this mlist :) cheers leo [*] https://github.com/nfs-ganesha/nfs-ganesha/wiki/File-Systems#FileSystem_ID_EXPORT_Option Paul Scherrer Institut Dr. Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch On 6/27/23 19:41, Wahl, Edward wrote: I vaguely recall seeing this and testing it. My notes to myself say: "As long as the export_id is unique, you are fine." See the manuals, ganesha loves Camel Case so it's more than likely actually "Export_Id" or some such. Ed Wahl Ohio Supercomputer Center From: gpfsug-discuss On Behalf Of Leonardo Sala Sent: Tuesday, June 27, 2023 10:18 AM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] CES, Ganesha, and Filesystem_id Hallo, we are checking our current CES configuration, and we noticed that by default GPFS puts always Filesystem_Id=666.666 [*], no matter which Export_Id value the export has. To my understanding (which is poor!), this means that all clients will see all our exports (~20) with the same device number, creating various possible issues (e.g. file state handles). Questions: * is there a reason for such default value? If we change it, are there unpleasant effects we could see? * what would be a reasonable value? Looking around I saw that Filesystem_Id = Export_Id.Export_Id is quite common, with the possible issue of using the forbidden 152.152 [**] * what happens if we actually remove the Filesytem_Id parameter from gpfs.ganesha.exports.conf? * is there a way to modify Filesystem_Id in gpfs.ganesha.exports.conf without editing the file, eg using mmnfs commands (seems not, but I might be mistaken)? Thanks a lot! cheers leo [*] https://www.ibm.com/docs/en/storage-scale/5.0.4?topic=exports-making-bulk-changes-nfs [**] https://github.com/nfs-ganesha/nfs-ganesha/issues/615 -- Paul Scherrer Institut Dr.
Leonardo Sala Group Leader Data Analysis and Research Infrastructure Deputy Department Head a.i Science IT Infrastructure and Services department Science IT Infrastructure and Services department (AWI) WHGA/036 Forschungstrasse 111 5232 Villigen PSI Switzerland Phone: +41 56 310 3369 leonardo.sala at psi.ch www.psi.ch _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL:
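As a practical footnote to Leonardo's question about mmnfs: the CES-side view of the exports can at least be inspected from the command line, even though, per this thread, there does not appear to be an mmnfs option that sets Filesystem_Id itself. A rough sketch, assuming a reasonably current Storage Scale release; output formats and the on-disk location of the generated gpfs.ganesha.exports.conf vary between versions:

    # list the NFS exports CES has defined, then the global NFS settings
    mmnfs export list
    mmnfs config list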
From prasad.surampudi at theatsgroup.com Thu Jun 29 00:18:32 2023 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 28 Jun 2023 23:18:32 +0000 Subject: [gpfsug-discuss] File placement policy based on creation and modification times Message-ID: Can we setup a file placement policy based on creating and modification times when copying data from Windows into GPFS? It looks like the placement policy only accepts only CREATION_TIME and not MODIFICATION_TIME or ACCESS_TIME. If I try to use these, I get message saying these are not supported in the context (placement?) But even the policy with CREATION_TIME is not working properly. We wanted files with CREATION_TIME which is 365 days ago go to a different pool other than 'system'. But when we copy files it is dumping all files into system pool. But the creation time is looks correct on the file after copied into GPFS. Does it check file CREATION_TIME when a file gets copied over to GPFS? Here is the placement policy: RULE 'tiering' SET POOL 'pool2' WHERE ( DAYS(CURRENT_TIMESTAMP) - DAYS(CREATION_TIME) > 365 ) RULE 'default' SET POOL 'system' Prasad Surampudi | Sr. Systems Engineer prasad.surampudi at theatsgroup.com | 302.419.5833 Innovative IT consulting & modern infrastructure solutions www.theatsgroup.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 6597 bytes Desc: image001.png URL:
From timm.stamer at uni-oldenburg.de Thu Jun 29 06:10:32 2023 From: timm.stamer at uni-oldenburg.de (Timm Stamer) Date: Thu, 29 Jun 2023 05:10:32 +0000 Subject: [gpfsug-discuss] File placement policy based on creation and modification times In-Reply-To: References: Message-ID: <6bf891bbbb43861bb1feafc049e789d1350d1f85.camel@uni-oldenburg.de> Hello Prasad, we use this in our weekly policy run: RULE 'migrate cold data' MIGRATE FROM POOL 'system' TO POOL 'data' WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '30' DAYS I do not know if a direct placement based on timestamps is possible. Kind regards Timm Stamer Am Mittwoch, dem 28.06.2023 um 23:18 +0000 schrieb Prasad Surampudi: > Can we setup a file placement policy based on creating and > modification times when copying data from Windows into GPFS?
It looks > like the placement policy only accepts only CREATION_TIME and not > MODIFICATION_TIME or ACCESS_TIME. If I try to use these, I get > message saying these are not supported in the context (placement?) > But even the policy with CREATION_TIME is not working properly. We > wanted files with CREATION_TIME which is 365 days ago go to a > different pool other than 'system'. But when we copy files it is > dumping all files into system pool. But the creation time is looks > correct on the file after copied into GPFS. Does it check file > CREATION_TIME when a file gets copied over to GPFS? > > Here is the placement policy: > RULE 'tiering' SET POOL 'pool2' > WHERE ( DAYS(CURRENT_TIMESTAMP) - DAYS(CREATION_TIME) > 365 ) > RULE 'default' SET POOL 'system' > > Prasad Surampudi | Sr. Systems Engineer > prasad.surampudi at theatsgroup.com | 302.419.5833 > > Innovative IT consulting & modern infrastructure solutions > www.theatsgroup.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 7667 bytes Desc: not available URL:
From anacreo at gmail.com Thu Jun 29 10:01:35 2023 From: anacreo at gmail.com (Alec) Date: Thu, 29 Jun 2023 02:01:35 -0700 Subject: [gpfsug-discuss] File placement policy based on creation and modification times In-Reply-To: <6bf891bbbb43861bb1feafc049e789d1350d1f85.camel@uni-oldenburg.de> References: <6bf891bbbb43861bb1feafc049e789d1350d1f85.camel@uni-oldenburg.de> Message-ID: Yeah that kind of placement isn't possible, because you can only use attributes you know at the time of inode creation. When a file is created it's created with the current timestamp and then updated (usually after the copy finishes). If the majority of your data is going to be older than 365 days you may want to make your file placement default to your pool2, and then when you've finished copying all your older data, and want to freshen your data, update the placement policy it to the proper pool so new data hits the high speed disk. You can use a file path/name in the placement policy and some copy engines do give inodes temporary names before giving them their proper name... like rsync will start a file off with a . (and add a random suffix) until the file is completely transferred, then move it to the destination filename, then it will update the date, time, and ownership on that inode. So you could have your placement engine put anything starting with a . into your pool2 and then migrate fresher files back up to your higher tiered storage if desired. Not sure if any of that helps. Alec On Wed, Jun 28, 2023 at 10:12 PM Timm Stamer wrote: > Hello Prasad, > > we use this in our weekly policy run: > > RULE 'migrate cold data' MIGRATE FROM POOL 'system' TO POOL 'data' > WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '30' DAYS > > > I do not know if a direct placement based on timestamps is possible. > > > > Kind regards > Timm Stamer > > > Am Mittwoch, dem 28.06.2023 um 23:18 +0000 schrieb Prasad Surampudi: > > Can we setup a file placement policy based on creating and > > modification times when copying data from Windows into GPFS?
It looks > > like the placement policy only accepts only CREATION_TIME and not > > MODIFICATION_TIME or ACCESS_TIME. If I try to use these, I get > > message saying these are not supported in the context (placement?) > > But even the policy with CREATION_TIME is not working properly. We > > wanted files with CREATION_TIME which is 365 days ago go to a > > different pool other than 'system'. But when we copy files it is > > dumping all files into system pool. But the creation time is looks > > correct on the file after copied into GPFS. Does it check file > > CREATION_TIME when a file gets copied over to GPFS? > > > > Here is the placement policy: > > RULE 'tiering' SET POOL 'pool2' > > WHERE ( DAYS(CURRENT_TIMESTAMP) - DAYS(CREATION_TIME) > 365 ) > > RULE 'default' SET POOL 'system' > > > > Prasad Surampudi | Sr. Systems Engineer > > prasad.surampudi at theatsgroup.com | 302.419.5833 > > > > Innovative IT consulting & modern infrastructure solutions > > www.theatsgroup.com > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From chair at gpfsug.org Thu Jun 29 18:12:36 2023 From: chair at gpfsug.org (Spectrum Scale UG) Date: Thu, 29 Jun 2023 18:12:36 +0100 Subject: [gpfsug-discuss] Recent UK Meeting Message-ID: An HTML attachment was scrubbed... URL:
From ott.oopkaup at ut.ee Fri Jun 30 13:18:28 2023 From: ott.oopkaup at ut.ee (Ott Oopkaup) Date: Fri, 30 Jun 2023 15:18:28 +0300 Subject: [gpfsug-discuss] File placement policy based on creation and modification times In-Reply-To: References: <6bf891bbbb43861bb1feafc049e789d1350d1f85.camel@uni-oldenburg.de> Message-ID: <7ff76f98-b715-6f4c-3a61-c0b154012b36@ut.ee> Hi while maybe not the solution to the exact problem, GPFS does allow heat based tiering which seems to me like a more correct way to ensure efficient utilisation of fast SSD space. https://www.ibm.com/docs/en/storage-scale/5.0.5?topic=scale-file-heat-tracking-file-access-temperature Best, Ott Oopkaup University of Tartu, High Performance Computing Centre Systems Administrator On 6/29/23 12:01, Alec wrote: > Yeah that kind of placement isn't possible, because you can only use > attributes you know at the time of inode creation. When a file is > created it's created with the current timestamp and then updated > (usually after the copy finishes). If the majority of your data is > going to be older than 365 days you may want to make your file > placement default to your pool2, and then when you've finished copying > all your older data, and want to freshen your data, update the > placement policy it to the proper pool so new data hits the high speed > disk. > > You can use a file path/name in the placement policy and some copy > engines do give inodes temporary names before giving them their proper > name... like rsync will start a file off with a . (and add a random > suffix) until the file is completely transferred, then move it to the > destination filename, then it will update the date, time, and > ownership on that inode. So you could have your placement engine put > anything starting with a .
into your pool2 and then migrate fresher > files back up to your higher tiered storage if desired. > > Not sure if any of that helps. > > Alec > > On Wed, Jun 28, 2023 at 10:12 PM Timm Stamer > wrote: > > Hello Prasad, > > > > we use this in our weekly policy run: > > > > RULE 'migrate cold data' MIGRATE FROM POOL 'system' TO POOL 'data' > > WHERE CURRENT_TIMESTAMP - ACCESS_TIME > INTERVAL '30' DAYS > > > > I do not know if a direct placement based on timestamps is possible. > > > > Kind regards > > Timm Stamer > > > > Am Mittwoch, dem 28.06.2023 um 23:18 +0000 schrieb Prasad Surampudi: > > > Can we setup a file placement policy based on creating and > > modification times when copying data from Windows into GPFS? It looks > > like the placement policy only accepts only CREATION_TIME and not > > MODIFICATION_TIME or ACCESS_TIME. If I try to use these, I get > > message saying these are not supported in the context (placement?) > > But even the policy with CREATION_TIME is not working properly. We > > wanted files with CREATION_TIME which is 365 days ago go to a > > different pool other than 'system'. But when we copy files it is > > dumping all files into system pool. But the creation time is looks > > correct on the file after copied into GPFS. Does it check file > > CREATION_TIME when a file gets copied over to GPFS? > > > > Here is the placement policy: > > RULE 'tiering' SET POOL 'pool2' > > WHERE ( DAYS(CURRENT_TIMESTAMP) - DAYS(CREATION_TIME) > 365 ) > > RULE 'default' SET POOL 'system' > > > > Prasad Surampudi | Sr. Systems Engineer > > prasad.surampudi at theatsgroup.com | 302.419.5833 > > > > Innovative IT consulting & modern infrastructure solutions > > www.theatsgroup.com > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL:
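Tying the suggestions in this thread together: placement rules can only use attributes known at inode creation (name, fileset, user and group, and similar), so the usual workaround is a name-based placement rule plus a periodic mmapplypolicy migration. The rules below are only a sketch reusing Prasad's pool names; the dot-file pattern follows Alec's rsync observation, the FILE_HEAT weighting follows the file-heat documentation Ott linked, and the LIMIT value is arbitrary. FILE_HEAT is only populated once fileHeatPeriodMinutes has been set with mmchconfig, and the exact rule forms should be checked against the policy guide for the installed release:

    /* placement rules (installed with mmchpolicy): rsync-style temporary
       names with a leading dot land on the capacity pool */
    RULE 'tmpfiles' SET POOL 'pool2' WHERE NAME LIKE '.%'
    RULE 'default' SET POOL 'system'

    /* periodic mmapplypolicy run: keep the hottest files on 'system',
       spill the rest to 'pool2' */
    RULE 'tiers' GROUP POOL 'tiers' IS 'system' LIMIT(80) THEN 'pool2'
    RULE 'repack' MIGRATE FROM POOL 'tiers' TO POOL 'tiers' WEIGHT(FILE_HEAT)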
From duersch at us.ibm.com Fri Jun 30 21:55:38 2023 From: duersch at us.ibm.com (Steve Duersch) Date: Fri, 30 Jun 2023 20:55:38 +0000 Subject: [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like Message-ID: This behavior is expected. A cluster wide lock is necessary because mmchfirmware itself will update the cluster as a whole during this process. So, there shouldn't be a need to run updates elsewhere at the same time. Steve Duersch IBM Storage Scale/Storage Scale System 845-433-7902 IBM Poughkeepsie, New York ________________________________ Hi If you are doing it offline (which for bigger setups) and pass the class or CSV of nodes, it is done in parallel in all nodes. For your request I think there is a RFE (not sure public or not) already created, but I don't disagree would be nice improvement to lock at the single BB -- Ystävällisin terveisin/Regards/Saludos/Salutations/Salutacions Luis Bolinches Executive IT Specialist IBM Storage Scale development Phone: +358503112585 Ab IBM Finland Oy Toinen linja 7 00530 Helsinki Uusimaa - Finland Visitors entrance: Siltasaarenkatu 22 "If you always give you will always have" -- Anonymous https://www.credly.com/users/luis-bolinches/badges -----Original Message----- From: gpfsug-discuss On Behalf Of Hannappel, Juergen Sent: Tuesday, 27 June 2023 19.07 To: gpfsug main discussion list Subject: [EXTERNAL] [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like Moin, when e.g doing mmchfirmware there is a cluster-wide lock preventing me from running mmchfirmware on several building blocks at once, while I would assume that only within one building block a lock is needed. Why is that so? Can that be changed in a future release? Also some apparently cluster wide locks create false alarms when checking for the recovery group status on one building block is blocked by some actions on another one... -- Dr. Jürgen Hannappel DESY/IT Tel. : +49 40 8998-4616 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL:
From janfrode at tanso.net Fri Jun 30 22:16:20 2023 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 30 Jun 2023 23:16:20 +0200 Subject: [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like In-Reply-To: References: Message-ID: These locks seems a bit excessive.. even blocking for mmstartup, which means a node can't join the cluster for hours during enclosure updates. -jf fre. 30. jun. 2023 kl. 22:59 skrev Steve Duersch : > This behavior is expected. A cluster wide lock is necessary because > mmchfirmware itself will update the cluster as a whole during this > process. So, there shouldn't be a need to run updates elsewhere at the > same time. > > > > Steve Duersch > > IBM Storage Scale/Storage Scale System > > 845-433-7902 > > IBM Poughkeepsie, New York > > > ------------------------------ > > > > > > Hi > > > > If you are doing it offline (which for bigger setups) and pass the class or CSV of nodes, it is done in parallel in all nodes.
> > For your request I think there is a RFE (not sure public or not) already created, but I don't disagree would be nice improvement to lock at the single BB > > -- > > Ystävällisin terveisin/Regards/Saludos/Salutations/Salutacions > > > Luis Bolinches > > > Executive IT Specialist > > IBM Storage Scale development > > Phone: +358503112585 > > > > Ab IBM Finland Oy > > Toinen linja 7 > > 00530 Helsinki > > Uusimaa - Finland > > > > Visitors entrance: Siltasaarenkatu 22 > > > > "If you always give you will always have" -- Anonymous > > > > https://www.credly.com/users/luis-bolinches/badges > > > > -----Original Message----- > > From: gpfsug-discuss On Behalf Of Hannappel, Juergen > > Sent: Tuesday, 27 June 2023 19.07 > > To: gpfsug main discussion list > > Subject: [EXTERNAL] [gpfsug-discuss] Why cluster-wide locks for firmware-updates and the like > > > > Moin, > > when e.g doing mmchfirmware there is a cluster-wide lock preventing me from running mmchfirmware on several building blocks at once, while I would assume that only within one building block a lock is needed. > > Why is that so? Can that be changed in a future release? > > > > Also some apparently cluster wide locks create false alarms when checking for the recovery group status on one building block is blocked by some actions on another one... > > > > -- > > Dr. Jürgen Hannappel DESY/IT Tel. : +49 40 8998-4616 > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org > -------------- next part -------------- An HTML attachment was scrubbed... URL:
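On the offline/parallel point raised in this thread, the firmware updates in question are driven with mmchfirmware, and for offline updates a node class or comma-separated node list is what lets a single invocation cover several servers at once. The lines below are only a from-memory sketch; the node names and node class are invented placeholders, and the supported --type values and options differ between ESS releases, so the mmchfirmware man page for the installed level is the authoritative reference:

    # offline/maintenance-window style update across a set of I/O servers
    # (node class or comma-separated node list; names here are placeholders)
    mmchfirmware --type storage-enclosure -N ess_ionodes
    mmchfirmware --type drive -N essio1,essio2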