From p.ward at nhm.ac.uk Thu Jul 2 13:00:41 2020
From: p.ward at nhm.ac.uk (Paul Ward)
Date: Thu, 2 Jul 2020 12:00:41 +0000
Subject: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files
In-Reply-To:
References:
Message-ID:

Sorry, a bit behind the discussion...

We were using GPFS's internal TBD2 method for UID and GID assignment (15 years ago GPFS was purchased for a single purpose with a handful of accounts). I have just been through 88 million files ADDING NFSv4 ACEs with UIDs and GIDs derived from AD RIDs. We have both the TBD2 and AD RID ACEs in the ACLs. This allowed us to do a single switch-over between the authentication methods for all the data at once. The testing and prep work took months, though.

We have Spectrum Protect and Spectrum Protect Space Management with a tape library in the mix, so I needed to make sure ACL changes didn't cause a backup, and for migrated files a recall followed by another backup.

My scripts made use of mmgetacl and mmputacl. I had fewer than 50 unique ACEs to construct, and I created a spreadsheet that auto-created the commands. This could have been automated, but for that number it was just as quick for me to do by hand as to learn to program it.

I wrote my own scripts, with a lot of safety checks, as it went AWOL at one point and started changing permissions at the root of the GPFS file system, removing access for everyone.

We had a mix of POSIX-only and NFSv4 ACLs. Testing them revealed a lot of skeletons in the way some systems had been set up - allow a lot of time for unknowns if you have systems using GPFS as a back end.

Some way into this, I discovered IBM had created code to do this - I didn't keep the link as it was too late for me.

The switch-over went seamlessly, by the way - it had to, with all the prep work!

Kindest regards,
Paul

Paul Ward
TS Infrastructure Architect
Natural History Museum
T: 02079426450
E: p.ward at nhm.ac.uk

From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Lohit Valleru
Sent: 08 June 2020 18:44
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files

Hello Everyone,

We are planning to migrate from LDAP to AD, and one of the best solutions was to change the uidNumber and gidNumber to what SSSD or Centrify would resolve.

May I know if anyone has come across a tool/tools that can change the uidNumbers and gidNumbers of billions of files efficiently and in a reliable manner? We could spend some time to write a custom script, but wanted to know if a tool already exists.

Please do let me know if anyone else has come across a similar situation, and the steps/tools used to resolve it.

Regards,
Lohit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 5356 bytes
Desc: image001.jpg
URL:

From damir.krstic at gmail.com Tue Jul 7 14:37:46 2020
From: damir.krstic at gmail.com (Damir Krstic)
Date: Tue, 7 Jul 2020 08:37:46 -0500
Subject: [gpfsug-discuss] dependent versus independent filesets
Message-ID:

We are deploying our new ESS and are considering moving to independent filesets. The snapshot per fileset feature appeals to us.

Has anyone considered independent vs. dependent filesets and what was your reasoning to go with one as opposed to the other? Or perhaps you opted to have both on your filesystem, and if so, what was the reasoning for it?

Thank you.
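A very rough sketch of the kind of mmgetacl/mmputacl wrapper Paul describes above (this is not his actual script; the snippet directory, file list and paths are made-up placeholders, and it ignores the Space Management backup/recall concerns he mentions):

  #!/bin/bash
  # Sketch only: for each file, dump the existing ACL, append a pre-built
  # NFSv4 ACE block (in mmputacl text format) derived from the owner's AD RID,
  # and write the ACL back.  ACE_DIR and filelist.txt are illustrative names.
  ACE_DIR=/root/ad-ace-snippets      # one pre-built ACE block per numeric UID, e.g. 1234.ace

  while IFS= read -r f; do
      uid=$(stat -c %u "$f")
      snippet="$ACE_DIR/$uid.ace"
      [ -f "$snippet" ] || continue                          # no mapping prepared for this owner
      tmp=$(mktemp)
      /usr/lpp/mmfs/bin/mmgetacl -k nfs4 -o "$tmp" "$f"       # existing (TBD2-based) entries are kept
      cat "$snippet" >> "$tmp"                                # add the AD-RID based entries
      /usr/lpp/mmfs/bin/mmputacl -i "$tmp" "$f"
      rm -f "$tmp"
  done < filelist.txt

A similar loop, driven by a policy-engine LIST rule or a parallel file scan rather than a flat file list, is one way to approach the uidNumber/gidNumber rewrite Lohit asks about below, although at the scale of billions of files mmapplypolicy is usually the more practical driver than a serial shell loop.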
Damir -------------- next part -------------- An HTML attachment was scrubbed... URL: From skylar2 at uw.edu Tue Jul 7 14:59:58 2020 From: skylar2 at uw.edu (Skylar Thompson) Date: Tue, 7 Jul 2020 06:59:58 -0700 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: Message-ID: <20200707135958.leqp3q6f3rbtslji@illuin> We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > We are deploying our new ESS and are considering moving to independent > filesets. The snapshot per fileset feature appeals to us. > > Has anyone considered independent vs. dependent filesets and what was your > reasoning to go with one as opposed to the other? Or perhaps you opted to > have both on your filesystem, and if, what was the reasoning for it? > > Thank you. > Damir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His From chair at spectrumscale.org Tue Jul 7 15:52:19 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 07 Jul 2020 15:52:19 +0100 Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Message-ID: <1D2B20FD-257E-49C3-9D24-C63978758ED0@spectrumscale.org> Hi All, The next talk in the SSUG:: Digital series is taking place on Monday 13th July at 4pm BST. (Other time-zones are listed on the website!) Speaker: Lindsay Todd Topic: Best Practices for building a stretched cluster More details at: https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/ (The next one after that will be 27th July) Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ewahl at osc.edu Tue Jul 7 15:44:16 2020 From: ewahl at osc.edu (Wahl, Edward) Date: Tue, 7 Jul 2020 14:44:16 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: <20200707135958.leqp3q6f3rbtslji@illuin> References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: We also went with independent filesets for both backup (and quota) reasons for several years now, and have stuck with this across to 5.x. However we still maintain a minor number of dependent filesets for administrative use. Being able to mmbackup on many filesets at once can increase your parallelization _quite_ nicely! We create and delete the individual snaps before and after each backup, as you may expect. Just be aware that if you do massive numbers of fast snapshot deletes and creates you WILL reach a point where you will run into issues due to quiescing compute clients, and that certain types of workloads have issues with snapshotting in general. You have to more closely watch what you pre-allocate, and what you have left in the common metadata/inode pool. Once allocated, even if not being used, you cannot reduce the inode allocation without removing the fileset and re-creating. (say a fileset user had 5 million inodes and now only needs 500,000) Growth can also be an issue if you do NOT fully pre-allocate each space. This can be scary if you are not used to over-subscription in general. 
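A minimal sketch of the per-fileset snapshot / mmbackup / snapshot-delete cycle Ed describes above (the file system name, fileset name, junction path and snapshot naming are placeholders, and the exact mmcrsnapshot/mmbackup option syntax should be checked against your release):

  #!/bin/bash
  # Sketch only: back up one independent fileset from a temporary fileset-level snapshot.
  FS=gpfs01
  FILESET=projects01
  SNAP=mmbackup_$(date +%Y%m%d%H%M)

  /usr/lpp/mmfs/bin/mmcrsnapshot $FS $SNAP -j $FILESET        # fileset-level snapshot
  /usr/lpp/mmfs/bin/mmbackup /gpfs/$FS/$FILESET -t incremental \
      --scope inodespace -S $SNAP                             # back up only this inode space
  /usr/lpp/mmfs/bin/mmdelsnapshot $FS $SNAP -j $FILESET       # drop the snapshot again

Running one such job per independent fileset, staggered so the snapshot creates and deletes do not all quiesce the cluster at the same moment, is what gives the parallelism (and the quiescing pitfall) mentioned above.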
But I imagine that most sites have some decent % of oversubscription if they use filesets and quotas. Ed OSC -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Skylar Thompson Sent: Tuesday, July 7, 2020 10:00 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] dependent versus independent filesets We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > We are deploying our new ESS and are considering moving to independent > filesets. The snapshot per fileset feature appeals to us. > > Has anyone considered independent vs. dependent filesets and what was > your reasoning to go with one as opposed to the other? Or perhaps you > opted to have both on your filesystem, and if, what was the reasoning for it? > > Thank you. > Damir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug- > discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVY > vcGNh4M_no$ -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVYvcGNh4M_no$ From skylar2 at uw.edu Tue Jul 7 17:07:07 2020 From: skylar2 at uw.edu (Skylar Thompson) Date: Tue, 7 Jul 2020 09:07:07 -0700 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: <20200707160707.mk5e5hfspn7d6vnq@illuin> Ah, yes, I forgot about the quota rationale; we use independent filesets for that as well. We have run into confusion with inodes as one has to be careful to allocate inodes /and/ adjust a quota to expand a fileset. IIRC GPFS generates ENOSPC if it actually runs out of inodes, and EDQUOT if it hits a quota. We've also run into the quiescing issue but have been able to workaround it for now by increasing the splay between the different schedules. On Tue, Jul 07, 2020 at 02:44:16PM +0000, Wahl, Edward wrote: > We also went with independent filesets for both backup (and quota) reasons for several years now, and have stuck with this across to 5.x. However we still maintain a minor number of dependent filesets for administrative use. Being able to mmbackup on many filesets at once can increase your parallelization _quite_ nicely! We create and delete the individual snaps before and after each backup, as you may expect. Just be aware that if you do massive numbers of fast snapshot deletes and creates you WILL reach a point where you will run into issues due to quiescing compute clients, and that certain types of workloads have issues with snapshotting in general. > > You have to more closely watch what you pre-allocate, and what you have left in the common metadata/inode pool. Once allocated, even if not being used, you cannot reduce the inode allocation without removing the fileset and re-creating. (say a fileset user had 5 million inodes and now only needs 500,000) > > Growth can also be an issue if you do NOT fully pre-allocate each space. 
This can be scary if you are not used to over-subscription in general. But I imagine that most sites have some decent % of oversubscription if they use filesets and quotas. > > Ed > OSC > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Skylar Thompson > Sent: Tuesday, July 7, 2020 10:00 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] dependent versus independent filesets > > We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. > > On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > > We are deploying our new ESS and are considering moving to independent > > filesets. The snapshot per fileset feature appeals to us. > > > > Has anyone considered independent vs. dependent filesets and what was > > your reasoning to go with one as opposed to the other? Or perhaps you > > opted to have both on your filesystem, and if, what was the reasoning for it? > > > > Thank you. > > Damir > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug- > > discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVY > > vcGNh4M_no$ > > > -- > -- Skylar Thompson (skylar2 at u.washington.edu) > -- Genome Sciences Department (UW Medicine), System Administrator > -- Foege Building S046, (206)-685-7354 > -- Pronouns: He/Him/His > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVYvcGNh4M_no$ > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His From stockf at us.ibm.com Tue Jul 7 17:25:27 2020 From: stockf at us.ibm.com (Frederick Stock) Date: Tue, 7 Jul 2020 16:25:27 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: , <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Tue Jul 7 19:19:51 2020 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Tue, 7 Jul 2020 18:19:51 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: , , <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: An HTML attachment was scrubbed... URL: From leslie.james.elliott at gmail.com Wed Jul 8 00:19:20 2020 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Wed, 8 Jul 2020 09:19:20 +1000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: as long as your currently do not need more than 1000 on a filesystem On Wed, 8 Jul 2020 at 04:20, Daniel Kidger wrote: > It is worth noting that Independent Filesets are a relatively recent > addition to Spectrum Scale, compared to Dependant Filesets. They havesolved > some of the limitations of the former. > > > My view would be to always use Independent FIlesets unless there is a > particular reason to use Dependant ones. 
> > Daniel > > _________________________________________________________ > *Daniel Kidger Ph.D.* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum Discover and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "Frederick Stock" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Cc: gpfsug-discuss at spectrumscale.org > Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent > filesets > Date: Tue, Jul 7, 2020 17:25 > > One comment about inode preallocation. There was a time when inode > creation was performance challenged but in my opinion that is no longer the > case, unless you have need for file creates to complete at extreme speed. > In my experience it is the rare customer that requires extremely fast file > create times so pre-allocation is not truly necessary. As was noted once > an inode is allocated it cannot be deallocated. The more important item is > the maximum inodes defined for a fileset or file system. Yes, those do > need to be monitored so they can be increased if necessary to avoid out of > space errors. > > Fred > __________________________________________________ > Fred Stock | IBM Pittsburgh Lab | 720-430-8821 > stockf at us.ibm.com > > > > ----- Original message ----- > From: "Wahl, Edward" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent > filesets > Date: Tue, Jul 7, 2020 11:59 AM > > We also went with independent filesets for both backup (and quota) reasons > for several years now, and have stuck with this across to 5.x. However we > still maintain a minor number of dependent filesets for administrative use. > Being able to mmbackup on many filesets at once can increase your > parallelization _quite_ nicely! We create and delete the individual snaps > before and after each backup, as you may expect. Just be aware that if you > do massive numbers of fast snapshot deletes and creates you WILL reach a > point where you will run into issues due to quiescing compute clients, and > that certain types of workloads have issues with snapshotting in general. > > You have to more closely watch what you pre-allocate, and what you have > left in the common metadata/inode pool. Once allocated, even if not being > used, you cannot reduce the inode allocation without removing the fileset > and re-creating. (say a fileset user had 5 million inodes and now only > needs 500,000) > > Growth can also be an issue if you do NOT fully pre-allocate each space. > This can be scary if you are not used to over-subscription in general. But > I imagine that most sites have some decent % of oversubscription if they > use filesets and quotas. > > Ed > OSC > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Skylar Thompson > Sent: Tuesday, July 7, 2020 10:00 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] dependent versus independent filesets > > We wanted to be able to snapshot and backup filesets separately with > mmbackup, so went with independent filesets. > > On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > > We are deploying our new ESS and are considering moving to independent > > filesets. The snapshot per fileset feature appeals to us. > > > > Has anyone considered independent vs. 
dependent filesets and what was > > your reasoning to go with one as opposed to the other? Or perhaps you > > opted to have both on your filesystem, and if, what was the reasoning > for it? > > > > Thank you. > > Damir > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug- > > discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVY > > vcGNh4M_no$ > > > -- > -- Skylar Thompson (skylar2 at u.washington.edu) > -- Genome Sciences Department (UW Medicine), System Administrator > -- Foege Building S046, (206)-685-7354 > -- Pronouns: He/Him/His > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVYvcGNh4M_no$ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Jul 10 09:28:48 2020 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Fri, 10 Jul 2020 08:28:48 +0000 Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Message-ID: Just a reminder that the next talk is on Monday. For some technical reasons, the link to join the event has changed, so if you?d added a calendar event with the link already, please update it to: https://ibm.webex.com/ibm/onstage/g.php?MTID=ed52933f6b6a9eee6d980d1a0807a8e5a The SSUG website has also been updated with the new event link already. Simon From: on behalf of "chair at spectrumscale.org" Reply to: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 7 July 2020 at 15:52 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Hi All, The next talk in the SSUG:: Digital series is taking place on Monday 13th July at 4pm BST. (Other time-zones are listed on the website!) Speaker: Lindsay Todd Topic: Best Practices for building a stretched cluster More details at: https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/ (The next one after that will be 27th July) Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Jul 15 16:15:21 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 15 Jul 2020 17:15:21 +0200 Subject: [gpfsug-discuss] rsync NFS4 ACLs Message-ID: It looks like the old NFS4 ACL patch for rsync is no longer needed. 
Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef.coene at docum.org Thu Jul 16 14:13:44 2020 From: stef.coene at docum.org (Stef Coene) Date: Thu, 16 Jul 2020 15:13:44 +0200 Subject: [gpfsug-discuss] GUI refresh task error Message-ID: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef From roland.schuemann at postbank.de Thu Jul 16 14:25:49 2020 From: roland.schuemann at postbank.de (Roland Schuemann) Date: Thu, 16 Jul 2020 13:25:49 +0000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Hi Stef, we already recognized this error too and opened a PMR/Case at IBM. You can set this task to inactive, but this is not persistent. After gui restart it comes again. This was the answer from IBM Support. >>>>>>>>>>>>>>>>> This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. Is this OK for you? >>>>>>>>>>>>>>>>> So we ignore (Gui is still degraded) it and wait for the fix. Kind regards Roland Sch?mann Freundliche Gr??e / Kind regards Roland Sch?mann ____________________________________________ Roland Sch?mann Infrastructure Engineering (BTE) CIO PB Germany Deutsche Bank I Technology, Data and Innovation Postbank Systems AG -----Urspr?ngliche Nachricht----- Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene Gesendet: Donnerstag, 16. Juli 2020 15:14 An: gpfsug main discussion list Betreff: [gpfsug-discuss] GUI refresh task error Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. 
Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. From Achim.Rehor at de.ibm.com Thu Jul 16 14:44:34 2020 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 16 Jul 2020 15:44:34 +0200 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: you may want to replace the mmwatch command with a simple script like this #!/bin/ksh echo "dummy for removing gui error" exit 0 or install either 5.1.0.0 or 5.0.5.2 (when it gets available ..) Mit freundlichen Gr??en / Kind regards Achim Rehor Remote Technical Support Engineer Storage IBM Systems Storage Support - EMEA Storage Competence Center (ESCC) Spectrum Scale / Elastic Storage Server ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49-170-4521194 E-Mail: Achim.Rehor at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Sebastian Krause Gesch?ftsf?hrung: Gregor Pillen (Vorsitzender), Agnes Heftberger, Norbert Janzen, Markus Koerner, Christian Noll, Nicole Reimer Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. 
DE 99369940 gpfsug-discuss-bounces at spectrumscale.org wrote on 16/07/2020 15:13:44: > From: Stef Coene > To: gpfsug main discussion list > Date: 16/07/2020 15:18 > Subject: [EXTERNAL] [gpfsug-discuss] GUI refresh task error > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > Hi, > > On brand new 5.0.5 cluster we have the following errors on all nodes: > "The following GUI refresh task(s) failed: WATCHFOLDER" > > It also says > "Failure reason: Command mmwatch all functional --list-clustered-status > failed" > > Running mmwatch manually gives: > mmwatch: The Clustered Watch Folder function is only available in the > IBM Spectrum Scale Advanced Edition > or the Data Management Edition. > mmwatch: Command failed. Examine previous error messages to determine cause. > > How can I get rid of this error? > > I tried to disable the task with: > chtask WATCHFOLDER --inactive > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > Stef > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url? > u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx- > siA1ZOg&r=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2- > M&m=PJdg0uy6rMRzbLqiuOb4e2gUtNwAojhPfAupgPOi2nA&s=x3-02anMU4TcV4bTGAZoNJ8CvIfbqLXQJqyBpeyHuUk&e= > From macthev at gmail.com Thu Jul 16 15:13:31 2020 From: macthev at gmail.com (dale mac) Date: Fri, 17 Jul 2020 00:13:31 +1000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: On Thu, 16 Jul 2020 at 23:44, Achim Rehor wrote: > you may want to replace the mmwatch command with a simple script like this > > > #!/bin/ksh > echo "dummy for removing gui error" > exit 0 > > or install either 5.1.0.0 or 5.0.5.2 (when it gets available ..) > > > Mit freundlichen Gr??en / Kind regards > > Achim Rehor > > Remote Technical Support Engineer Storage > IBM Systems Storage Support - EMEA Storage Competence Center (ESCC) > Spectrum Scale / Elastic Storage Server > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49-170-4521194 > E-Mail: Achim.Rehor at de.ibm.com > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Sebastian Krause > Gesch?ftsf?hrung: Gregor Pillen (Vorsitzender), Agnes Heftberger, Norbert > Janzen, Markus Koerner, Christian Noll, Nicole Reimer > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, > HRB 14562 / WEEE-Reg.-Nr. 
DE 99369940 > > > gpfsug-discuss-bounces at spectrumscale.org wrote on 16/07/2020 15:13:44: > > > From: Stef Coene > > To: gpfsug main discussion list > > Date: 16/07/2020 15:18 > > Subject: [EXTERNAL] [gpfsug-discuss] GUI refresh task error > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi, > > > > On brand new 5.0.5 cluster we have the following errors on all nodes: > > "The following GUI refresh task(s) failed: WATCHFOLDER" > > > > It also says > > "Failure reason: Command mmwatch all functional > --list-clustered-status > > failed" > > > > Running mmwatch manually gives: > > mmwatch: The Clustered Watch Folder function is only available in the > > IBM Spectrum Scale Advanced Edition > > or the Data Management Edition. > > mmwatch: Command failed. Examine previous error messages to determine > cause. > > > > How can I get rid of this error? > > > > I tried to disable the task with: > > chtask WATCHFOLDER --inactive > > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > > > > Stef > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.proofpoint.com/v2/url? > > > > u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx- > > siA1ZOg&r=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2- > > > > M&m=PJdg0uy6rMRzbLqiuOb4e2gUtNwAojhPfAupgPOi2nA&s=x3-02anMU4TcV4bTGAZoNJ8CvIfbqLXQJqyBpeyHuUk&e= > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Regards Dale -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Thu Jul 16 15:28:28 2020 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Thu, 16 Jul 2020 14:28:28 +0000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org>, <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Message-ID: I can?t speak for you, but that would not be OK for me. We monitor the mmhealth command and it?s fairly inconvenient to have portions of it broken/have to be worked around on the alerts side rather than the GPFS side. I see others here have provided better solutions for that. -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' On Jul 16, 2020, at 09:33, Roland Schuemann wrote: ?Hi Stef, we already recognized this error too and opened a PMR/Case at IBM. You can set this task to inactive, but this is not persistent. After gui restart it comes again. This was the answer from IBM Support. This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. Is this OK for you? So we ignore (Gui is still degraded) it and wait for the fix. 
Kind regards Roland Sch?mann Freundliche Gr??e / Kind regards Roland Sch?mann ____________________________________________ Roland Sch?mann Infrastructure Engineering (BTE) CIO PB Germany Deutsche Bank I Technology, Data and Innovation Postbank Systems AG -----Urspr?ngliche Nachricht----- Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene Gesendet: Donnerstag, 16. Juli 2020 15:14 An: gpfsug main discussion list Betreff: [gpfsug-discuss] GUI refresh task error Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stef.coene at docum.org Thu Jul 16 14:47:18 2020 From: stef.coene at docum.org (Stef Coene) Date: Thu, 16 Jul 2020 15:47:18 +0200 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Message-ID: Ok, thanx for the answer. I will wait for the fix. Stef On 2020-07-16 15:25, Roland Schuemann wrote: > Hi Stef, > > we already recognized this error too and opened a PMR/Case at IBM. > You can set this task to inactive, but this is not persistent. After gui restart it comes again. > > This was the answer from IBM Support. >>>>>>>>>>>>>>>>>> > This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. > Is this OK for you? >>>>>>>>>>>>>>>>>> > > So we ignore (Gui is still degraded) it and wait for the fix. > > Kind regards > Roland Sch?mann > > > Freundliche Gr??e / Kind regards > Roland Sch?mann > > ____________________________________________ > > Roland Sch?mann > Infrastructure Engineering (BTE) > CIO PB Germany > > Deutsche Bank I Technology, Data and Innovation > Postbank Systems AG > > > -----Urspr?ngliche Nachricht----- > Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene > Gesendet: Donnerstag, 16. Juli 2020 15:14 > An: gpfsug main discussion list > Betreff: [gpfsug-discuss] GUI refresh task error > > Hi, > > On brand new 5.0.5 cluster we have the following errors on all nodes: > "The following GUI refresh task(s) failed: WATCHFOLDER" > > It also says > "Failure reason: Command mmwatch all functional --list-clustered-status > failed" > > Running mmwatch manually gives: > mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. > mmwatch: Command failed. Examine previous error messages to determine cause. > > How can I get rid of this error? > > I tried to disable the task with: > chtask WATCHFOLDER --inactive > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > Stef > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. > > Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. > > The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. 
> > Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From scale at us.ibm.com Fri Jul 17 18:34:36 2020 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 17 Jul 2020 23:04:36 +0530 Subject: [gpfsug-discuss] rsync NFS4 ACLs In-Reply-To: References: Message-ID: Hi Jan-Frode, Do you have a specific question on this or is this sent just for informing others. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Jan-Frode Myklebust To: gpfsug main discussion list Date: 15-07-2020 08.44 PM Subject: [EXTERNAL] [gpfsug-discuss] rsync NFS4 ACLs Sent by: gpfsug-discuss-bounces at spectrumscale.org It looks like the old NFS4 ACL patch for rsync is no longer needed. Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=GEVuZDFyUhFpvxxYM6W6ts3YvduD9Vu6oIQPJFta6eo&s=MydZiOHO7AFkY1MRBL5kY5vFGTeCYvzJBwMt-14T-8Y&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From janfrode at tanso.net Fri Jul 17 20:50:53 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 17 Jul 2020 21:50:53 +0200 Subject: [gpfsug-discuss] rsync NFS4 ACLs In-Reply-To: References: Message-ID: It was sent to inform others. Meant to write a bit more, but mistakingly hit send too soon :-) So, again. Starting with rsync v3.2.0 and backported to v3.1.2-9 in RHEL7, it now handles NFS4 ACLs on GPFS. The syntax to get it working is: rsync -X --filter '-x system.posix_acl' And it works on at least v3.5 filesystems and later. Didn?t try earlier than v3.5. -jf fre. 17. jul. 2020 kl. 
20:31 skrev IBM Spectrum Scale : > Hi Jan-Frode, > > Do you have a specific question on this or is this sent just for informing > others. > > Regards, The Spectrum Scale (GPFS) team > > > ------------------------------------------------------------------------------------------------------------------ > If you feel that your question can benefit other users of Spectrum Scale > (GPFS), then please post it to the public IBM developerWroks Forum at > https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. > > > If your query concerns a potential software error in Spectrum Scale (GPFS) > and you have an IBM software maintenance contract please contact > 1-800-237-5511 in the United States or your local IBM Service Center in > other countries. > > The forum is informally monitored as time permits and should not be used > for priority messages to the Spectrum Scale (GPFS) team. > > [image: Inactive hide details for Jan-Frode Myklebust ---15-07-2020 > 08.44.49 PM---It looks like the old NFS4 ACL patch for rsync is no]Jan-Frode > Myklebust ---15-07-2020 08.44.49 PM---It looks like the old NFS4 ACL patch > for rsync is no longer needed. Starting with rsync-3.2.0 (and b > > > > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Date: 15-07-2020 08.44 PM > Subject: [EXTERNAL] [gpfsug-discuss] rsync NFS4 ACLs > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > It looks like the old NFS4 ACL patch for rsync is no longer needed. > Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it > will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: > > rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From chair at spectrumscale.org Tue Jul 21 09:03:34 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 21 Jul 2020 09:03:34 +0100 Subject: [gpfsug-discuss] https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-strategy-update/ Message-ID: <> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 1949 bytes Desc: not available URL: From joe at excelero.com Tue Jul 21 13:42:19 2020 From: joe at excelero.com (joe at excelero.com) Date: Tue, 21 Jul 2020 07:42:19 -0500 Subject: [gpfsug-discuss] Accepted: gpfsug-discuss Digest, Vol 102, Issue 9 Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reply.ics Type: application/ics Size: 0 bytes Desc: not available URL: From carlz at us.ibm.com Tue Jul 21 16:36:46 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Tue, 21 Jul 2020 15:36:46 +0000 Subject: [gpfsug-discuss] Quick survey on PTF frequency Message-ID: <5381ACF7-252C-4F1A-903A-5D9B79A71E3C@us.ibm.com> Folks, We?re gathering some data on how people consume PTFs for Scale. There is a very brief survey online, and we?d appreciate all responses. No identifying information is collected. 
Survey: https://www.surveygizmo.com/s3/5727746/47520248d614 Thanks, Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_884492198] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From carlz at us.ibm.com Wed Jul 22 22:13:25 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 22 Jul 2020 21:13:25 +0000 Subject: [gpfsug-discuss] Developer Edition upgraded to 5.0.5.1 Message-ID: Developer Edition 5.0.5.1 is now available for download Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_647541561] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From prasad.surampudi at theatsgroup.com Thu Jul 23 01:34:02 2020 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Thu, 23 Jul 2020 00:34:02 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: 1. What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? 2. If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? 3. Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). 4. We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Thu Jul 23 08:09:17 2020 From: YARD at il.ibm.com (Yaron Daniel) Date: Thu, 23 Jul 2020 10:09:17 +0300 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: Message-ID: Hi What is the output for: #mmlsconfig |grep -i verbs #ibstat Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? 
IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 07/23/2020 03:34 AM Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8&s=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: image/jpeg
Size: 3855 bytes
Desc: not available
URL:

From stockf at us.ibm.com Thu Jul 23 12:14:57 2020
From: stockf at us.ibm.com (Frederick Stock)
Date: Thu, 23 Jul 2020 11:14:57 +0000
Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA
In-Reply-To:
References:
Message-ID:

An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: From prasad.surampudi at theatsgroup.com Thu Jul 23 14:33:13 2020 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Thu, 23 Jul 2020 13:33:13 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 102, Issue 12 In-Reply-To: References: Message-ID: Hi Yaron, Please see the outputs of mmlsconfig and ibstat below: sudo /usr/lpp/mmfs/bin/mmlsconfig |grep -i verbs verbsRdmasPerNode 192 verbsRdma enable verbsRdmaSend yes verbsRdmasPerConnection 48 verbsRdmasPerConnection 16 verbsPorts mlx5_4/1/1 mlx5_5/1/2 verbsPorts mlx4_0/1/0 mlx4_0/2/0 verbsPorts mlx5_0/1/1 mlx5_1/1/2 verbsPorts mlx5_0/1/1 mlx5_2/1/2 verbsPorts mlx5_2/1/1 mlx5_3/1/2 ?ibstat output on NSD server: CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0x506b4b03000fdb74 System image GUID: 0x506b4b03000fdb74 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb74 Link layer: Ethernet CA 'mlx5_1' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0x506b4b03000fdb75 System image GUID: 0x506b4b03000fdb74 Port 1: State: Down Physical state: Disabled Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb75 Link layer: Ethernet CA 'mlx5_2' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300a7e928 System image GUID: 0xec0d9a0300a7e928 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb74 Link layer: Ethernet CA 'mlx5_3' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300a7e929 System image GUID: 0xec0d9a0300a7e928 Port 1: State: Down Physical state: Disabled Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0xee0d9afffea7e929 Link layer: Ethernet CA 'mlx5_4' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300da5f92 System image GUID: 0xec0d9a0300da5f92 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 13 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xec0d9a0300da5f92 Link layer: InfiniBand CA 'mlx5_5' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300da5f93 System image GUID: 0xec0d9a0300da5f92 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 6 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xec0d9a0300da5f93 Link layer: InfiniBand ?ibstat output on CES server: CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.22.4030 Hardware version: 0 Node GUID: 0xb88303ffff5ec6ec System image GUID: 0xb88303ffff5ec6ec Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 9 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xb88303ffff5ec6ec Link layer: InfiniBand CA 'mlx5_1' CA type: MT4115 Number of ports: 1 Firmware version: 12.22.4030 Hardware version: 0 Node GUID: 0xb88303ffff5ec6ed System image GUID: 0xb88303ffff5ec6ec Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 12 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xb88303ffff5ec6ed Link layer: InfiniBand Prasad Surampudi|Sr. 
Systems Engineer|ATS Group, LLC ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Thursday, July 23, 2020 3:09 AM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 102, Issue 12 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Spectrum Scale pagepool size with RDMA (Prasad Surampudi) 2. Re: Spectrum Scale pagepool size with RDMA (Yaron Daniel) ---------------------------------------------------------------------- Message: 1 Date: Thu, 23 Jul 2020 00:34:02 +0000 From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: 1. What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? 2. If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? 3. Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). 4. We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 23 Jul 2020 10:09:17 +0300 From: "Yaron Daniel" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi What is the output for: #mmlsconfig |grep -i verbs #ibstat Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 07/23/2020 03:34 AM Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We have an ESS clusters with two CES nodes. 
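For readers following the pagepool and RDMA questions above, a minimal sketch of how the relevant settings can be inspected on a node follows. The command paths assume a standard /usr/lpp/mmfs/bin installation; the modprobe line is only an illustration of where log_num_mtt would be set, and it applies to mlx4-generation (ConnectX-3) HCAs, not to the mlx5 cards shown in the ibstat output above.

  # Compare the configured pagepool and verbs settings against physical memory
  /usr/lpp/mmfs/bin/mmlsconfig pagepool
  /usr/lpp/mmfs/bin/mmlsconfig | grep -i verbs
  /usr/lpp/mmfs/bin/mmdiag --memory      # mmfsd memory pools, including pagepool
  free -g                                # physical RAM for comparison
  # log_num_mtt / log_mtts_per_seg are mlx4_core module parameters (ConnectX-3 era)
  cat /sys/module/mlx4_core/parameters/log_num_mtt 2>/dev/null
  # Illustrative only; size per Mellanox guidance before applying:
  # echo "options mlx4_core log_num_mtt=20 log_mtts_per_seg=4" > /etc/modprobe.d/mlx4.conf
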
The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8&s=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 3776 bytes Desc: not available URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 102, Issue 12 *********************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Thu Jul 23 14:48:44 2020 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Thu, 23 Jul 2020 13:48:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_08807E58088078CC00274C4DC22585AE.gif Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E57D6F80E57D2E000274C4DC22585AE.gif Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57D9040E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DB100E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DD1C0E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DF280E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57E5AC0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E4480E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: From p.ward at nhm.ac.uk Thu Jul 2 13:00:41 2020 From: p.ward at nhm.ac.uk (Paul Ward) Date: Thu, 2 Jul 2020 12:00:41 +0000 Subject: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files In-Reply-To: References: Message-ID: Sorry a bit behind the discussion... We were using GPFS's internal TBD2 method for UID and GID assignment (15 years ago GPFS was purchased for a single purpose with a handful of accounts) I have just been through 88 million files ADDING NFSv4 ACEs with UIDs and GIDs derived from AD RIDs. We have both the TBD2 and AD RID ACEs in the ACLs. This allowed us to do a single switch over between the authentication methods for all the data at once. The testing and prep work took months though. We have Spectrum protect and SP Space management with a tape library in the mix, so I needed to make sure ACL changes didn't cause a backup and recall then backup for migrated files. My scripts made use of mmgetacl and mmputacl. I had less than 50 unique ACEs to construct and I created a spreadsheet that auto created the commands. 
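As a rough illustration of the kind of mmgetacl/mmputacl wrapper being described here, a stripped-down sketch is shown below. It is not the original script: the real one carried many more safety checks, and the ACE file, target path and temp-file handling are hypothetical placeholders. On an HSM-managed file system, as noted above, it is worth confirming first that ACL updates do not trigger recalls of migrated files.

  #!/bin/ksh
  # Sketch: append a prepared block of NFSv4 ACEs (built externally, e.g. from
  # the spreadsheet mentioned above) to the existing ACL of everything under a
  # target path. ACE_FILE and TARGET are hypothetical names.
  ACE_FILE=/root/extra_aces.txt       # pre-built ACE lines for one UID/GID mapping
  TARGET=/gpfs/data/some_fileset      # hypothetical path
  find "$TARGET" -print | while IFS= read -r f
  do
      tmp=$(mktemp)
      /usr/lpp/mmfs/bin/mmgetacl -k nfs4 -o "$tmp" "$f" || { rm -f "$tmp"; continue; }
      cat "$ACE_FILE" >> "$tmp"
      /usr/lpp/mmfs/bin/mmputacl -i "$tmp" "$f"
      rm -f "$tmp"
  done
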
This could have been automated, but for that number it was just as quick for me to do by hand than learn to program it. I wrote my own scripts, with a lot of safety checks, as it went AWOL at one point and started changing permissions at the root for the GPFS file system, removing access for everyone. We had a mix of posix only and nfsv4 ACLs. Testing them revealed a lot of skeletons in the way some systems had been set up - allow a lot of time for unknowns if you have systems using GPFS as a back end. Some way into it to this, I discovered IBM have created code to do this - I didn't keep the link as it was too late for me. The switch over went seamlessly btw, it had to with all the prep work! Kindest regards, Paul Paul Ward TS Infrastructure Architect Natural History Museum T: 02079426450 E: p.ward at nhm.ac.uk [A picture containing drawing Description automatically generated] From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Lohit Valleru Sent: 08 June 2020 18:44 To: gpfsug main discussion list Subject: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files Hello Everyone, We are planning to migrate from LDAP to AD, and one of the best solution was to change the uidNumber and gidNumber to what SSSD or Centrify would resolve. May I know, if anyone has come across a tool/tools that can change the uidNumbers and gidNumbers of billions of files efficiently and in a reliable manner? We could spend some time to write a custom script, but wanted to know if a tool already exists. Please do let me know, if any one else has come across a similar situation, and the steps/tools used to resolve the same. Regards, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 5356 bytes Desc: image001.jpg URL: From damir.krstic at gmail.com Tue Jul 7 14:37:46 2020 From: damir.krstic at gmail.com (Damir Krstic) Date: Tue, 7 Jul 2020 08:37:46 -0500 Subject: [gpfsug-discuss] dependent versus independent filesets Message-ID: We are deploying our new ESS and are considering moving to independent filesets. The snapshot per fileset feature appeals to us. Has anyone considered independent vs. dependent filesets and what was your reasoning to go with one as opposed to the other? Or perhaps you opted to have both on your filesystem, and if, what was the reasoning for it? Thank you. Damir -------------- next part -------------- An HTML attachment was scrubbed... URL: From skylar2 at uw.edu Tue Jul 7 14:59:58 2020 From: skylar2 at uw.edu (Skylar Thompson) Date: Tue, 7 Jul 2020 06:59:58 -0700 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: Message-ID: <20200707135958.leqp3q6f3rbtslji@illuin> We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > We are deploying our new ESS and are considering moving to independent > filesets. The snapshot per fileset feature appeals to us. > > Has anyone considered independent vs. dependent filesets and what was your > reasoning to go with one as opposed to the other? Or perhaps you opted to > have both on your filesystem, and if, what was the reasoning for it? > > Thank you. 
> Damir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His From chair at spectrumscale.org Tue Jul 7 15:52:19 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 07 Jul 2020 15:52:19 +0100 Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Message-ID: <1D2B20FD-257E-49C3-9D24-C63978758ED0@spectrumscale.org> Hi All, The next talk in the SSUG:: Digital series is taking place on Monday 13th July at 4pm BST. (Other time-zones are listed on the website!) Speaker: Lindsay Todd Topic: Best Practices for building a stretched cluster More details at: https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/ (The next one after that will be 27th July) Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ewahl at osc.edu Tue Jul 7 15:44:16 2020 From: ewahl at osc.edu (Wahl, Edward) Date: Tue, 7 Jul 2020 14:44:16 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: <20200707135958.leqp3q6f3rbtslji@illuin> References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: We also went with independent filesets for both backup (and quota) reasons for several years now, and have stuck with this across to 5.x. However we still maintain a minor number of dependent filesets for administrative use. Being able to mmbackup on many filesets at once can increase your parallelization _quite_ nicely! We create and delete the individual snaps before and after each backup, as you may expect. Just be aware that if you do massive numbers of fast snapshot deletes and creates you WILL reach a point where you will run into issues due to quiescing compute clients, and that certain types of workloads have issues with snapshotting in general. You have to more closely watch what you pre-allocate, and what you have left in the common metadata/inode pool. Once allocated, even if not being used, you cannot reduce the inode allocation without removing the fileset and re-creating. (say a fileset user had 5 million inodes and now only needs 500,000) Growth can also be an issue if you do NOT fully pre-allocate each space. This can be scary if you are not used to over-subscription in general. But I imagine that most sites have some decent % of oversubscription if they use filesets and quotas. Ed OSC -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Skylar Thompson Sent: Tuesday, July 7, 2020 10:00 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] dependent versus independent filesets We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > We are deploying our new ESS and are considering moving to independent > filesets. The snapshot per fileset feature appeals to us. > > Has anyone considered independent vs. dependent filesets and what was > your reasoning to go with one as opposed to the other? Or perhaps you > opted to have both on your filesystem, and if, what was the reasoning for it? > > Thank you. 
> Damir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug- > discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVY > vcGNh4M_no$ -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVYvcGNh4M_no$ From skylar2 at uw.edu Tue Jul 7 17:07:07 2020 From: skylar2 at uw.edu (Skylar Thompson) Date: Tue, 7 Jul 2020 09:07:07 -0700 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: <20200707160707.mk5e5hfspn7d6vnq@illuin> Ah, yes, I forgot about the quota rationale; we use independent filesets for that as well. We have run into confusion with inodes as one has to be careful to allocate inodes /and/ adjust a quota to expand a fileset. IIRC GPFS generates ENOSPC if it actually runs out of inodes, and EDQUOT if it hits a quota. We've also run into the quiescing issue but have been able to workaround it for now by increasing the splay between the different schedules. On Tue, Jul 07, 2020 at 02:44:16PM +0000, Wahl, Edward wrote: > We also went with independent filesets for both backup (and quota) reasons for several years now, and have stuck with this across to 5.x. However we still maintain a minor number of dependent filesets for administrative use. Being able to mmbackup on many filesets at once can increase your parallelization _quite_ nicely! We create and delete the individual snaps before and after each backup, as you may expect. Just be aware that if you do massive numbers of fast snapshot deletes and creates you WILL reach a point where you will run into issues due to quiescing compute clients, and that certain types of workloads have issues with snapshotting in general. > > You have to more closely watch what you pre-allocate, and what you have left in the common metadata/inode pool. Once allocated, even if not being used, you cannot reduce the inode allocation without removing the fileset and re-creating. (say a fileset user had 5 million inodes and now only needs 500,000) > > Growth can also be an issue if you do NOT fully pre-allocate each space. This can be scary if you are not used to over-subscription in general. But I imagine that most sites have some decent % of oversubscription if they use filesets and quotas. > > Ed > OSC > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Skylar Thompson > Sent: Tuesday, July 7, 2020 10:00 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] dependent versus independent filesets > > We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. > > On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > > We are deploying our new ESS and are considering moving to independent > > filesets. The snapshot per fileset feature appeals to us. > > > > Has anyone considered independent vs. dependent filesets and what was > > your reasoning to go with one as opposed to the other? 
Or perhaps you > > opted to have both on your filesystem, and if, what was the reasoning for it? > > > > Thank you. > > Damir > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug- > > discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVY > > vcGNh4M_no$ > > > -- > -- Skylar Thompson (skylar2 at u.washington.edu) > -- Genome Sciences Department (UW Medicine), System Administrator > -- Foege Building S046, (206)-685-7354 > -- Pronouns: He/Him/His > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVYvcGNh4M_no$ > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His From stockf at us.ibm.com Tue Jul 7 17:25:27 2020 From: stockf at us.ibm.com (Frederick Stock) Date: Tue, 7 Jul 2020 16:25:27 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: , <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Tue Jul 7 19:19:51 2020 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Tue, 7 Jul 2020 18:19:51 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: , , <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: An HTML attachment was scrubbed... URL: From leslie.james.elliott at gmail.com Wed Jul 8 00:19:20 2020 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Wed, 8 Jul 2020 09:19:20 +1000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: as long as your currently do not need more than 1000 on a filesystem On Wed, 8 Jul 2020 at 04:20, Daniel Kidger wrote: > It is worth noting that Independent Filesets are a relatively recent > addition to Spectrum Scale, compared to Dependant Filesets. They havesolved > some of the limitations of the former. > > > My view would be to always use Independent FIlesets unless there is a > particular reason to use Dependant ones. > > Daniel > > _________________________________________________________ > *Daniel Kidger Ph.D.* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum Discover and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "Frederick Stock" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Cc: gpfsug-discuss at spectrumscale.org > Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent > filesets > Date: Tue, Jul 7, 2020 17:25 > > One comment about inode preallocation. There was a time when inode > creation was performance challenged but in my opinion that is no longer the > case, unless you have need for file creates to complete at extreme speed. > In my experience it is the rare customer that requires extremely fast file > create times so pre-allocation is not truly necessary. 
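To make the fileset mechanics discussed in this thread concrete, a sketch of the commands involved is shown below. The file system name, fileset name, paths and limits are made-up examples, and the snapshot and backup invocations in particular should be checked against the mmcrsnapshot and mmbackup documentation for the release in use.

  # Create an independent fileset with its own inode space; the inode limit
  # takes the form max[:preallocated], and allocated inodes cannot later be shrunk
  mmcrfileset fs1 projA --inode-space new --inode-limit 5000000:1000000
  mmlinkfileset fs1 projA -J /gpfs/fs1/projA
  # Fileset quota (block and file limits are illustrative values)
  mmsetquota fs1:projA --block 10T:12T --files 4000000:5000000
  # Watch allocated vs. maximum inodes so the fileset does not hit ENOSPC
  mmlsfileset fs1 projA -L
  # Raise the limit later if required
  mmchfileset fs1 projA --inode-limit 8000000
  # Per-fileset snapshot plus mmbackup, as described above (fileset-level
  # snapshots assumed to use the fileset:snapshot form)
  mmcrsnapshot fs1 projA:backup01
  mmbackup /gpfs/fs1/projA --scope inodespace -S backup01 -t incremental
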
As was noted once > an inode is allocated it cannot be deallocated. The more important item is > the maximum inodes defined for a fileset or file system. Yes, those do > need to be monitored so they can be increased if necessary to avoid out of > space errors. > > Fred > __________________________________________________ > Fred Stock | IBM Pittsburgh Lab | 720-430-8821 > stockf at us.ibm.com > > > > ----- Original message ----- > From: "Wahl, Edward" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent > filesets > Date: Tue, Jul 7, 2020 11:59 AM > > We also went with independent filesets for both backup (and quota) reasons > for several years now, and have stuck with this across to 5.x. However we > still maintain a minor number of dependent filesets for administrative use. > Being able to mmbackup on many filesets at once can increase your > parallelization _quite_ nicely! We create and delete the individual snaps > before and after each backup, as you may expect. Just be aware that if you > do massive numbers of fast snapshot deletes and creates you WILL reach a > point where you will run into issues due to quiescing compute clients, and > that certain types of workloads have issues with snapshotting in general. > > You have to more closely watch what you pre-allocate, and what you have > left in the common metadata/inode pool. Once allocated, even if not being > used, you cannot reduce the inode allocation without removing the fileset > and re-creating. (say a fileset user had 5 million inodes and now only > needs 500,000) > > Growth can also be an issue if you do NOT fully pre-allocate each space. > This can be scary if you are not used to over-subscription in general. But > I imagine that most sites have some decent % of oversubscription if they > use filesets and quotas. > > Ed > OSC > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Skylar Thompson > Sent: Tuesday, July 7, 2020 10:00 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] dependent versus independent filesets > > We wanted to be able to snapshot and backup filesets separately with > mmbackup, so went with independent filesets. > > On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > > We are deploying our new ESS and are considering moving to independent > > filesets. The snapshot per fileset feature appeals to us. > > > > Has anyone considered independent vs. dependent filesets and what was > > your reasoning to go with one as opposed to the other? Or perhaps you > > opted to have both on your filesystem, and if, what was the reasoning > for it? > > > > Thank you. 
> > Damir > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug- > > discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVY > > vcGNh4M_no$ > > > -- > -- Skylar Thompson (skylar2 at u.washington.edu) > -- Genome Sciences Department (UW Medicine), System Administrator > -- Foege Building S046, (206)-685-7354 > -- Pronouns: He/Him/His > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!KGKeukY!j-c9kslUrEaNslhTbLLfaY8TES7Xf4eUCxysOaXwroHhTMwiVYvcGNh4M_no$ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Fri Jul 10 09:28:48 2020 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Fri, 10 Jul 2020 08:28:48 +0000 Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Message-ID: Just a reminder that the next talk is on Monday. For some technical reasons, the link to join the event has changed, so if you?d added a calendar event with the link already, please update it to: https://ibm.webex.com/ibm/onstage/g.php?MTID=ed52933f6b6a9eee6d980d1a0807a8e5a The SSUG website has also been updated with the new event link already. Simon From: on behalf of "chair at spectrumscale.org" Reply to: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 7 July 2020 at 15:52 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Hi All, The next talk in the SSUG:: Digital series is taking place on Monday 13th July at 4pm BST. (Other time-zones are listed on the website!) Speaker: Lindsay Todd Topic: Best Practices for building a stretched cluster More details at: https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/ (The next one after that will be 27th July) Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Jul 15 16:15:21 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 15 Jul 2020 17:15:21 +0200 Subject: [gpfsug-discuss] rsync NFS4 ACLs Message-ID: It looks like the old NFS4 ACL patch for rsync is no longer needed. Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl -------------- next part -------------- An HTML attachment was scrubbed... 
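A possible way to apply this when migrating a whole directory tree is sketched below; it assumes rsync 3.1.2-9 or 3.2.0 and later on the node doing the copy, and the source and destination paths are placeholders.

  # Recursive copy that carries GPFS NFSv4 ACLs across via xattrs while
  # excluding the POSIX ACL xattr, per the filter described above
  rsync -aX --filter '-x system.posix_acl' /gpfs/fs1/projA/ /gpfs/fs2/projA/

Spot-checking a few copied files with mmgetacl afterwards is a cheap way to confirm the ACEs arrived intact.
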
URL: From stef.coene at docum.org Thu Jul 16 14:13:44 2020 From: stef.coene at docum.org (Stef Coene) Date: Thu, 16 Jul 2020 15:13:44 +0200 Subject: [gpfsug-discuss] GUI refresh task error Message-ID: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef From roland.schuemann at postbank.de Thu Jul 16 14:25:49 2020 From: roland.schuemann at postbank.de (Roland Schuemann) Date: Thu, 16 Jul 2020 13:25:49 +0000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Hi Stef, we already recognized this error too and opened a PMR/Case at IBM. You can set this task to inactive, but this is not persistent. After gui restart it comes again. This was the answer from IBM Support. >>>>>>>>>>>>>>>>> This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. Is this OK for you? >>>>>>>>>>>>>>>>> So we ignore (Gui is still degraded) it and wait for the fix. Kind regards Roland Sch?mann Freundliche Gr??e / Kind regards Roland Sch?mann ____________________________________________ Roland Sch?mann Infrastructure Engineering (BTE) CIO PB Germany Deutsche Bank I Technology, Data and Innovation Postbank Systems AG -----Urspr?ngliche Nachricht----- Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene Gesendet: Donnerstag, 16. Juli 2020 15:14 An: gpfsug main discussion list Betreff: [gpfsug-discuss] GUI refresh task error Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. 
Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. From Achim.Rehor at de.ibm.com Thu Jul 16 14:44:34 2020 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 16 Jul 2020 15:44:34 +0200 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: you may want to replace the mmwatch command with a simple script like this #!/bin/ksh echo "dummy for removing gui error" exit 0 or install either 5.1.0.0 or 5.0.5.2 (when it gets available ..) Mit freundlichen Gr??en / Kind regards Achim Rehor Remote Technical Support Engineer Storage IBM Systems Storage Support - EMEA Storage Competence Center (ESCC) Spectrum Scale / Elastic Storage Server ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49-170-4521194 E-Mail: Achim.Rehor at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Sebastian Krause Gesch?ftsf?hrung: Gregor Pillen (Vorsitzender), Agnes Heftberger, Norbert Janzen, Markus Koerner, Christian Noll, Nicole Reimer Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 gpfsug-discuss-bounces at spectrumscale.org wrote on 16/07/2020 15:13:44: > From: Stef Coene > To: gpfsug main discussion list > Date: 16/07/2020 15:18 > Subject: [EXTERNAL] [gpfsug-discuss] GUI refresh task error > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > Hi, > > On brand new 5.0.5 cluster we have the following errors on all nodes: > "The following GUI refresh task(s) failed: WATCHFOLDER" > > It also says > "Failure reason: Command mmwatch all functional --list-clustered-status > failed" > > Running mmwatch manually gives: > mmwatch: The Clustered Watch Folder function is only available in the > IBM Spectrum Scale Advanced Edition > or the Data Management Edition. > mmwatch: Command failed. Examine previous error messages to determine cause. 
> > How can I get rid of this error? > > I tried to disable the task with: > chtask WATCHFOLDER --inactive > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > Stef > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url? > u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx- > siA1ZOg&r=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2- > M&m=PJdg0uy6rMRzbLqiuOb4e2gUtNwAojhPfAupgPOi2nA&s=x3-02anMU4TcV4bTGAZoNJ8CvIfbqLXQJqyBpeyHuUk&e= > From macthev at gmail.com Thu Jul 16 15:13:31 2020 From: macthev at gmail.com (dale mac) Date: Fri, 17 Jul 2020 00:13:31 +1000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: On Thu, 16 Jul 2020 at 23:44, Achim Rehor wrote: > you may want to replace the mmwatch command with a simple script like this > > > #!/bin/ksh > echo "dummy for removing gui error" > exit 0 > > or install either 5.1.0.0 or 5.0.5.2 (when it gets available ..) > > > Mit freundlichen Gr??en / Kind regards > > Achim Rehor > > Remote Technical Support Engineer Storage > IBM Systems Storage Support - EMEA Storage Competence Center (ESCC) > Spectrum Scale / Elastic Storage Server > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49-170-4521194 > E-Mail: Achim.Rehor at de.ibm.com > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Sebastian Krause > Gesch?ftsf?hrung: Gregor Pillen (Vorsitzender), Agnes Heftberger, Norbert > Janzen, Markus Koerner, Christian Noll, Nicole Reimer > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, > HRB 14562 / WEEE-Reg.-Nr. DE 99369940 > > > gpfsug-discuss-bounces at spectrumscale.org wrote on 16/07/2020 15:13:44: > > > From: Stef Coene > > To: gpfsug main discussion list > > Date: 16/07/2020 15:18 > > Subject: [EXTERNAL] [gpfsug-discuss] GUI refresh task error > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi, > > > > On brand new 5.0.5 cluster we have the following errors on all nodes: > > "The following GUI refresh task(s) failed: WATCHFOLDER" > > > > It also says > > "Failure reason: Command mmwatch all functional > --list-clustered-status > > failed" > > > > Running mmwatch manually gives: > > mmwatch: The Clustered Watch Folder function is only available in the > > IBM Spectrum Scale Advanced Edition > > or the Data Management Edition. > > mmwatch: Command failed. Examine previous error messages to determine > cause. > > > > How can I get rid of this error? > > > > I tried to disable the task with: > > chtask WATCHFOLDER --inactive > > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > > > > Stef > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.proofpoint.com/v2/url? 
> > > > u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx- > > siA1ZOg&r=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2- > > > > M&m=PJdg0uy6rMRzbLqiuOb4e2gUtNwAojhPfAupgPOi2nA&s=x3-02anMU4TcV4bTGAZoNJ8CvIfbqLXQJqyBpeyHuUk&e= > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Regards Dale -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Thu Jul 16 15:28:28 2020 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Thu, 16 Jul 2020 14:28:28 +0000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org>, <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Message-ID: I can?t speak for you, but that would not be OK for me. We monitor the mmhealth command and it?s fairly inconvenient to have portions of it broken/have to be worked around on the alerts side rather than the GPFS side. I see others here have provided better solutions for that. -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' On Jul 16, 2020, at 09:33, Roland Schuemann wrote: ?Hi Stef, we already recognized this error too and opened a PMR/Case at IBM. You can set this task to inactive, but this is not persistent. After gui restart it comes again. This was the answer from IBM Support. This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. Is this OK for you? So we ignore (Gui is still degraded) it and wait for the fix. Kind regards Roland Sch?mann Freundliche Gr??e / Kind regards Roland Sch?mann ____________________________________________ Roland Sch?mann Infrastructure Engineering (BTE) CIO PB Germany Deutsche Bank I Technology, Data and Innovation Postbank Systems AG -----Urspr?ngliche Nachricht----- Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene Gesendet: Donnerstag, 16. Juli 2020 15:14 An: gpfsug main discussion list Betreff: [gpfsug-discuss] GUI refresh task error Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. 
Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stef.coene at docum.org Thu Jul 16 14:47:18 2020 From: stef.coene at docum.org (Stef Coene) Date: Thu, 16 Jul 2020 15:47:18 +0200 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Message-ID: Ok, thanx for the answer. I will wait for the fix. Stef On 2020-07-16 15:25, Roland Schuemann wrote: > Hi Stef, > > we already recognized this error too and opened a PMR/Case at IBM. > You can set this task to inactive, but this is not persistent. After gui restart it comes again. > > This was the answer from IBM Support. >>>>>>>>>>>>>>>>>> > This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. > Is this OK for you? >>>>>>>>>>>>>>>>>> > > So we ignore (Gui is still degraded) it and wait for the fix. > > Kind regards > Roland Sch?mann > > > Freundliche Gr??e / Kind regards > Roland Sch?mann > > ____________________________________________ > > Roland Sch?mann > Infrastructure Engineering (BTE) > CIO PB Germany > > Deutsche Bank I Technology, Data and Innovation > Postbank Systems AG > > > -----Urspr?ngliche Nachricht----- > Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene > Gesendet: Donnerstag, 16. 
Juli 2020 15:14 > An: gpfsug main discussion list > Betreff: [gpfsug-discuss] GUI refresh task error > > Hi, > > On brand new 5.0.5 cluster we have the following errors on all nodes: > "The following GUI refresh task(s) failed: WATCHFOLDER" > > It also says > "Failure reason: Command mmwatch all functional --list-clustered-status > failed" > > Running mmwatch manually gives: > mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. > mmwatch: Command failed. Examine previous error messages to determine cause. > > How can I get rid of this error? > > I tried to disable the task with: > chtask WATCHFOLDER --inactive > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > Stef > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. > > Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. > > The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. > > Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From scale at us.ibm.com Fri Jul 17 18:34:36 2020 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 17 Jul 2020 23:04:36 +0530 Subject: [gpfsug-discuss] rsync NFS4 ACLs In-Reply-To: References: Message-ID: Hi Jan-Frode, Do you have a specific question on this or is this sent just for informing others. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. 
If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Jan-Frode Myklebust To: gpfsug main discussion list Date: 15-07-2020 08.44 PM Subject: [EXTERNAL] [gpfsug-discuss] rsync NFS4 ACLs Sent by: gpfsug-discuss-bounces at spectrumscale.org It looks like the old NFS4 ACL patch for rsync is no longer needed. Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=GEVuZDFyUhFpvxxYM6W6ts3YvduD9Vu6oIQPJFta6eo&s=MydZiOHO7AFkY1MRBL5kY5vFGTeCYvzJBwMt-14T-8Y&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From janfrode at tanso.net Fri Jul 17 20:50:53 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 17 Jul 2020 21:50:53 +0200 Subject: [gpfsug-discuss] rsync NFS4 ACLs In-Reply-To: References: Message-ID: It was sent to inform others. Meant to write a bit more, but mistakingly hit send too soon :-) So, again. Starting with rsync v3.2.0 and backported to v3.1.2-9 in RHEL7, it now handles NFS4 ACLs on GPFS. The syntax to get it working is: rsync -X --filter '-x system.posix_acl' And it works on at least v3.5 filesystems and later. Didn?t try earlier than v3.5. -jf fre. 17. jul. 2020 kl. 20:31 skrev IBM Spectrum Scale : > Hi Jan-Frode, > > Do you have a specific question on this or is this sent just for informing > others. > > Regards, The Spectrum Scale (GPFS) team > > > ------------------------------------------------------------------------------------------------------------------ > If you feel that your question can benefit other users of Spectrum Scale > (GPFS), then please post it to the public IBM developerWroks Forum at > https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. > > > If your query concerns a potential software error in Spectrum Scale (GPFS) > and you have an IBM software maintenance contract please contact > 1-800-237-5511 in the United States or your local IBM Service Center in > other countries. > > The forum is informally monitored as time permits and should not be used > for priority messages to the Spectrum Scale (GPFS) team. > > [image: Inactive hide details for Jan-Frode Myklebust ---15-07-2020 > 08.44.49 PM---It looks like the old NFS4 ACL patch for rsync is no]Jan-Frode > Myklebust ---15-07-2020 08.44.49 PM---It looks like the old NFS4 ACL patch > for rsync is no longer needed. 
Starting with rsync-3.2.0 (and b > > > > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Date: 15-07-2020 08.44 PM > Subject: [EXTERNAL] [gpfsug-discuss] rsync NFS4 ACLs > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > It looks like the old NFS4 ACL patch for rsync is no longer needed. > Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it > will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: > > rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From chair at spectrumscale.org Tue Jul 21 09:03:34 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 21 Jul 2020 09:03:34 +0100 Subject: [gpfsug-discuss] https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-strategy-update/ Message-ID: <> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 1949 bytes Desc: not available URL: From joe at excelero.com Tue Jul 21 13:42:19 2020 From: joe at excelero.com (joe at excelero.com) Date: Tue, 21 Jul 2020 07:42:19 -0500 Subject: [gpfsug-discuss] Accepted: gpfsug-discuss Digest, Vol 102, Issue 9 Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reply.ics Type: application/ics Size: 0 bytes Desc: not available URL: From carlz at us.ibm.com Tue Jul 21 16:36:46 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Tue, 21 Jul 2020 15:36:46 +0000 Subject: [gpfsug-discuss] Quick survey on PTF frequency Message-ID: <5381ACF7-252C-4F1A-903A-5D9B79A71E3C@us.ibm.com> Folks, We?re gathering some data on how people consume PTFs for Scale. There is a very brief survey online, and we?d appreciate all responses. No identifying information is collected. Survey: https://www.surveygizmo.com/s3/5727746/47520248d614 Thanks, Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_884492198] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From carlz at us.ibm.com Wed Jul 22 22:13:25 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 22 Jul 2020 21:13:25 +0000 Subject: [gpfsug-discuss] Developer Edition upgraded to 5.0.5.1 Message-ID: Developer Edition 5.0.5.1 is now available for download Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_647541561] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From prasad.surampudi at theatsgroup.com Thu Jul 23 01:34:02 2020 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Thu, 23 Jul 2020 00:34:02 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: 1. What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? 2. If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? 3. Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). 4. We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Thu Jul 23 08:09:17 2020 From: YARD at il.ibm.com (Yaron Daniel) Date: Thu, 23 Jul 2020 10:09:17 +0300 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: Message-ID: Hi What is the output for: #mmlsconfig |grep -i verbs #ibstat Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 07/23/2020 03:34 AM Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). 
We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8&s=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3776 bytes Desc: not available URL: From stockf at us.ibm.com Thu Jul 23 12:14:57 2020 From: stockf at us.ibm.com (Frederick Stock) Date: Thu, 23 Jul 2020 11:14:57 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_08807E58088078CC00274C4DC22585AE.gif Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E57D6F80E57D2E000274C4DC22585AE.gif Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57D9040E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image._2_0E57DB100E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DD1C0E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DF280E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57E5AC0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E4480E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: From prasad.surampudi at theatsgroup.com Thu Jul 23 14:33:13 2020 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Thu, 23 Jul 2020 13:33:13 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 102, Issue 12 In-Reply-To: References: Message-ID: Hi Yaron, Please see the outputs of mmlsconfig and ibstat below: sudo /usr/lpp/mmfs/bin/mmlsconfig |grep -i verbs verbsRdmasPerNode 192 verbsRdma enable verbsRdmaSend yes verbsRdmasPerConnection 48 verbsRdmasPerConnection 16 verbsPorts mlx5_4/1/1 mlx5_5/1/2 verbsPorts mlx4_0/1/0 mlx4_0/2/0 verbsPorts mlx5_0/1/1 mlx5_1/1/2 verbsPorts mlx5_0/1/1 mlx5_2/1/2 verbsPorts mlx5_2/1/1 mlx5_3/1/2 ?ibstat output on NSD server: CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0x506b4b03000fdb74 System image GUID: 0x506b4b03000fdb74 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb74 Link layer: Ethernet CA 'mlx5_1' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0x506b4b03000fdb75 System image GUID: 0x506b4b03000fdb74 Port 1: State: Down Physical state: Disabled Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb75 Link layer: Ethernet CA 'mlx5_2' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300a7e928 System image GUID: 0xec0d9a0300a7e928 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb74 Link layer: Ethernet CA 'mlx5_3' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300a7e929 System image GUID: 0xec0d9a0300a7e928 Port 1: State: Down Physical state: Disabled Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0xee0d9afffea7e929 Link layer: Ethernet CA 'mlx5_4' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300da5f92 
System image GUID: 0xec0d9a0300da5f92 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 13 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xec0d9a0300da5f92 Link layer: InfiniBand CA 'mlx5_5' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300da5f93 System image GUID: 0xec0d9a0300da5f92 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 6 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xec0d9a0300da5f93 Link layer: InfiniBand ?ibstat output on CES server: CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.22.4030 Hardware version: 0 Node GUID: 0xb88303ffff5ec6ec System image GUID: 0xb88303ffff5ec6ec Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 9 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xb88303ffff5ec6ec Link layer: InfiniBand CA 'mlx5_1' CA type: MT4115 Number of ports: 1 Firmware version: 12.22.4030 Hardware version: 0 Node GUID: 0xb88303ffff5ec6ed System image GUID: 0xb88303ffff5ec6ec Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 12 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xb88303ffff5ec6ed Link layer: InfiniBand Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Thursday, July 23, 2020 3:09 AM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 102, Issue 12 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Spectrum Scale pagepool size with RDMA (Prasad Surampudi) 2. Re: Spectrum Scale pagepool size with RDMA (Yaron Daniel) ---------------------------------------------------------------------- Message: 1 Date: Thu, 23 Jul 2020 00:34:02 +0000 From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: 1. What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? 2. If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? 3. Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). 4. 
We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 23 Jul 2020 10:09:17 +0300 From: "Yaron Daniel" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi What is the output for: #mmlsconfig |grep -i verbs #ibstat Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 07/23/2020 03:34 AM Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8&s=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 102, Issue 12 *********************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Thu Jul 23 14:48:44 2020 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Thu, 23 Jul 2020 13:48:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: , , Message-ID: An HTML attachment was scrubbed... URL:
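For reference, a minimal shell sketch of the checks discussed in the pagepool/RDMA thread above. It is not taken from any of the messages; it simply gathers the quoted commands (mmlsconfig, ibstat) in one place and adds mmdiag --memory plus the mmfsd resident set size so that the daemon's memory use can be compared against the configured pagepool. The /usr/lpp/mmfs/bin path matches the commands shown earlier in the thread; adjust to your installation.

#!/bin/bash
# Sketch: show pagepool and verbs settings, mmfsd memory use, and RDMA port
# state on the local node. Run as root on an NSD or CES node.
export PATH=$PATH:/usr/lpp/mmfs/bin

echo "== configured pagepool and verbs settings =="
mmlsconfig pagepool
mmlsconfig | grep -i verbs

echo "== mmfsd memory: resident set size (kB) and daemon memory breakdown =="
ps -C mmfsd -o rss=
mmdiag --memory

echo "== InfiniBand/RoCE port state (trimmed ibstat output) =="
ibstat | grep -E "CA '|State:|Physical state:|Rate:|Link layer:"

Comparing the mmfsd resident set size with the pagepool value shows the gap the thread is asking about, and the verbs settings together with the ibstat state/link-layer fields show which ports are actually usable for verbsRdma.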
Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_08807E58088078CC00274C4DC22585AE.gif Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E57D6F80E57D2E000274C4DC22585AE.gif Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57D9040E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DB100E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DD1C0E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DF280E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57E5AC0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E4480E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: From olaf.weiser at de.ibm.com Thu Jul 23 14:48:44 2020 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Thu, 23 Jul 2020 13:48:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: , , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_08807E58088078CC00274C4DC22585AE.gif Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E57D6F80E57D2E000274C4DC22585AE.gif Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57D9040E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image._2_0E57DB100E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DD1C0E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DF280E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57E5AC0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E4480E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_08807E58088078CC00274C4DC22585AE.gif Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E57D6F80E57D2E000274C4DC22585AE.gif Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57D9040E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DB100E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DD1C0E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DF280E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57E5AC0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E4480E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: From p.ward at nhm.ac.uk Thu Jul 2 13:00:41 2020 From: p.ward at nhm.ac.uk (Paul Ward) Date: Thu, 2 Jul 2020 12:00:41 +0000 Subject: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files In-Reply-To: References: Message-ID: Sorry a bit behind the discussion... We were using GPFS's internal TBD2 method for UID and GID assignment (15 years ago GPFS was purchased for a single purpose with a handful of accounts) I have just been through 88 million files ADDING NFSv4 ACEs with UIDs and GIDs derived from AD RIDs. We have both the TBD2 and AD RID ACEs in the ACLs. This allowed us to do a single switch over between the authentication methods for all the data at once. The testing and prep work took months though. We have Spectrum protect and SP Space management with a tape library in the mix, so I needed to make sure ACL changes didn't cause a backup and recall then backup for migrated files. My scripts made use of mmgetacl and mmputacl. I had less than 50 unique ACEs to construct and I created a spreadsheet that auto created the commands. This could have been automated, but for that number it was just as quick for me to do by hand than learn to program it. I wrote my own scripts, with a lot of safety checks, as it went AWOL at one point and started changing permissions at the root for the GPFS file system, removing access for everyone. We had a mix of posix only and nfsv4 ACLs. Testing them revealed a lot of skeletons in the way some systems had been set up - allow a lot of time for unknowns if you have systems using GPFS as a back end. Some way into it to this, I discovered IBM have created code to do this - I didn't keep the link as it was too late for me. The switch over went seamlessly btw, it had to with all the prep work! Kindest regards, Paul Paul Ward TS Infrastructure Architect Natural History Museum T: 02079426450 E: p.ward at nhm.ac.uk [A picture containing drawing Description automatically generated] From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Lohit Valleru Sent: 08 June 2020 18:44 To: gpfsug main discussion list Subject: [gpfsug-discuss] Change uidNumber and gidNumber for billions of files Hello Everyone, We are planning to migrate from LDAP to AD, and one of the best solution was to change the uidNumber and gidNumber to what SSSD or Centrify would resolve. May I know, if anyone has come across a tool/tools that can change the uidNumbers and gidNumbers of billions of files efficiently and in a reliable manner? We could spend some time to write a custom script, but wanted to know if a tool already exists. Please do let me know, if any one else has come across a similar situation, and the steps/tools used to resolve the same. Regards, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 5356 bytes Desc: image001.jpg URL: From damir.krstic at gmail.com Tue Jul 7 14:37:46 2020 From: damir.krstic at gmail.com (Damir Krstic) Date: Tue, 7 Jul 2020 08:37:46 -0500 Subject: [gpfsug-discuss] dependent versus independent filesets Message-ID: We are deploying our new ESS and are considering moving to independent filesets. The snapshot per fileset feature appeals to us. Has anyone considered independent vs. dependent filesets and what was your reasoning to go with one as opposed to the other? 
Or perhaps you opted to have both on your filesystem, and if, what was the reasoning for it? Thank you. Damir -------------- next part -------------- An HTML attachment was scrubbed... URL: From skylar2 at uw.edu Tue Jul 7 14:59:58 2020 From: skylar2 at uw.edu (Skylar Thompson) Date: Tue, 7 Jul 2020 06:59:58 -0700 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: References: Message-ID: <20200707135958.leqp3q6f3rbtslji@illuin> We wanted to be able to snapshot and backup filesets separately with mmbackup, so went with independent filesets. On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote: > We are deploying our new ESS and are considering moving to independent > filesets. The snapshot per fileset feature appeals to us. > > Has anyone considered independent vs. dependent filesets and what was your > reasoning to go with one as opposed to the other? Or perhaps you opted to > have both on your filesystem, and if, what was the reasoning for it? > > Thank you. > Damir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department (UW Medicine), System Administrator -- Foege Building S046, (206)-685-7354 -- Pronouns: He/Him/His From chair at spectrumscale.org Tue Jul 7 15:52:19 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 07 Jul 2020 15:52:19 +0100 Subject: [gpfsug-discuss] SSUG::Digital Talk 2 Message-ID: <1D2B20FD-257E-49C3-9D24-C63978758ED0@spectrumscale.org> Hi All, The next talk in the SSUG:: Digital series is taking place on Monday 13th July at 4pm BST. (Other time-zones are listed on the website!) Speaker: Lindsay Todd Topic: Best Practices for building a stretched cluster More details at: https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-best-practices-for-building-a-stretched-cluster/ (The next one after that will be 27th July) Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ewahl at osc.edu Tue Jul 7 15:44:16 2020 From: ewahl at osc.edu (Wahl, Edward) Date: Tue, 7 Jul 2020 14:44:16 +0000 Subject: [gpfsug-discuss] dependent versus independent filesets In-Reply-To: <20200707135958.leqp3q6f3rbtslji@illuin> References: <20200707135958.leqp3q6f3rbtslji@illuin> Message-ID: We also went with independent filesets for both backup (and quota) reasons for several years now, and have stuck with this across to 5.x. However we still maintain a minor number of dependent filesets for administrative use. Being able to mmbackup on many filesets at once can increase your parallelization _quite_ nicely! We create and delete the individual snaps before and after each backup, as you may expect. Just be aware that if you do massive numbers of fast snapshot deletes and creates you WILL reach a point where you will run into issues due to quiescing compute clients, and that certain types of workloads have issues with snapshotting in general. You have to more closely watch what you pre-allocate, and what you have left in the common metadata/inode pool. Once allocated, even if not being used, you cannot reduce the inode allocation without removing the fileset and re-creating. (say a fileset user had 5 million inodes and now only needs 500,000) Growth can also be an issue if you do NOT fully pre-allocate each space. 
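Similarly, for the WATCHFOLDER GUI refresh task error discussed earlier in this archive (harmless but leaving the GUI degraded until the 5.0.5.2 fix), a short sketch of how the symptom can be confirmed on a node. The mmwatch and chtask invocations are the ones quoted in the thread; treating GUI as an mmhealth component and the /usr/lpp/mmfs/gui/cli path for chtask are assumptions, so adjust to your installation.

#!/bin/bash
# Sketch: confirm that the degraded GUI state is only the cosmetic WATCHFOLDER
# refresh task failure (Clustered Watch Folder needs Advanced or Data Management
# Edition, so the task cannot succeed on other editions).
export PATH=$PATH:/usr/lpp/mmfs/bin

mmhealth node show GUI                             # assumption: lists the failed GUI refresh task event, if any
mmwatch all functional --list-clustered-status     # expected to fail here, as reported in the thread

# Attempt quoted in the thread; it is not persistent across GUI restarts and may
# return EFSSG1811C if the task is already unscheduled (path is an assumption):
/usr/lpp/mmfs/gui/cli/chtask WATCHFOLDER --inactive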
Stef _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. From Achim.Rehor at de.ibm.com Thu Jul 16 14:44:34 2020 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 16 Jul 2020 15:44:34 +0200 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: you may want to replace the mmwatch command with a simple script like this #!/bin/ksh echo "dummy for removing gui error" exit 0 or install either 5.1.0.0 or 5.0.5.2 (when it gets available ..) Mit freundlichen Gr??en / Kind regards Achim Rehor Remote Technical Support Engineer Storage IBM Systems Storage Support - EMEA Storage Competence Center (ESCC) Spectrum Scale / Elastic Storage Server ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49-170-4521194 E-Mail: Achim.Rehor at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Sebastian Krause Gesch?ftsf?hrung: Gregor Pillen (Vorsitzender), Agnes Heftberger, Norbert Janzen, Markus Koerner, Christian Noll, Nicole Reimer Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. 
DE 99369940 gpfsug-discuss-bounces at spectrumscale.org wrote on 16/07/2020 15:13:44: > From: Stef Coene > To: gpfsug main discussion list > Date: 16/07/2020 15:18 > Subject: [EXTERNAL] [gpfsug-discuss] GUI refresh task error > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > Hi, > > On brand new 5.0.5 cluster we have the following errors on all nodes: > "The following GUI refresh task(s) failed: WATCHFOLDER" > > It also says > "Failure reason: Command mmwatch all functional --list-clustered-status > failed" > > Running mmwatch manually gives: > mmwatch: The Clustered Watch Folder function is only available in the > IBM Spectrum Scale Advanced Edition > or the Data Management Edition. > mmwatch: Command failed. Examine previous error messages to determine cause. > > How can I get rid of this error? > > I tried to disable the task with: > chtask WATCHFOLDER --inactive > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > Stef > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url? > u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx- > siA1ZOg&r=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2- > M&m=PJdg0uy6rMRzbLqiuOb4e2gUtNwAojhPfAupgPOi2nA&s=x3-02anMU4TcV4bTGAZoNJ8CvIfbqLXQJqyBpeyHuUk&e= > From macthev at gmail.com Thu Jul 16 15:13:31 2020 From: macthev at gmail.com (dale mac) Date: Fri, 17 Jul 2020 00:13:31 +1000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> Message-ID: On Thu, 16 Jul 2020 at 23:44, Achim Rehor wrote: > you may want to replace the mmwatch command with a simple script like this > > > #!/bin/ksh > echo "dummy for removing gui error" > exit 0 > > or install either 5.1.0.0 or 5.0.5.2 (when it gets available ..) > > > Mit freundlichen Gr??en / Kind regards > > Achim Rehor > > Remote Technical Support Engineer Storage > IBM Systems Storage Support - EMEA Storage Competence Center (ESCC) > Spectrum Scale / Elastic Storage Server > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49-170-4521194 > E-Mail: Achim.Rehor at de.ibm.com > > ------------------------------------------------------------------------------------------------------------------------------------------- > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Sebastian Krause > Gesch?ftsf?hrung: Gregor Pillen (Vorsitzender), Agnes Heftberger, Norbert > Janzen, Markus Koerner, Christian Noll, Nicole Reimer > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, > HRB 14562 / WEEE-Reg.-Nr. 
DE 99369940 > > > gpfsug-discuss-bounces at spectrumscale.org wrote on 16/07/2020 15:13:44: > > > From: Stef Coene > > To: gpfsug main discussion list > > Date: 16/07/2020 15:18 > > Subject: [EXTERNAL] [gpfsug-discuss] GUI refresh task error > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi, > > > > On brand new 5.0.5 cluster we have the following errors on all nodes: > > "The following GUI refresh task(s) failed: WATCHFOLDER" > > > > It also says > > "Failure reason: Command mmwatch all functional > --list-clustered-status > > failed" > > > > Running mmwatch manually gives: > > mmwatch: The Clustered Watch Folder function is only available in the > > IBM Spectrum Scale Advanced Edition > > or the Data Management Edition. > > mmwatch: Command failed. Examine previous error messages to determine > cause. > > > > How can I get rid of this error? > > > > I tried to disable the task with: > > chtask WATCHFOLDER --inactive > > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > > > > Stef > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > https://urldefense.proofpoint.com/v2/url? > > > > u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx- > > siA1ZOg&r=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2- > > > > M&m=PJdg0uy6rMRzbLqiuOb4e2gUtNwAojhPfAupgPOi2nA&s=x3-02anMU4TcV4bTGAZoNJ8CvIfbqLXQJqyBpeyHuUk&e= > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Regards Dale -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Thu Jul 16 15:28:28 2020 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Thu, 16 Jul 2020 14:28:28 +0000 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org>, <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Message-ID: I can?t speak for you, but that would not be OK for me. We monitor the mmhealth command and it?s fairly inconvenient to have portions of it broken/have to be worked around on the alerts side rather than the GPFS side. I see others here have provided better solutions for that. -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' On Jul 16, 2020, at 09:33, Roland Schuemann wrote: ?Hi Stef, we already recognized this error too and opened a PMR/Case at IBM. You can set this task to inactive, but this is not persistent. After gui restart it comes again. This was the answer from IBM Support. This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. Is this OK for you? So we ignore (Gui is still degraded) it and wait for the fix. 
Kind regards Roland Sch?mann Freundliche Gr??e / Kind regards Roland Sch?mann ____________________________________________ Roland Sch?mann Infrastructure Engineering (BTE) CIO PB Germany Deutsche Bank I Technology, Data and Innovation Postbank Systems AG -----Urspr?ngliche Nachricht----- Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene Gesendet: Donnerstag, 16. Juli 2020 15:14 An: gpfsug main discussion list Betreff: [gpfsug-discuss] GUI refresh task error Hi, On brand new 5.0.5 cluster we have the following errors on all nodes: "The following GUI refresh task(s) failed: WATCHFOLDER" It also says "Failure reason: Command mmwatch all functional --list-clustered-status failed" Running mmwatch manually gives: mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. mmwatch: Command failed. Examine previous error messages to determine cause. How can I get rid of this error? I tried to disable the task with: chtask WATCHFOLDER --inactive EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. Stef _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
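One way to apply the stop-gap Achim describes above while keeping a path back: park the original command and drop in the stub, then revert after upgrading to 5.0.5.2 or 5.1.0.0. A rough sketch only, assuming the default install location /usr/lpp/mmfs/bin and the stub contents quoted earlier in this thread:

cd /usr/lpp/mmfs/bin
mv mmwatch mmwatch.orig          # keep the original so it can be restored after the upgrade
cat > mmwatch <<'EOF'
#!/bin/ksh
# dummy for removing gui error (stop-gap until the WATCHFOLDER fix ships)
echo "dummy for removing gui error"
exit 0
EOF
chmod 755 mmwatch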
URL: From stef.coene at docum.org Thu Jul 16 14:47:18 2020 From: stef.coene at docum.org (Stef Coene) Date: Thu, 16 Jul 2020 15:47:18 +0200 Subject: [gpfsug-discuss] GUI refresh task error In-Reply-To: <975f874a066c4ba6a45c62f9b280efa2@postbank.de> References: <72d50b96-c6a3-f075-8f47-84bf2346f0ae@docum.org> <975f874a066c4ba6a45c62f9b280efa2@postbank.de> Message-ID: Ok, thanx for the answer. I will wait for the fix. Stef On 2020-07-16 15:25, Roland Schuemann wrote: > Hi Stef, > > we already recognized this error too and opened a PMR/Case at IBM. > You can set this task to inactive, but this is not persistent. After gui restart it comes again. > > This was the answer from IBM Support. >>>>>>>>>>>>>>>>>> > This will be fixed in the next release of 5.0.5.2, right now there is no work-around but will not cause issue besides the cosmetic task failed message. > Is this OK for you? >>>>>>>>>>>>>>>>>> > > So we ignore (Gui is still degraded) it and wait for the fix. > > Kind regards > Roland Sch?mann > > > Freundliche Gr??e / Kind regards > Roland Sch?mann > > ____________________________________________ > > Roland Sch?mann > Infrastructure Engineering (BTE) > CIO PB Germany > > Deutsche Bank I Technology, Data and Innovation > Postbank Systems AG > > > -----Urspr?ngliche Nachricht----- > Von: gpfsug-discuss-bounces at spectrumscale.org Im Auftrag von Stef Coene > Gesendet: Donnerstag, 16. Juli 2020 15:14 > An: gpfsug main discussion list > Betreff: [gpfsug-discuss] GUI refresh task error > > Hi, > > On brand new 5.0.5 cluster we have the following errors on all nodes: > "The following GUI refresh task(s) failed: WATCHFOLDER" > > It also says > "Failure reason: Command mmwatch all functional --list-clustered-status > failed" > > Running mmwatch manually gives: > mmwatch: The Clustered Watch Folder function is only available in the IBM Spectrum Scale Advanced Edition or the Data Management Edition. > mmwatch: Command failed. Examine previous error messages to determine cause. > > How can I get rid of this error? > > I tried to disable the task with: > chtask WATCHFOLDER --inactive > EFSSG1811C The task with the name WATCHFOLDER is already not scheduled. > > > Stef > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > Die Europ?ische Kommission hat unter http://ec.europa.eu/consumers/odr/ eine Europ?ische Online-Streitbeilegungsplattform (OS-Plattform) errichtet. Verbraucher k?nnen die OS-Plattform f?r die au?ergerichtliche Beilegung von Streitigkeiten aus Online-Vertr?gen mit in der EU niedergelassenen Unternehmen nutzen. > > Informationen (einschlie?lich Pflichtangaben) zu einzelnen, innerhalb der EU t?tigen Gesellschaften und Zweigniederlassungen des Konzerns Deutsche Bank finden Sie unter https://www.deutsche-bank.de/Pflichtangaben. Diese E-Mail enth?lt vertrauliche und/ oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese E-Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser E-Mail ist nicht gestattet. > > The European Commission has established a European online dispute resolution platform (OS platform) under http://ec.europa.eu/consumers/odr/. Consumers may use the OS platform to resolve disputes arising from online contracts with providers established in the EU. 
> > Please refer to https://www.db.com/disclosures for information (including mandatory corporate particulars) on selected Deutsche Bank branches and group companies registered or incorporated in the European Union. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and delete this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From scale at us.ibm.com Fri Jul 17 18:34:36 2020 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 17 Jul 2020 23:04:36 +0530 Subject: [gpfsug-discuss] rsync NFS4 ACLs In-Reply-To: References: Message-ID: Hi Jan-Frode, Do you have a specific question on this or is this sent just for informing others. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Jan-Frode Myklebust To: gpfsug main discussion list Date: 15-07-2020 08.44 PM Subject: [EXTERNAL] [gpfsug-discuss] rsync NFS4 ACLs Sent by: gpfsug-discuss-bounces at spectrumscale.org It looks like the old NFS4 ACL patch for rsync is no longer needed. Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=GEVuZDFyUhFpvxxYM6W6ts3YvduD9Vu6oIQPJFta6eo&s=MydZiOHO7AFkY1MRBL5kY5vFGTeCYvzJBwMt-14T-8Y&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From janfrode at tanso.net Fri Jul 17 20:50:53 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Fri, 17 Jul 2020 21:50:53 +0200 Subject: [gpfsug-discuss] rsync NFS4 ACLs In-Reply-To: References: Message-ID: It was sent to inform others. Meant to write a bit more, but mistakingly hit send too soon :-) So, again. Starting with rsync v3.2.0 and backported to v3.1.2-9 in RHEL7, it now handles NFS4 ACLs on GPFS. The syntax to get it working is: rsync -X --filter '-x system.posix_acl' And it works on at least v3.5 filesystems and later. Didn?t try earlier than v3.5. -jf fre. 17. jul. 2020 kl. 
20:31 skrev IBM Spectrum Scale : > Hi Jan-Frode, > > Do you have a specific question on this or is this sent just for informing > others. > > Regards, The Spectrum Scale (GPFS) team > > > ------------------------------------------------------------------------------------------------------------------ > If you feel that your question can benefit other users of Spectrum Scale > (GPFS), then please post it to the public IBM developerWroks Forum at > https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. > > > If your query concerns a potential software error in Spectrum Scale (GPFS) > and you have an IBM software maintenance contract please contact > 1-800-237-5511 in the United States or your local IBM Service Center in > other countries. > > The forum is informally monitored as time permits and should not be used > for priority messages to the Spectrum Scale (GPFS) team. > > [image: Inactive hide details for Jan-Frode Myklebust ---15-07-2020 > 08.44.49 PM---It looks like the old NFS4 ACL patch for rsync is no]Jan-Frode > Myklebust ---15-07-2020 08.44.49 PM---It looks like the old NFS4 ACL patch > for rsync is no longer needed. Starting with rsync-3.2.0 (and b > > > > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Date: 15-07-2020 08.44 PM > Subject: [EXTERNAL] [gpfsug-discuss] rsync NFS4 ACLs > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > It looks like the old NFS4 ACL patch for rsync is no longer needed. > Starting with rsync-3.2.0 (and backported to rsync-3.1.2-9 in RHEL7), it > will now copy NFS4 ACLs if we tell it to ignore the posix ACLs: > > rsync -X --filter '-x system.posix_acl' file-with-acl copy-with-acl > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From chair at spectrumscale.org Tue Jul 21 09:03:34 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 21 Jul 2020 09:03:34 +0100 Subject: [gpfsug-discuss] https://www.spectrumscaleug.org/event/ssugdigital-spectrum-scale-expert-talk-strategy-update/ Message-ID: <> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: meeting.ics Type: text/calendar Size: 1949 bytes Desc: not available URL: From joe at excelero.com Tue Jul 21 13:42:19 2020 From: joe at excelero.com (joe at excelero.com) Date: Tue, 21 Jul 2020 07:42:19 -0500 Subject: [gpfsug-discuss] Accepted: gpfsug-discuss Digest, Vol 102, Issue 9 Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: reply.ics Type: application/ics Size: 0 bytes Desc: not available URL: From carlz at us.ibm.com Tue Jul 21 16:36:46 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Tue, 21 Jul 2020 15:36:46 +0000 Subject: [gpfsug-discuss] Quick survey on PTF frequency Message-ID: <5381ACF7-252C-4F1A-903A-5D9B79A71E3C@us.ibm.com> Folks, We?re gathering some data on how people consume PTFs for Scale. There is a very brief survey online, and we?d appreciate all responses. No identifying information is collected. 
Survey: https://www.surveygizmo.com/s3/5727746/47520248d614

Thanks,

Carl Zetie
Program Director
Offering Management
Spectrum Scale
----
(919) 473 3318 ][ Research Triangle Park
carlz at us.ibm.com

[signature_884492198]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 69558 bytes
Desc: image001.png
URL: 

From carlz at us.ibm.com  Wed Jul 22 22:13:25 2020
From: carlz at us.ibm.com (Carl Zetie - carlz at us.ibm.com)
Date: Wed, 22 Jul 2020 21:13:25 +0000
Subject: [gpfsug-discuss] Developer Edition upgraded to 5.0.5.1
Message-ID: 

Developer Edition 5.0.5.1 is now available for download.

Carl Zetie
Program Director
Offering Management
Spectrum Scale
----
(919) 473 3318 ][ Research Triangle Park
carlz at us.ibm.com

[signature_647541561]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 69558 bytes
Desc: image001.png
URL: 

From prasad.surampudi at theatsgroup.com  Thu Jul 23 01:34:02 2020
From: prasad.surampudi at theatsgroup.com (Prasad Surampudi)
Date: Thu, 23 Jul 2020 00:34:02 +0000
Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA
Message-ID: 

Hi,

We have an ESS cluster with two CES nodes. The pagepool is set to 128 GB (real memory is 256 GB) on both the ESS NSD servers and the CES nodes. Occasionally the mmfsd process memory usage reaches 90% on the NSD servers and CES nodes and stays there until GPFS is recycled. I have a couple of questions in this scenario:

1. What are the general recommendations for pagepool size on nodes with RDMA enabled? The IBM Knowledge Center page on RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." Does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Is this also the reason why mmfsd memory usage exceeds the pagepool size and spikes to almost 90%?
2. If we don't want to see high mmfsd process memory usage on the NSD/CES nodes, should we decrease the pagepool size?
3. Can we tune the log_num_mtt parameter to limit the memory usage? Currently it is set to 0 on both the NSD servers (ppc64_le) and the CES nodes (x86_64).
4. We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found". Any idea what this message indicates? I don't see any "Verbs RDMA enabled" message after these warnings. Does it get enabled automatically?

Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From YARD at il.ibm.com  Thu Jul 23 08:09:17 2020
From: YARD at il.ibm.com (Yaron Daniel)
Date: Thu, 23 Jul 2020 10:09:17 +0300
Subject: Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA
In-Reply-To: 
References: 
Message-ID: 

Hi

What is the output for:

#mmlsconfig |grep -i verbs
#ibstat

Regards

Yaron Daniel
94 Em Ha'Moshavot Rd
Storage Architect ?
IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 07/23/2020 03:34 AM Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8&s=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3776 bytes Desc: not available URL: From stockf at us.ibm.com Thu Jul 23 12:14:57 2020 From: stockf at us.ibm.com (Frederick Stock) Date: Thu, 23 Jul 2020 11:14:57 +0000 Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_08807E58088078CC00274C4DC22585AE.gif Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E57D6F80E57D2E000274C4DC22585AE.gif Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57D9040E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DB100E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DD1C0E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57DF280E57D2E000274C4DC22585AE.jpg Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E57E5AC0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E4480E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._2_0E58E6540E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image._1_0E58E8600E57E19400274C4DC22585AE.gif Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image._2_0E58EA6C0E57E19400274C4DC22585AE.jpg Type: image/jpeg Size: 3776 bytes Desc: not available URL: From prasad.surampudi at theatsgroup.com Thu Jul 23 14:33:13 2020 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Thu, 23 Jul 2020 13:33:13 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 102, Issue 12 In-Reply-To: References: Message-ID: Hi Yaron, Please see the outputs of mmlsconfig and ibstat below: sudo /usr/lpp/mmfs/bin/mmlsconfig |grep -i verbs verbsRdmasPerNode 192 verbsRdma enable verbsRdmaSend yes verbsRdmasPerConnection 48 verbsRdmasPerConnection 16 verbsPorts mlx5_4/1/1 mlx5_5/1/2 verbsPorts mlx4_0/1/0 mlx4_0/2/0 verbsPorts mlx5_0/1/1 mlx5_1/1/2 verbsPorts mlx5_0/1/1 mlx5_2/1/2 verbsPorts mlx5_2/1/1 mlx5_3/1/2 ?ibstat output on NSD server: CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0x506b4b03000fdb74 System image GUID: 0x506b4b03000fdb74 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb74 Link layer: Ethernet CA 'mlx5_1' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0x506b4b03000fdb75 System image GUID: 0x506b4b03000fdb74 Port 1: State: Down Physical state: Disabled Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb75 Link layer: Ethernet CA 'mlx5_2' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300a7e928 System image GUID: 0xec0d9a0300a7e928 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0x526b4bfffe0fdb74 Link layer: Ethernet CA 'mlx5_3' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300a7e929 System image GUID: 0xec0d9a0300a7e928 Port 1: State: Down Physical state: Disabled Rate: 40 Base lid: 0 LMC: 0 SM lid: 0 Capability mask: 0x00010000 Port GUID: 0xee0d9afffea7e929 Link layer: Ethernet CA 'mlx5_4' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300da5f92 System image GUID: 0xec0d9a0300da5f92 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 13 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xec0d9a0300da5f92 Link layer: InfiniBand CA 'mlx5_5' CA type: MT4115 Number of ports: 1 Firmware version: 12.25.1020 Hardware version: 0 Node GUID: 0xec0d9a0300da5f93 System image GUID: 0xec0d9a0300da5f92 Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 6 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xec0d9a0300da5f93 Link layer: InfiniBand ?ibstat output on CES server: CA 'mlx5_0' CA type: MT4115 Number of ports: 1 Firmware version: 12.22.4030 Hardware version: 0 Node GUID: 0xb88303ffff5ec6ec System image GUID: 0xb88303ffff5ec6ec Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 9 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xb88303ffff5ec6ec Link layer: InfiniBand CA 'mlx5_1' CA type: MT4115 Number of ports: 1 Firmware version: 12.22.4030 Hardware version: 0 Node GUID: 0xb88303ffff5ec6ed System image GUID: 0xb88303ffff5ec6ec Port 1: State: Active Physical state: LinkUp Rate: 100 Base lid: 12 LMC: 0 SM lid: 1 Capability mask: 0x2651e848 Port GUID: 0xb88303ffff5ec6ed Link layer: InfiniBand Prasad Surampudi|Sr. 
Systems Engineer|ATS Group, LLC ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Thursday, July 23, 2020 3:09 AM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 102, Issue 12 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Spectrum Scale pagepool size with RDMA (Prasad Surampudi) 2. Re: Spectrum Scale pagepool size with RDMA (Yaron Daniel) ---------------------------------------------------------------------- Message: 1 Date: Thu, 23 Jul 2020 00:34:02 +0000 From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi, We have an ESS clusters with two CES nodes. The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: 1. What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? 2. If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? 3. Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). 4. We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 23 Jul 2020 10:09:17 +0300 From: "Yaron Daniel" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Message-ID: Content-Type: text/plain; charset="iso-8859-1" Hi What is the output for: #mmlsconfig |grep -i verbs #ibstat Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect ? IL Lab Services (Storage) Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com Webex: https://ibm.webex.com/meet/yard IBM Israel From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 07/23/2020 03:34 AM Subject: [EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size with RDMA Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We have an ESS clusters with two CES nodes. 
The pagepool is set to 128 GB ( Real Memory is 256 GB ) on both ESS NSD servers and CES nodes as well. Occasionally we see the mmfsd process memory usage reaches 90% on NSD servers and CES nodes and stays there until GPFS is recycled. I have couple of questions in this scenario: What are the general recommendations of pagepool size for nodes with RDMA enabled? On, IBM knowledge center for RDMA tuning says "If the GPFS pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool must be at least 64 GB." So, does this mean that the pagepool can't be more than half of real memory with RDMA enabled? Also, Is this the reason why mmfsd memory usage exceeds pagepool size and spikes to almost 90%? If we dont want to see high mmfsd process memory usage on NSD/CES nodes, should we decrease the pagepool size? Can we tune log_num_mtt parameter to limit the memory usage? Currently its set to 0 for both NSD (ppc64_le) and CES (x86_64). We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no matching port found" . Any idea what this message indicate? I dont see any Verbs RDMA enabled message after these warning messages. Does it get enabled automatically? Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=Bn1XE9uK2a9CZQ8qKnJE3Q&m=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8&s=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3847 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4266 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3747 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3793 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 4301 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3739 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 3855 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4084 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: image/jpeg
Size: 3776 bytes
Desc: not available
URL: 

------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


End of gpfsug-discuss Digest, Vol 102, Issue 12
***********************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From olaf.weiser at de.ibm.com  Thu Jul 23 14:48:44 2020
From: olaf.weiser at de.ibm.com (Olaf Weiser)
Date: Thu, 23 Jul 2020 13:48:44 +0000
Subject: Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA
In-Reply-To: 
References: , ,
Message-ID: 

An HTML attachment was scrubbed...
URL: 
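The data-gathering steps used in this thread can be collected in one pass on each NSD/CES node. A rough sketch, assuming the default install location /usr/lpp/mmfs/bin is on the PATH:

mmlsconfig pagepool           # configured pagepool size
mmlsconfig | grep -i verbs    # verbsRdma, verbsPorts, verbsRdmasPerNode, ...
mmdiag --memory               # memory mmfsd has actually allocated
ibstat                        # HCA/port state and link layer per adapter

Comparing the ports named in verbsPorts with the ports ibstat reports as Active is one place to start when chasing the "Verbs RDMA disabled ... due to no matching port found" warning.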