From renata at slac.stanford.edu Tue Dec 1 18:32:39 2020 From: renata at slac.stanford.edu (Renata Maria Dart) Date: Tue, 1 Dec 2020 10:32:39 -0800 (PST) Subject: [gpfsug-discuss] memory needed for gpfs clients Message-ID: Hi, some of our gpfs clients will get stale file handles for gpfs mounts and it seems to be related to memory depletion. Even after the memory is freed though gpfs will continue be unavailable and df will hang. I have read about setting vm.min_free_kbytes as a possible fix for this, but wasn't sure if it was meant for a gpfs server or if a gpfs client would also benefit, and what value should be set. Thanks for any insights, Renata From cblack at nygenome.org Tue Dec 1 19:07:58 2020 From: cblack at nygenome.org (Christopher Black) Date: Tue, 1 Dec 2020 19:07:58 +0000 Subject: [gpfsug-discuss] memory needed for gpfs clients In-Reply-To: References: Message-ID: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> We tune vm-related sysctl values on our gpfs clients. These are values we use for 256GB+ mem hpc nodes: vm.min_free_kbytes=2097152 vm.dirty_bytes = 3435973836 vm.dirty_background_bytes = 1717986918 The vm.dirty parameters are to prevent NFS from buffering huge amounts of writes and then pushing them over the network all at once flooding out gpfs traffic. I'd also recommend checking client gpfs parameters pagepool and/or pagepoolMaxPhysMemPct to ensure you have a reasonable and understood limit for how much memory mmfsd will use. Best, Chris ?On 12/1/20, 1:32 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Renata Maria Dart" wrote: Hi, some of our gpfs clients will get stale file handles for gpfs mounts and it seems to be related to memory depletion. Even after the memory is freed though gpfs will continue be unavailable and df will hang. I have read about setting vm.min_free_kbytes as a possible fix for this, but wasn't sure if it was meant for a gpfs server or if a gpfs client would also benefit, and what value should be set. Thanks for any insights, Renata _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!C6sPl7C9qQ!H08HlNmBIkQRBOJKSHohzKHL6r39gAhQ3XTTczWoSmvffRFmQMcpJo8OyjMP7j-g$ ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. From bbanister at jumptrading.com Tue Dec 1 19:00:12 2020 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 1 Dec 2020 19:00:12 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Message-ID: Hey all... Hope all your clusters are up and performing well... Got a new RFE (I searched and didn't find anything like it) for your consideration. The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn't a way to simply change the NSD name. You can update the NSD ServerList, but not the name. 
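For anyone who wants the concrete form of the ServerList change mentioned here, a minimal sketch (the NSD and server names below are only placeholders) is a stanza file passed to mmchnsd:

  # changed_nsd.stanza -- illustrative names only
  %nsd: nsd=dc1_md_nsd001 servers=nsdserver03,nsdserver04

  mmchnsd -F changed_nsd.stanza
  mmlsnsd -d dc1_md_nsd001      # confirm the new server list

There is no equivalent option that renames the NSD itself, which is exactly what the RFE asks for.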
You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From renata at slac.stanford.edu Tue Dec 1 19:17:33 2020 From: renata at slac.stanford.edu (Renata Maria Dart) Date: Tue, 1 Dec 2020 11:17:33 -0800 (PST) Subject: [gpfsug-discuss] memory needed for gpfs clients In-Reply-To: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> References: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> Message-ID: Thanks very much for your feedback Chris. Renata On Tue, 1 Dec 2020, Christopher Black wrote: >We tune vm-related sysctl values on our gpfs clients. >These are values we use for 256GB+ mem hpc nodes: >vm.min_free_kbytes=2097152 >vm.dirty_bytes = 3435973836 >vm.dirty_background_bytes = 1717986918 > >The vm.dirty parameters are to prevent NFS from buffering huge amounts of writes and then pushing them over the network all at once flooding out gpfs traffic. > >I'd also recommend checking client gpfs parameters pagepool and/or pagepoolMaxPhysMemPct to ensure you have a reasonable and understood limit for how much memory mmfsd will use. > >Best, >Chris > >On 12/1/20, 1:32 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Renata Maria Dart" wrote: > > Hi, some of our gpfs clients will get stale file handles for gpfs > mounts and it seems to be related to memory depletion. Even after the > memory is freed though gpfs will continue be unavailable and df will > hang. I have read about setting vm.min_free_kbytes as a possible fix > for this, but wasn't sure if it was meant for a gpfs server or if a > gpfs client would also benefit, and what value should be set. > > Thanks for any insights, > > Renata > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!C6sPl7C9qQ!H08HlNmBIkQRBOJKSHohzKHL6r39gAhQ3XTTczWoSmvffRFmQMcpJo8OyjMP7j-g$ > >________________________________ > >This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. > From jonathan.buzzard at strath.ac.uk Tue Dec 1 19:30:21 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 1 Dec 2020 19:30:21 +0000 Subject: [gpfsug-discuss] memory needed for gpfs clients In-Reply-To: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> References: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> Message-ID: <03389b6f-1b69-29a1-9aff-58dc490b2431@strath.ac.uk> On 01/12/2020 19:07, Christopher Black wrote: > CAUTION: This email originated outside the University. Check before clicking links or attachments. > > We tune vm-related sysctl values on our gpfs clients. 
> These are values we use for 256GB+ mem hpc nodes: > vm.min_free_kbytes=2097152 > vm.dirty_bytes = 3435973836 > vm.dirty_background_bytes = 1717986918 > > The vm.dirty parameters are to prevent NFS from buffering huge > amounts of writes and then pushing them over the network all at once > flooding out gpfs traffic. > > I'd also recommend checking client gpfs parameters pagepool and/or > pagepoolMaxPhysMemPct to ensure you have a reasonable and understood > limit for how much memory mmfsd will use. > We take a different approach and tackle it from the other end. Basically we use slurm to limit user processes to 4GB per core which we find is more than enough for 99% of jobs. For people needing more then there are some dedicated large memory nodes with 3TB of RAM. We have seen well over 1TB of RAM being used by a single user on occasion (generating large meshes usually). I don't think there is any limit on RAM on those nodes The compute nodes are dual Xeon 6138 with 192GB of RAM, which works out at 4.8GB of RAM. Basically it stops the machines running out of RAM for *any* administrative tasks not just GPFS. We did originally try running it closer to the wire but it appears anecdotally cgroups is not perfect and it is possible for users to get a bit over their limits, so we lowered it back down to 4GB per core. Noting that is what the tender for the machine was, but due to number of DIMM slots and and cores in the CPU, we ended up with a bit more RAM per core. We have had no memory starvation issues now in ~2 years since we went down to 4GB per core for jobs. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From cblack at nygenome.org Tue Dec 1 19:26:25 2020 From: cblack at nygenome.org (Christopher Black) Date: Tue, 1 Dec 2020 19:26:25 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Message-ID: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> +1 from me. Someone did a building block install for us and named a couple io nodes with initial upper case (unlike all other unix hostnames in our env which are all lowercase). For a while it just bothered us, and we complained occasionally to hear that it was not easy to change. Over two years after install a case-sensitive bug in call home hit us on those two io nodes. Best, Chris From: on behalf of Bryan Banister Reply-To: gpfsug main discussion list Date: Tuesday, December 1, 2020 at 2:16 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Hey all? Hope all your clusters are up and performing well? Got a new RFE (I searched and didn?t find anything like it) for your consideration. The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn?t a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. 
Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue Dec 1 22:09:01 2020 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 1 Dec 2020 22:09:01 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE In-Reply-To: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> References: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> Message-ID: Just for clarification, this RFE is for changing the name of the Network Shared Disk device used to store data for file systems, not a NSD I/O server node name, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Christopher Black Sent: Tuesday, December 1, 2020 1:26 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE [EXTERNAL EMAIL] +1 from me. Someone did a building block install for us and named a couple io nodes with initial upper case (unlike all other unix hostnames in our env which are all lowercase). For a while it just bothered us, and we complained occasionally to hear that it was not easy to change. Over two years after install a case-sensitive bug in call home hit us on those two io nodes. Best, Chris From: > on behalf of Bryan Banister > Reply-To: gpfsug main discussion list > Date: Tuesday, December 1, 2020 at 2:16 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Hey all? Hope all your clusters are up and performing well? Got a new RFE (I searched and didn?t find anything like it) for your consideration. The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn?t a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Tue Dec 1 22:41:49 2020 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Tue, 1 Dec 2020 17:41:49 -0500 Subject: [gpfsug-discuss] internal details on GPFS inode expansion In-Reply-To: References: Message-ID: Dave Johnson at ddj at brown.edu asks: When GPFS needs to add inodes to the filesystem, it seems to pre-create about 4 million of them. 
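A quick aside for anyone who wants to look at this on their own system -- the file system name and figures below are placeholders only:

  mmdf fs1 -F                                   # shows used vs. allocated vs. maximum inode counts
  mmchfs fs1 --inode-limit 300000000:50000000   # raises the inode ceiling and preallocates a chunk up front

Dave's question continues: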
Judging by the logs, it seems it only takes a few (13 maybe) seconds to do this. However we are suspecting that this might only be to request the additional inodes and that there is some background activity for some time afterwards. Would someone who has knowledge of the actual internals be willing to confirm or deny this, and if there is background activity, is it on all nodes in the cluster, NSD nodes, "default worker nodes"? Inodes are typically 4KB and reside ondisk in full blocks in the "inode 0 file". For every inode there is also an entry in the "inode allocation map" which indicates the inode's status (eg free, inuse). To add inodes we have to add data to both. First we determine how many inodes to add (eg always add full blocks of inodes, etc), then how many "passes" will it take to add them (the "passes" are an artifact of the inode map layout). Adding the inodes themselves involves writing blocks of free inodes. This is multi-threaded on a single node. Adding to the inode map, may involve adding more inode map "segments" or just using free space in the current segments. If adding segments these are formatted and written by multiple threads on a single node, Once the on-disk data structures are complete we update the in-memory structures to reflect that all of the new inodes are free and we update the "stripe group descriptor" and broadcast it to all the nodes that have the file system mounted. In old code - say pre 4.1 or 4.2 -- we went through another step to reread all of the inode allocation map back into memory to recompute the number of free inodes. That would have been in parallel on all the nodes that had the file system mounted. Around 4.2 or so this was changed to simply update the in-memory counters (since we know how many inodes were added, there is no need to recount them). So, adding 4M inodes involves writing a little more than 16 GB of metadata to the disk, then cycle through the in-memory data structures. Writing 16 GB in 13 seconds works out to a little more than 1 GB/s. Sounds reasonable. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From dugan at bu.edu Fri Dec 4 14:54:07 2020 From: dugan at bu.edu (Dugan, Michael J) Date: Fri, 4 Dec 2020 14:54:07 +0000 Subject: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? In-Reply-To: References: <1388247256.209171.1605555854969@privateemail.com> , Message-ID: I have a cluster with two filesystems and I need to migrate a fileset from one to the other. I would normally do this with tar and rsync but I decided to experiment with AFM following the document below. In my test setup I'm finding that hardlinks are not preserved by the migration. Is that expected or am I doing something wrong? I'm using gpfs-5.0.5.4. Thanks. --Mike -- Michael J. Dugan Manager of Systems Programming and Administration Research Computing Services | IS&T | Boston University 617-358-0030 dugan at bu.edu http://www.bu.edu/tech ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Venkateswara R Puvvada Sent: Monday, November 23, 2020 9:41 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? AFM provides near zero downtime for migration. As of today, AFM migration does not support ACLs or other EAs migration from non scale (GPFS) source. 
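A workaround some sites use when ACLs/EAs (or, as Mike is seeing, hard links) matter is to let AFM move the bulk data and then run a follow-up rsync pass for the metadata -- a sketch only, with placeholder paths, and whether ACLs map cleanly depends on what the NFS source actually exposes:

  rsync -aHAX --dry-run /mnt/isilon_nfs/project/ /gpfs/fs1/project/
  # -H preserves hard links, -A POSIX ACLs, -X extended attributes;
  # drop --dry-run once the report looks right

For the AFM-driven part itself, the documentation is here: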
https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_uc_migrationusingafmmigrationenhancements.htm ~Venkat (vpuvvada at in.ibm.com) From: "Frederick Stock" To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Date: 11/17/2020 03:14 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Have you considered using the AFM feature of Spectrum Scale? I doubt it will provide any speed improvement but it would allow for data to be accessed as it was being migrated. Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: Andi Christiansen Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? Date: Mon, Nov 16, 2020 2:44 PM Hi all, i have got a case where a customer wants 700TB migrated from isilon to Scale and the only way for him is exporting the same directory on NFS from two different nodes... as of now we are using multiple rsync processes on different parts of folders within the main directory. this is really slow and will take forever.. right now 14 rsync processes spread across 3 nodes fetching from 2.. does anyone know of a way to speed it up? right now we see from 1Gbit to 3Gbit if we are lucky(total bandwidth) and there is a total of 30Gbit from scale nodes and 20Gbits from isilon so we should be able to reach just under 20Gbit... if anyone have any ideas they are welcome! Thanks in advance Andi Christiansen _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Sun Dec 6 11:16:13 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Sun, 06 Dec 2020 11:16:13 +0000 Subject: [gpfsug-discuss] SSUG Quick survey Message-ID: <1DDA0629-30F4-4533-9E04-63ECB2ED17ED@spectrumscale.org> On Friday in the webinar, we did some live polling of the attendees. I?m still interested in people filling in the questions ? it isn?t long and will help us with planning UG events as well. I thought it would expire when the 24 hour period was up, but it looks like in survey mode, you can still complete it: https://ahaslides.com/SSUG2020 I?ll take it down at 17:00 GMT on Wednesday 9th December, so please take 5 minutes to fill in ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From andi at christiansen.xxx Mon Dec 7 20:15:23 2020 From: andi at christiansen.xxx (Andi Christiansen) Date: Mon, 7 Dec 2020 21:15:23 +0100 (CET) Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Message-ID: <429895590.51808.1607372123687@privateemail.com> Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. 
I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Dec 7 22:37:43 2020 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Mon, 7 Dec 2020 22:37:43 +0000 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: <429895590.51808.1607372123687@privateemail.com> References: <429895590.51808.1607372123687@privateemail.com> Message-ID: Codeready I think you can just enable with subscription-manager, but it is disabled by default. RHOSP is an additional license. But as it says ?typically?, one might assume using the community releases is also possible, e.g. : http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/ There were some statements last year about IBM support for openstack (https://www.spectrumscaleug.org/wp-content/uploads/2019/11/SC19-IBM-Spectrum-Scale-ESS-Update.pdf slide 26, though that mentions cinder). I believe it is still expected to work, but that support would be via Red Hat subscription, or community support via the community repos as above. Carl or someone can probably give the IBM statement on this ? 
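On the mechanics, assuming the entitlements are actually on your subscription, enabling the two repos the documentation references is roughly this (repo IDs as they appear for RHEL 8 on x86_64):

  subscription-manager repos --list | grep -i -e codeready -e openstack
  subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
  subscription-manager repos --enable openstack-16-for-rhel-8-x86_64-rpms

The community route is just a plain .repo file pointing dnf at the CentOS cloud mirror above (train or victoria).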
Simon From: on behalf of "andi at christiansen.xxx" Reply to: "gpfsug-discuss at spectrumscale.org" Date: Monday, 7 December 2020 at 20:15 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen -------------- next part -------------- An HTML attachment was scrubbed... URL: From brnelson at us.ibm.com Tue Dec 8 01:07:46 2020 From: brnelson at us.ibm.com (Brian Nelson) Date: Mon, 7 Dec 2020 19:07:46 -0600 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Message-ID: The Spectrum Scale releases prior to 5.1 included all of the dependent packages needed by OpenStack along with the Object protocol. Although initially done because the platform repos did not have the necessary dependent packages, eventually it introduced significant difficulties in terms of keeping the growing number of dependent packages current with the latest functionality and security fixes. 
To ensure that bug and security fixes can be delivered as soon as possible, the switch was made to use the platform-specific repos for the dependencies rather than including them with the Scale installer. Unfortunately, this has made the install more complicated as these system repos need to be configured on the system. The subscription pool with the OpenStack repos is typically not enabled by default. To see if your subscription has the necessary repos, use the command "subscription-manager list --all --available" and search for OpenStack. If found, use the Pool ID to add the subscription to your system with the command: "subscription-manager attach --pool=PoolID". Once the pool has been added, then the repos openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 should be able to be added to the subscription-manager. If the subscription list does not show any subscriptions with OpenStack resources, then it may be necessary to add an applicable subscription, such as the "Red Hat OpenStack Platform" subscription. -Brian =================================== Brian Nelson 512-286-7735 (T/L) 363-7735 IBM Spectrum Scale brnelson at us.ibm.com ----- Forwarded by Brian Nelson/Austin/IBM on 12/07/2020 06:06 PM ----- ----- Original message ----- From: Andi Christiansen Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Date: Mon, Dec 7, 2020 3:15 PM Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. 
Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.vieser at 1und1.de Tue Dec 8 10:42:31 2020 From: christian.vieser at 1und1.de (Christian Vieser) Date: Tue, 8 Dec 2020 11:42:31 +0100 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: References: <429895590.51808.1607372123687@privateemail.com> Message-ID: Hi all, yesterday I just had the same thoughts as Andi. Monday morning, and very happy to see the long awaited 5.1.0.1 release on FixCentral. And then: WTF! First there is no object in 5.1.0.0 at all, and then in 5.1.0.1 all dependencies are missing! And not one single sentence about this in release notes or Readme. Nothing! No explanation that they are missing, why they are missing and where to find the officials repo for them. Today Simon saved my day: Simon Thompson wrote: > > Codeready I think you can just enable with subscription-manager, but > it is disabled by default. RHOSP is an additional license. But as it > says ?typically?, one might assume using the community releases is > also possible, > > e.g. : http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/ > Since IBM support told me months ago, that 5.1 will be based on the train release, I added the repo http://mirror.centos.org/centos/8/cloud/x86_64/openstack-train/ on my test server and now the 5.1.0.1 object rpms installed successfully. Question remains, if we should stay on the Train packages or if we can / should use the newer packages from Openstack Victoria. But now I read the upgrade instructions at https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_updateobj424.htm and all hope is gone. No rolling upgrade if your cluster is running object protocol. You have to upgrade to RHEL8 / CentOS8 first, for upgrading the Spectrum Scale object packages a downtime for the object service has to be scheduled. And yes, here, hided in the upgrade instructions we can find the information about the needed repos: Ensure that the following system repositories are enabled. |openstack-16-for-rhel-8-x86_64-rpms codeready-builder-for-rhel-8-x86_64-rpms| So, I'm very curious now, if I can manage to do a rolling upgrade of my test cluster from CentOS 7 to CentOS 8 and Spectrum Scale 5.0.5 to 5.1.0.1 core + NFS and then upgrading the object part while having all other services up and running. I will report here. Regards, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue Dec 8 18:14:20 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 8 Dec 2020 18:14:20 +0000 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. 
In-Reply-To: References: <429895590.51808.1607372123687@privateemail.com> Message-ID: <1aaaa8c1-0c4e-e78f-d9b3-9f1a4c56f9d1@strath.ac.uk> On 07/12/2020 22:37, Simon Thompson wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > > Codeready I think you can just enable with subscription-manager, but it > is disabled by default. RHOSP is an additional license. But as it says > ?typically?, one might assume using the community releases is also > possible, > If you have not already seen the bomb shell that is the end of CentOS (or at least it's transformation into the alpha version of the next RHEL beta) that's not going to work for much longer. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From mutantllama at gmail.com Wed Dec 9 01:08:02 2020 From: mutantllama at gmail.com (Carl) Date: Wed, 9 Dec 2020 12:08:02 +1100 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: Hi all, With the announcement of Centos 8 moving to stream https://blog.centos.org/2020/12/future-is-centos-stream/ Will Centos still be considered a clone OS? https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html#linuxclone What does this mean for the future for support for folk that are running Centos? Cheers, Carl. From carlz at us.ibm.com Wed Dec 9 14:02:27 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 9 Dec 2020 14:02:27 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> We don?t have an official statement yet, however I did want to give you all an indication of our early thinking on this. Our initial reaction is that this won?t change Scale?s support position on CentOS, as documented in the FAQ: it?s not officially supported, we?ll make best effort to support you where issues are not specific to the distro, but we reserve the right to ask for replication on a supported OS (typically RHEL). In particular, those of you using CentOS will need to pay close attention to the version of the kernel you are running, and ensure that it?s a supported one. We?ll share more as soon as we know it ourselves. Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1774123721] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From jonathan.buzzard at strath.ac.uk Wed Dec 9 15:35:04 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 9 Dec 2020 15:35:04 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos In-Reply-To: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> References: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> Message-ID: <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk> On 09/12/2020 14:02, Carl Zetie - carlz at us.ibm.com wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > > We don?t have an official statement yet, however I did want to give you > all an indication of our early thinking on this. 
Er yes we do, from an IBM employee, because remember RedHat is now IBM owned, and the majority of the people making this decision are RedHat and thus IBM employees. So I quote "If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options." Or translated bend over and get the lube out. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Wed Dec 9 16:22:26 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 9 Dec 2020 16:22:26 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos In-Reply-To: References: Message-ID: <71881295-d7f3-cc9a-abd6-b855dc2f9e5d@strath.ac.uk> On 09/12/2020 01:08, Carl wrote: > CAUTION: This email originated outside the University. Check before clicking links or attachments. > > Hi all, > > With the announcement of Centos 8 moving to stream > https://blog.centos.org/2020/12/future-is-centos-stream> > Will Centos still be considered a clone OS? > https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html#linuxclone> > What does this mean for the future for support for folk that are running Centos? > https://centos.rip/ -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jnason at redlineperf.com Wed Dec 9 16:36:50 2020 From: jnason at redlineperf.com (Jill Nason) Date: Wed, 9 Dec 2020 11:36:50 -0500 Subject: [gpfsug-discuss] Job Opportunity: HPC Storage Engineer at NASA Goddard (DC) Message-ID: Good morning everyone. We have an extraordinary opportunity for an HPC Storage Engineer at NASA Goddard. This is a great opportunity for someone with a passion for IBM Spectrum Scale and NASA. Another great advantage of this opportunity is being a stone's throw from Washington D.C. Learn more about this opportunity and the required skill set by clicking the job posting below. If you have any specific questions please feel free to reach out to me. HPC Storage Engineer -- Jill Nason RedLine Performance Solutions, LLC jnason at redlineperf.com (301)685-5949 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed Dec 9 21:27:34 2020 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 9 Dec 2020 16:27:34 -0500 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos In-Reply-To: <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk> References: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk> Message-ID: <6D3E6378-9062-4A53-888C-7609BAC1BBBE@ulmer.org> I have some hope about this? not a lot, but there is one path where it could go well: In particular, I?m hoping that after CentOS goes stream-only RHEL goes release-only, with regular (weekly?) minor release that are actually versioned together (as opposed to ?here are some fixes for RHEL 8.x, good luck explaining where you are without a complete package version map?). The entire idea of a ?stream? for enterprise customers is ludicrous. If you are using the CentOS stream, there should be nothing preventing you from locking in at whatever package versions are in the RHEL release you want to be like. If those get published we?re not entirely in the same spot as before, but not completely screwed. 
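The "locking in" part is at least mechanically straightforward -- a sketch, assuming the versionlock plugin is available on the stream release, and with the kernel version given purely as an example:

  dnf install python3-dnf-plugin-versionlock
  dnf versionlock add 'kernel-4.18.0-240.*' 'kernel-core-4.18.0-240.*' 'kernel-modules-4.18.0-240.*'
  dnf versionlock list

That at least keeps a Stream box on a kernel the Scale FAQ actually lists as supported; the harder problem is knowing whether the rest of the package set still matches what a given RHEL minor release shipped.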
TO say it another way, I hope that CentOS Stream will replace RHEL 8 Stream, and that RHEL 8 Stream will go away. Hopefully that works out, otherwise the RHEL install base will begin shrinking because there will be no free place to start. I am not employed by, and do not speak for IBM (or even myself if my wife is in the room). -- Stephen > On Dec 9, 2020, at 10:35 AM, Jonathan Buzzard wrote: > > On 09/12/2020 14:02, Carl Zetie - carlz at us.ibm.com wrote: >> CAUTION: This email originated outside the University. Check before clicking links or attachments. >> We don?t have an official statement yet, however I did want to give you all an indication of our early thinking on this. > > Er yes we do, from an IBM employee, because remember RedHat is now IBM owned, and the majority of the people making this decision are RedHat and thus IBM employees. So I quote > > "If you are using CentOS Linux 8 in a production environment, and are > concerned that CentOS Stream will not meet your needs, we encourage > you to contact Red Hat about options." > > Or translated bend over and get the lube out. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Wed Dec 9 22:24:28 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 9 Dec 2020 22:24:28 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: <6F193169-FC50-48BD-9314-76354AC2F7F8@us.ibm.com> >> We don?t have an official statement yet, however I did want to give you >> all an indication of our early thinking on this. >Er yes we do, from an IBM employee, because remember RedHat is now IBM >owned, and the majority of the people making this decision are RedHat >and thus IBM employees. ?We? meaning Spectrum Scale development. To reiterate, so far we don?t think this changes Spectrum Scale?s existing policy on CentOS support. Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1992429596] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From leslie.james.elliott at gmail.com Wed Dec 9 22:45:22 2020 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Thu, 10 Dec 2020 08:45:22 +1000 Subject: [gpfsug-discuss] Protocol limits Message-ID: hi all we run a large number of shares from CES servers connected to a single scale cluster we understand the current supported limit is 1000 SMB shares, we run the same number of NFS shares we also understand that using external CES cluster to increase that limit is not supported based on the documentation, we use the same authentication for all shares, we do have additional use cases for sharing where this pathway would be attractive going forward so the question becomes if we need to run 20000 SMB and NFS shares off a scale cluster is there any hardware design we can use to do this whilst maintaining support I have submitted a support request to ask if this can be done but thought I would ask the collective good if this has already been solved thanks leslie -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Dec 9 23:21:03 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 10 Dec 2020 00:21:03 +0100 Subject: [gpfsug-discuss] Protocol limits In-Reply-To: References: Message-ID: My understanding of these limits are that they are to limit the configuration files from becoming too large, which makes changing/processing them somewhat slow. For SMB shares, you might be able to limit the number of configured shares by using wildcards in the config (%U). These wildcarded entries counts as one share.. Don?t know if simimar tricks can be done for NFS.. -jf ons. 9. des. 2020 kl. 23:45 skrev leslie elliott < leslie.james.elliott at gmail.com>: > > hi all > > we run a large number of shares from CES servers connected to a single > scale cluster > we understand the current supported limit is 1000 SMB shares, we run the > same number of NFS shares > > we also understand that using external CES cluster to increase that limit > is not supported based on the documentation, we use the same authentication > for all shares, we do have additional use cases for sharing where this > pathway would be attractive going forward > > so the question becomes if we need to run 20000 SMB and NFS shares off a > scale cluster is there any hardware design we can use to do this whilst > maintaining support > > I have submitted a support request to ask if this can be done but thought > I would ask the collective good if this has already been solved > > thanks > > leslie > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eboyd at us.ibm.com Thu Dec 10 14:41:04 2020 From: eboyd at us.ibm.com (Edward Boyd) Date: Thu, 10 Dec 2020 14:41:04 +0000 Subject: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13 In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Thu Dec 10 21:59:04 2020 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Thu, 10 Dec 2020 21:59:04 +0000 Subject: [gpfsug-discuss] =?utf-8?q?Contents_of_gpfsug-discuss_Digest=2C_V?= =?utf-8?q?ol_107=2C=09Issue_13?= In-Reply-To: Message-ID: Thanks Ed, The UQ team are well aware of the current limits published in the FAQ. 
However the issue is not the number of physical nodes or the concurrent user sessions, but rather the number of SMB / NFS export mounts that Spectrum Scale supports from a single cluster or even remote mount protocol clusters is no longer enough for their research environment. The current total number of Exports can not exceed 1000, which is an issue when they have multiple thousands of research project ID?s with users needing access to every project ID with its relevant security permissions. Grouping Project ID?s under a single export isn?t a viable option as there is no simple way to identify which research group / user is going to request a new project ID, new project ID?s are automatically created and allocated when a request for storage allocation is fulfilled. Projects ID?s (independent file sets) are published not only as SMB exports, but are also mounted using multiple AFM cache clusters to high performance instrument clusters, multiple HPC clusters or up to 5 different campus access points, including remote universities. The data workflow is not a simple linear workflow And the mixture of different types of users with requests for storage, and storage provisioning has resulted in the University creating their own provisioning portal which interacts with the Spectrum Scale data fabric (multiple Spectrum Scale clusters in single global namespace, connected via 100GB Ethernet over AFM) in multiple points to deliver the project ID provisioning at the relevant locations specified by the user / research group. One point of data surfacing, in this data fabric, is the Spectrum Scale Protocols cluster that Les manages, which provides the central user access point via SMB or NFS, all research users across the university who want to access one or more of their storage allocations do so via the SMB / NFS mount points from this specific storage cluster. Regards, Andrew Beattie File & Object Storage - Technical Lead IBM Australia & New Zealand Sent from my iPhone > On 11 Dec 2020, at 00:41, Edward Boyd wrote: > > ? > Please review the CES limits in the FAQ which states > > Q5.2: > What are some scaling considerations for the protocols function? > A5.2: > Scaling considerations for the protocols function include: > The number of protocol nodes. > If you are using SMB in any combination of other protocols you can configure only up to 16 protocol nodes. This is a hard limit and SMB cannot be enabled if there are more protocol nodes. If only NFS and Object are enabled, you can have 32 nodes configured as protocol nodes. > > The number of client connections. > A maximum of 3,000 SMB connections is recommended per protocol node with a maximum of 20,000 SMB connections per cluster. A maximum of 4,000 NFS connections per protocol node is recommended. A maximum of 2,000 Object connections per protocol nodes is recommended. The maximum number of connections depends on the amount of memory configured and sufficient CPU. We recommend a minimum of 64GB of memory for only Object or only NFS use cases. If you have multiple protocols enabled or if you have SMB enabled we recommend 128GB of memory on the system. > > https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html?view=kc#maxproto > Edward L. 
Boyd ( Ed ) > IBM Certified Client Technical Specialist, Level 2 Expert > Open Foundation, Master Certified Technical Specialist > IBM Systems, Storage Solutions > US Federal > 407-271-9210 Office / Cell / Office / Text > eboyd at us.ibm.com email > > -----gpfsug-discuss-bounces at spectrumscale.org wrote: ----- > To: gpfsug-discuss at spectrumscale.org > From: gpfsug-discuss-request at spectrumscale.org > Sent by: gpfsug-discuss-bounces at spectrumscale.org > Date: 12/10/2020 07:00AM > Subject: [EXTERNAL] gpfsug-discuss Digest, Vol 107, Issue 13 > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Protocol limits (leslie elliott) > 2. Re: Protocol limits (Jan-Frode Myklebust) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 10 Dec 2020 08:45:22 +1000 > From: leslie elliott > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Protocol limits > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > hi all > > we run a large number of shares from CES servers connected to a single > scale cluster > we understand the current supported limit is 1000 SMB shares, we run the > same number of NFS shares > > we also understand that using external CES cluster to increase that limit > is not supported based on the documentation, we use the same authentication > for all shares, we do have additional use cases for sharing where this > pathway would be attractive going forward > > so the question becomes if we need to run 20000 SMB and NFS shares off a > scale cluster is there any hardware design we can use to do this whilst > maintaining support > > I have submitted a support request to ask if this can be done but thought I > would ask the collective good if this has already been solved > > thanks > > leslie > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Thu, 10 Dec 2020 00:21:03 +0100 > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Protocol limits > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > My understanding of these limits are that they are to limit the > configuration files from becoming too large, which makes > changing/processing them somewhat slow. > > For SMB shares, you might be able to limit the number of configured shares > by using wildcards in the config (%U). These wildcarded entries counts as > one share.. Don?t know if simimar tricks can be done for NFS.. > > > > -jf > > ons. 9. des. 2020 kl. 
23:45 skrev leslie elliott < > leslie.james.elliott at gmail.com>: > > > > > hi all > > > > we run a large number of shares from CES servers connected to a single > > scale cluster > > we understand the current supported limit is 1000 SMB shares, we run the > > same number of NFS shares > > > > we also understand that using external CES cluster to increase that limit > > is not supported based on the documentation, we use the same authentication > > for all shares, we do have additional use cases for sharing where this > > pathway would be attractive going forward > > > > so the question becomes if we need to run 20000 SMB and NFS shares off a > > scale cluster is there any hardware design we can use to do this whilst > > maintaining support > > > > I have submitted a support request to ask if this can be done but thought > > I would ask the collective good if this has already been solved > > > > thanks > > > > leslie > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 107, Issue 13 > *********************************************** > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Fri Dec 11 00:25:59 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 11 Dec 2020 00:25:59 +0000 Subject: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13 In-Reply-To: References: Message-ID: <44ae4273-a1aa-0206-9cf0-5971eab2efa6@strath.ac.uk> On 10/12/2020 21:59, Andrew Beattie wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > Thanks Ed, > > The UQ team are well aware of the current limits published in the FAQ. > > However the issue is not the number of physical nodes or the concurrent > user sessions, but rather the number of SMB / NFS export mounts that > Spectrum Scale supports from a single cluster or even remote mount > protocol clusters is no longer enough for their research environment. > > The current total number of Exports can not exceed 1000, which is an > issue when they have multiple thousands of research project ID?s with > users needing access to every project ID with its relevant security > permissions. > > Grouping Project ID?s under a single export isn?t a viable option as > there is no simple way to identify which research group / user is going > to request a new project ID, new project ID?s are automatically created > and allocated when a request for storage allocation is fulfilled. > > Projects ID?s (independent file sets) are published not only as SMB > exports, but are also mounted using multiple AFM cache clusters to high > performance instrument clusters, multiple HPC clusters or up to 5 > different campus access points, including remote universities. 
> > The data workflow is not a simple linear workflow > And the mixture of different types of users with requests for storage, > and storage provisioning has resulted in the University creating their > own provisioning portal which interacts with the Spectrum Scale data > fabric (multiple Spectrum Scale clusters in single global namespace, > connected via 100GB Ethernet over AFM) in multiple points to deliver the > project ID provisioning at the relevant locations specified by the user > / research group. > > One point of data surfacing, in this data fabric, is the Spectrum Scale > Protocols cluster that Les manages, which provides the central user > access point via SMB or NFS, all research users across the university > who want to access one or more of their storage allocations do so via > the SMB / NFS mount points from this specific storage cluster. I am not sure thousands of SMB exports is ever a good idea. I suspect Windows Server would keel over and die too in that scenario My suggestion would be to looking into some consolidated SMB exports and then mask it all with DFS. Though this presumes that they are not handing out "project" security credentials that are shared between multiple users. That would be very bad...... JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From hoov at us.ibm.com Thu Dec 17 18:46:40 2020 From: hoov at us.ibm.com (Theodore Hoover Jr) Date: Thu, 17 Dec 2020 18:46:40 +0000 Subject: [gpfsug-discuss] Spectrum Scale Cloud Online Survey Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16082105961220.jpg Type: image/jpeg Size: 6839 bytes Desc: not available URL: From gongwbj at cn.ibm.com Wed Dec 23 06:44:16 2020 From: gongwbj at cn.ibm.com (Wei G Gong) Date: Wed, 23 Dec 2020 14:44:16 +0800 Subject: [gpfsug-discuss] Latest Technical Blogs/Papers on IBM Spectrum Scale (2H 2020) In-Reply-To: References: Message-ID: Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past half year . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. What's New in Spectrum Scale 5.1.0? 
https://www.spectrumscaleug.org/event/ssugdigital-what-is-new-in-spectrum-scale-5-1/ Spectrum Scale User Group Digital (SSUG::Digital) https://www.spectrumscaleug.org/introducing-ssugdigital/ Cloudera Data Platform Private Cloud Base with IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5608.html?Open Implementation Guide for IBM Elastic Storage System 5000 http://www.redbooks.ibm.com/abstracts/sg248498.html?Open IBM Spectrum Scale and IBM Elastic Storage System Network Guide http://www.redbooks.ibm.com/abstracts/redp5484.html?Open Deployment and Usage Guide for Running AI Workloads on Red Hat OpenShift and NVIDIA DGX Systems with IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5610.html?Open Privileged Access Management for Secure Storage Administration: IBM Spectrum Scale with IBM Security Verify Privilege Vault http://www.redbooks.ibm.com/abstracts/redp5625.html?Open IBM Storage Solutions for SAS Analytics using IBM Spectrum Scale and IBM Elastic Storage System 3000 Version 1 Release 1 http://www.redbooks.ibm.com/abstracts/redp5609.html?Open IBM Spectrum Scale configuration for sudo based administration on defined set of administrative nodes https://community.ibm.com/community/user/storage/blogs/sandeep-patil1/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes Its a containerized world - AI with IBM Spectrum Scale and NVIDIA https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/01/its-a-containerized-world Optimize running NVIDIA GPU-enabled AI workloads with data orchestration solution https://community.ibm.com/community/user/storage/blogs/pallavi-galgali1/2020/10/05/optimize-running-nvidia-gpu-enabled-ai-workloads-w Building a better and more flexible data silo should NOT be the goal of storage or considered good https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/07/building-a-better-and-more-flexible-silo-is-not-mo Do you have a strategy to solve BIG DATA problems with an AI information architecture? 
https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/07/are-you-solving-big-problems IBM Storage a Leader in 2020 Magic Quadrant for Distributed File Systems and Object Storage https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/21/ibm-storage-a-leader-in-2020-magic-quadrant-for-di Containerized IBM Spectrum Scale brings native supercomputer performance data access to Red Hat OpenShift https://community.ibm.com/community/user/storage/blogs/matthew-geiser1/2020/10/27/containerized-ibm-spectrum-scale Cloudera now supports IBM Spectrum Scale with high performance analytics https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/30/cloudera-spectrumscale IBM Storage at Supercomputing 2020 https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/11/03/ibm-storage-at-supercomputing-2020 Empower innovation in the hybrid cloud https://community.ibm.com/community/user/storage/blogs/iliana-garcia-espinosa1/2020/11/17/empower-innovation-in-the-hybrid-cloud HPCwire Chooses University of Birmingham as Best Use of High Performance Data Analytics and AI https://community.ibm.com/community/user/storage/blogs/peter-basmajian/2020/11/18/hpcwire-chooses-university-of-birmingham-as-best-u I/O Workflow of Hadoop workloads with IBM Spectrum Scale and HDFS Transparency https://community.ibm.com/community/user/storage/blogs/chinmaya-mishra1/2020/11/19/io-workflow-hadoop-hdfs-with-ibm-spectrum-scale Workflow of a Hadoop Mapreduce job with HDFS Transparency & IBM Spectrum Scale https://community.ibm.com/community/user/storage/blogs/chinmaya-mishra1/2020/11/23/workflow-of-a-mapreduce-job-with-hdfs-transparency Hybrid cloud data sharing and collaboration with IBM Spectrum Scale Active File Management https://community.ibm.com/community/user/storage/blogs/nils-haustein1/2020/12/08/hybridcloud-usecases-with-spectrumscale-afm NOW certified: IBM Software Defined Storage for IBM Cloud Pak for Data https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/12/11/ibm-cloud-paks-now Resolving OpenStack dependencies required by the Object protocol in versions 5.1 and higher https://community.ibm.com/community/user/storage/blogs/brian-nelson1/2020/12/15/resolving-openstack-dependencies-needed-by-object Benefits and implementation of IBM Spectrum Scale\u2122 sudo wrappers https://community.ibm.com/community/user/storage/blogs/nils-haustein1/2020/12/17/spectrum-scale-sudo-wrappers Introducing Storage Suite Starter for Containers https://community.ibm.com/community/user/storage/blogs/sam-werner1/2020/12/17/storage-suite-starter-for-containers User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 2020/08/17 13:51 Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale (Q2 2020) Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past quarter . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. What?s New in Spectrum Scale 5.0.5? 
https://community.ibm.com/community/user/storage/blogs/ismael-solis-moreno1/2020/07/06/whats-new-in-spectrum-scale-505 Implementation Guide for IBM Elastic Storage System 3000 http://www.redbooks.ibm.com/abstracts/sg248443.html?Open Spectrum Scale File Audit Logging (FAL) and Watch Folder(WF) Document and Demo https://developer.ibm.com/storage/2020/05/27/spectrum-scale-file-audit-logging-fal-and-watch-folderwf-document-and-demo/ IBM Spectrum Scale with IBM QRadar - Internal Threat Detection (5 mins Demo) https://www.youtube.com/watch?v=Zyw84dvoFR8&t=1s IBM Spectrum Scale Information Lifecycle Management Policies - Practical guide https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102642 Example: https://github.com/nhaustein/spectrum-scale-policy-scripts IBM Spectrum Scale configuration for sudo based administration on defined set of administrative nodes., https://developer.ibm.com/storage/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes/ IBM Spectrum Scale Erasure Code Edition in Stretched Cluster https://developer.ibm.com/storage/2020/07/10/ibm-spectrum-scale-erasure-code-edition-in-streched-cluster/ IBM Spectrum Scale installation toolkit ? extended FQDN enhancement over releases ? 5.0.5.0 https://developer.ibm.com/storage/2020/06/12/ibm-spectrum-scale-installation-toolkit-extended-fqdn-enhancement-over-releases-5-0-5-0/ IBM Spectrum Scale Security Posture with Kibana for Visualization https://developer.ibm.com/storage/2020/05/22/ibm-spectrum-scale-security-posture-with-kibana-for-visualization/ How to Visualize IBM Spectrum Scale Security Posture on Canvas https://developer.ibm.com/storage/2020/05/22/how-to-visualize-ibm-spectrum-scale-security-posture-on-canvas/ How to add Linux machine as Active Directory client to access IBM Spectrum Scale?? 
https://developer.ibm.com/storage/2020/04/29/how-to-add-linux-machine-as-active-directory-client-to-access-ibm-spectrum-scale/ Enabling Kerberos Authentication in IBM Spectrum Scale HDFS Transparency without Ambari https://developer.ibm.com/storage/2020/04/17/enabling-kerberos-authentication-in-ibm-spectrum-scale-hdfs-transparency-without-ambari/ Configuring Spectrum Scale File Systems for Reliability https://developer.ibm.com/storage/2020/04/08/configuring-spectrum-scale-file-systems-for-reliability/ Spectrum Scale Tuning for Large Linux Clusters https://developer.ibm.com/storage/2020/04/03/spectrum-scale-tuning-for-large-linux-clusters/ Spectrum Scale Tuning for Power Architecture https://developer.ibm.com/storage/2020/03/30/spectrum-scale-tuning-for-power-architecture/ Spectrum Scale operating system and network tuning https://developer.ibm.com/storage/2020/03/27/spectrum-scale-operating-system-and-network-tuning/ How to have granular and selective secure data at rest and in motion for workloads https://developer.ibm.com/storage/2020/03/24/how-to-have-granular-and-selective-secure-data-at-rest-and-in-motion-for-workloads/ Multiprotocol File Sharing on IBM Spectrum Scalewithout an AD or LDAP server https://www.ibm.com/downloads/cas/AN9BR9NJ Securing Data on Threat Detection Using IBM Spectrum Scale and IBM QRadar: An Enhanced Cyber Resiliency Solution http://www.redbooks.ibm.com/abstracts/redp5560.html?Open For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/17/2020 01:37 PM Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale (Q3 2019 - Q1 2020) Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past 2 quarters . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. Redpaper HIPAA Compliance for Healthcare Workloads on IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5591.html?Open IBM Spectrum Scale CSI Driver For Container Persistent Storage http://www.redbooks.ibm.com/redpieces/abstracts/redp5589.html?Open Cyber Resiliency Solution for IBM Spectrum Scale , Blueprint http://www.redbooks.ibm.com/abstracts/redp5559.html?Open Enhanced Cyber Security with IBM Spectrum Scale and IBM QRadar http://www.redbooks.ibm.com/abstracts/redp5560.html?Open Monitoring and Managing the IBM Elastic Storage Server Using the GUI http://www.redbooks.ibm.com/abstracts/redp5471.html?Open IBM Hybrid Solution for Scalable Data Solutions using IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5549.html?Open IBM Spectrum Discover: Metadata Management for Deep Insight of Unstructured Storage http://www.redbooks.ibm.com/abstracts/redp5550.html?Open Monitoring and Managing IBM Spectrum Scale Using the GUI http://www.redbooks.ibm.com/abstracts/redp5458.html?Open IBM Reference Architecture for High Performance Data and AI in Healthcare and Life Sciences, http://www.redbooks.ibm.com/abstracts/redp5481.html?Open Blogs: Why Storage and HIPAA Compliance for AI & Analytics Workloads for Healthcare https://developer.ibm.com/storage/2020/03/17/why-storage-and-hipaa-compliance-for-ai-analytics-workloads-for-healthcare/ Innovation via Integration ? 
Proactively Securing Your Unstructured Data from Cyber Threats & Attacks --> This was done based on your inputs (as a part of Security Survey) last year on need for Spectrum Scale integrayion with IDS a https://developer.ibm.com/storage/2020/02/24/innovation-via-integration-proactively-securing-your-unstructured-data-from-cyber-threats-attacks/ IBM Spectrum Scale CES HDFS Transparency support https://developer.ibm.com/storage/2020/02/03/ces-hdfs-transparency-support/ How to set up a remote cluster with IBM Spectrum Scale ? steps, limitations and troubleshooting https://developer.ibm.com/storage/2020/01/27/how-to-set-up-a-remote-cluster-with-ibm-spectrum-scale-steps-limitations-and-troubleshooting/ How to use IBM Spectrum Scale with CSI Operator 1.0 on Openshift 4.2 ? sample usage scenario with Tensorflow deployment https://developer.ibm.com/storage/2020/01/20/how-to-use-ibm-spectrum-scale-with-csi-operator-1-0-on-openshift-4-2-sample-usage-scenario-with-tensorflow-deployment/ Achieving WORM like functionality from NFS/SMB clients for data on Spectrum Scale https://developer.ibm.com/storage/2020/01/10/achieving-worm-like-functionality-from-nfs-smb-clients-for-data-on-spectrum-scale/ IBM Spectrum Scale CSI driver video blogs, https://developer.ibm.com/storage/2019/12/26/ibm-spectrum-scale-csi-driver-video-blogs/ IBM Spectrum Scale CSI Driver v1.0.0 released https://developer.ibm.com/storage/2019/12/10/ibm-spectrum-scale-csi-driver-v1-0-0-released/ Now configure IBM? Spectrum Scale with Overlapping UNIXMAP ranges https://developer.ibm.com/storage/2019/11/12/now-configure-ibm-spectrum-scale-with-overlapping-unixmap-ranges/ ?mmadquery?, a Powerful tool helps check AD settings from Spectrum Scale https://developer.ibm.com/storage/2019/11/11/mmadquery-a-powerful-tool-helps-check-ad-settings-from-spectrum-scale/ Spectrum Scale Data Security Modes, https://developer.ibm.com/storage/2019/10/31/spectrum-scale-data-security-modes/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.4 ? https://developer.ibm.com/storage/2019/10/25/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-4/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.4.0 https://developer.ibm.com/storage/2019/10/18/ibm-spectrum-scale-installation-toolkit-enhancements-over-releases-5-0-4-0/ IBM Spectrum Scale CSI driver beta on GitHub, https://developer.ibm.com/storage/2019/09/26/ibm-spectrum-scale-csi-driver-on-github/ Help Article: Care to be taken when configuring AD with RFC2307 https://developer.ibm.com/storage/2019/09/18/help-article-care-to-be-taken-when-configuring-ad-with-rfc2307/ IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration https://developer.ibm.com/storage/2019/09/10/ibm-spectrum-scale-erasure-code-edition-ece-installation-demonstration/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 09/03/2019 10:58 AM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q2 2019) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q2 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper : IBM Power Systems Enterprise AI Solutions (W/ SPECTRUM SCALE) http://www.redbooks.ibm.com/redpieces/abstracts/redp5556.html?Open IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration https://www.youtube.com/watch?v=6If50EvgP-U Blogs: Using IBM Spectrum Scale as platform storage for running containerized Hadoop/Spark workloads https://developer.ibm.com/storage/2019/08/27/using-ibm-spectrum-scale-as-platform-storage-for-running-containerized-hadoop-spark-workloads/ Useful Tools for Spectrum Scale CES NFS https://developer.ibm.com/storage/2019/07/22/useful-tools-for-spectrum-scale-ces-nfs/ How to ensure NFS uses strong encryption algorithms for secure data in motion ? https://developer.ibm.com/storage/2019/07/19/how-to-ensure-nfs-uses-strong-encryption-algorithms-for-secure-data-in-motion/ Introducing IBM Spectrum Scale Erasure Code Edition https://developer.ibm.com/storage/2019/07/07/introducing-ibm-spectrum-scale-erasure-code-edition/ Spectrum Scale: Which Filesystem Encryption Algo to Consider ? https://developer.ibm.com/storage/2019/07/01/spectrum-scale-which-filesystem-encryption-algo-to-consider/ IBM Spectrum Scale HDFS Transparency Apache Hadoop 3.1.x Support https://developer.ibm.com/storage/2019/06/24/ibm-spectrum-scale-hdfs-transparency-apache-hadoop-3-0-x-support/ Enhanced features in Elastic Storage Server (ESS) 5.3.4 https://developer.ibm.com/storage/2019/06/19/enhanced-features-in-elastic-storage-server-ess-5-3-4/ Upgrading IBM Spectrum Scale Erasure Code Edition using installation toolkit https://developer.ibm.com/storage/2019/06/09/upgrading-ibm-spectrum-scale-erasure-code-edition-using-installation-toolkit/ Upgrading IBM Spectrum Scale sync replication / stretch cluster setup in PureApp https://developer.ibm.com/storage/2019/06/06/upgrading-ibm-spectrum-scale-sync-replication-stretch-cluster-setup/ GPFS config remote access with multiple network definitions https://developer.ibm.com/storage/2019/05/30/gpfs-config-remote-access-with-multiple-network-definitions/ IBM Spectrum Scale Erasure Code Edition Fault Tolerance https://developer.ibm.com/storage/2019/05/30/ibm-spectrum-scale-erasure-code-edition-fault-tolerance/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.3 ? 
https://developer.ibm.com/storage/2019/05/02/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-3/ Understanding and Solving WBC_ERR_DOMAIN_NOT_FOUND error with Spectrum?Scale https://crk10.wordpress.com/2019/07/21/solving-the-wbc-err-domain-not-found-nt-status-none-mapped-glitch-in-ibm-spectrum-scale/ Understanding and Solving NT_STATUS_INVALID_SID issue for SMB access with Spectrum?Scale https://crk10.wordpress.com/2019/07/24/solving-nt_status_invalid_sid-for-smb-share-access-in-ibm-spectrum-scale/ mmadquery primer (apparatus to query Active Directory from IBM Spectrum?Scale) https://crk10.wordpress.com/2019/07/27/mmadquery-primer-apparatus-to-query-active-directory-from-ibm-spectrum-scale/ How to configure RHEL host as Active Directory Client using?SSSD https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-active-directory-client-using-sssd/ How to configure RHEL host as LDAP client using?nslcd https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-ldap-client-using-nslcd/ Solving NFSv4 AUTH_SYS nobody ownership?issue https://crk10.wordpress.com/2019/07/29/nfsv4-auth_sys-nobody-ownership-and-idmapd/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list of all blogs and collaterals. https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 04/29/2019 12:12 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q1 2019) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Spectrum Scale 5.0.3 https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/ IBM Spectrum Scale HDFS Transparency Ranger Support https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/ Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally, http://www.redbooks.ibm.com/abstracts/redp5527.html?Open Spectrum Scale user group in Singapore, 2019 https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/ 7 traits to use Spectrum Scale to run container workload https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/ Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring Framework https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/ Migrating data from native HDFS to IBM Spectrum Scale based shared storage https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/ Bulk File Creation useful for Test on Filesystems https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 01/14/2019 06:24 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing your business data to support regulatory requirements http://www.redbooks.ibm.com/abstracts/redp5525.html?Open IBM Spectrum Scale Memory Usage https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2 Spectrum Scale and Containers https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/ IBM Elastic Storage Server Performance Graphical Visualization with Grafana https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/ Hadoop Performance for disaggregated compute and storage configurations based on IBM Spectrum Scale Storage https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/ EMS HA in ESS LE (Little Endian) environment https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/ What?s new in ESS 5.3.2 https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/ Administer your Spectrum Scale cluster easily https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/ Disaster Recovery using Spectrum Scale?s Active File Management https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/ Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS) https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/ Whats new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1 https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 10/03/2018 08:48 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. How NFS exports became more dynamic with Spectrum Scale 5.0.2 https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/ HPC storage on AWS (IBM Spectrum Scale) https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/ Upgrade with Excluding the node(s) using Install-toolkit https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/ Offline upgrade using Install-toolkit https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/ What?s New in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/ Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails. 
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.2.0 https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/ Announcing HDP 3.0 support with IBM Spectrum Scale https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/ IBM Spectrum Scale Tuning Overview for Hadoop Workload https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/ Making the Most of Multicloud Storage https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/ Disaster Recovery for Transparent Cloud Tiering using SOBAR https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/ Your Optimal Choice of AI Storage for Today and Tomorrow https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/ Analyze IBM Spectrum Scale File Access Audit with ELK Stack https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/ Mellanox SX1710 40G switch MLAG configuration for IBM ESS https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? SMB and NFS Access issues https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/ Access Control in IBM Spectrum Scale Object https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/ IBM Spectrum Scale HDFS Transparency Docker support https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? Log Collection https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/ Redpapers IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases http://www.redbooks.ibm.com/abstracts/redp5507.html?Open Certifications Assessment of the immutability function of IBM Spectrum Scale Version 5.0 in accordance to US SEC17a-4f, EU GDPR Article 21 Section 1, German and Swiss laws and regulations in collaboration with KPMG. Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5 Full assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 07/03/2018 12:13 AM Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018) Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q2 2018). We now have over 100+ developer blogs. As discussed in User Groups, passing it along: IBM Spectrum Scale 5.0.1 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ IBM Spectrum Scale ILM Policies https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/ IBM Spectrum Scale 5.0.1 ? 
Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ Management GUI enhancements in IBM Spectrum Scale release 5.0.1 https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/ Managing IBM Spectrum Scale services through GUI https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/ Use AWS CLI with IBM Spectrum Scale? object storage https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/ Hadoop Storage Tiering with IBM Spectrum Scale https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/ How many Files on my Filesystem? https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/ Recording Spectrum Scale Object Stats for Potential Billing like Purpose using Elasticsearch https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/ New features in IBM Elastic Storage Server (ESS) Version 5.3 https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/ Using IBM Spectrum Scale for storage in IBM Cloud Private (Missed to send earlier) https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19 Redpapers Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html, Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent Cloud Tiering http://www.redbooks.ibm.com/abstracts/redp5411.html?Open SAP HANA and ESS: A Winning Combination (Update) http://www.redbooks.ibm.com/abstracts/redp5436.html?Open Others IBM Spectrum Scale Software Version Recommendation Preventive Service Planning (Updated) http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703, IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in HCLS https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN& For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/27/2018 05:23 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q1 2018). As discussed in User Groups, passing it along: GDPR Compliance and Unstructured Data Storage https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/ IBM Spectrum Scale for Linux on IBM Z ? Release 5.0 features and highlights https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/ Management GUI enhancements in IBM Spectrum Scale release 5.0.0 https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/ IBM Spectrum Scale 5.0.0 ? What?s new in NFS? 
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/ Benefits and implementation of Spectrum Scale sudo wrappers https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/ IBM Spectrum Scale: Big Data and Analytics Solution Brief https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/ Variant Sub-blocks in Spectrum Scale 5.0 https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/ Compression support in Spectrum Scale 5.0.0 https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale On AWS Cloud : This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps the users who require highly available access to a shared name space across multiple instances with good -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From renata at slac.stanford.edu Tue Dec 1 18:32:39 2020 From: renata at slac.stanford.edu (Renata Maria Dart) Date: Tue, 1 Dec 2020 10:32:39 -0800 (PST) Subject: [gpfsug-discuss] memory needed for gpfs clients Message-ID: Hi, some of our gpfs clients will get stale file handles for gpfs mounts and it seems to be related to memory depletion. Even after the memory is freed though gpfs will continue be unavailable and df will hang. I have read about setting vm.min_free_kbytes as a possible fix for this, but wasn't sure if it was meant for a gpfs server or if a gpfs client would also benefit, and what value should be set. Thanks for any insights, Renata From cblack at nygenome.org Tue Dec 1 19:07:58 2020 From: cblack at nygenome.org (Christopher Black) Date: Tue, 1 Dec 2020 19:07:58 +0000 Subject: [gpfsug-discuss] memory needed for gpfs clients In-Reply-To: References: Message-ID: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> We tune vm-related sysctl values on our gpfs clients. These are values we use for 256GB+ mem hpc nodes: vm.min_free_kbytes=2097152 vm.dirty_bytes = 3435973836 vm.dirty_background_bytes = 1717986918 The vm.dirty parameters are to prevent NFS from buffering huge amounts of writes and then pushing them over the network all at once flooding out gpfs traffic. I'd also recommend checking client gpfs parameters pagepool and/or pagepoolMaxPhysMemPct to ensure you have a reasonable and understood limit for how much memory mmfsd will use. Best, Chris ?On 12/1/20, 1:32 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Renata Maria Dart" wrote: Hi, some of our gpfs clients will get stale file handles for gpfs mounts and it seems to be related to memory depletion. Even after the memory is freed though gpfs will continue be unavailable and df will hang. I have read about setting vm.min_free_kbytes as a possible fix for this, but wasn't sure if it was meant for a gpfs server or if a gpfs client would also benefit, and what value should be set. 
Thanks for any insights, Renata _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!C6sPl7C9qQ!H08HlNmBIkQRBOJKSHohzKHL6r39gAhQ3XTTczWoSmvffRFmQMcpJo8OyjMP7j-g$ ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. From bbanister at jumptrading.com Tue Dec 1 19:00:12 2020 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 1 Dec 2020 19:00:12 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Message-ID: Hey all... Hope all your clusters are up and performing well... Got a new RFE (I searched and didn't find anything like it) for your consideration. The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn't a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From renata at slac.stanford.edu Tue Dec 1 19:17:33 2020 From: renata at slac.stanford.edu (Renata Maria Dart) Date: Tue, 1 Dec 2020 11:17:33 -0800 (PST) Subject: [gpfsug-discuss] memory needed for gpfs clients In-Reply-To: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> References: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> Message-ID: Thanks very much for your feedback Chris. Renata On Tue, 1 Dec 2020, Christopher Black wrote: >We tune vm-related sysctl values on our gpfs clients. >These are values we use for 256GB+ mem hpc nodes: >vm.min_free_kbytes=2097152 >vm.dirty_bytes = 3435973836 >vm.dirty_background_bytes = 1717986918 > >The vm.dirty parameters are to prevent NFS from buffering huge amounts of writes and then pushing them over the network all at once flooding out gpfs traffic. > >I'd also recommend checking client gpfs parameters pagepool and/or pagepoolMaxPhysMemPct to ensure you have a reasonable and understood limit for how much memory mmfsd will use. > >Best, >Chris > >On 12/1/20, 1:32 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Renata Maria Dart" wrote: > > Hi, some of our gpfs clients will get stale file handles for gpfs > mounts and it seems to be related to memory depletion. Even after the > memory is freed though gpfs will continue be unavailable and df will > hang. I have read about setting vm.min_free_kbytes as a possible fix > for this, but wasn't sure if it was meant for a gpfs server or if a > gpfs client would also benefit, and what value should be set. 
> > Thanks for any insights, > > Renata > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.com/v3/__http://gpfsug.org/mailman/listinfo/gpfsug-discuss__;!!C6sPl7C9qQ!H08HlNmBIkQRBOJKSHohzKHL6r39gAhQ3XTTczWoSmvffRFmQMcpJo8OyjMP7j-g$ > >________________________________ > >This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. > From jonathan.buzzard at strath.ac.uk Tue Dec 1 19:30:21 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 1 Dec 2020 19:30:21 +0000 Subject: [gpfsug-discuss] memory needed for gpfs clients In-Reply-To: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> References: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org> Message-ID: <03389b6f-1b69-29a1-9aff-58dc490b2431@strath.ac.uk> On 01/12/2020 19:07, Christopher Black wrote: > CAUTION: This email originated outside the University. Check before clicking links or attachments. > > We tune vm-related sysctl values on our gpfs clients. > These are values we use for 256GB+ mem hpc nodes: > vm.min_free_kbytes=2097152 > vm.dirty_bytes = 3435973836 > vm.dirty_background_bytes = 1717986918 > > The vm.dirty parameters are to prevent NFS from buffering huge > amounts of writes and then pushing them over the network all at once > flooding out gpfs traffic. > > I'd also recommend checking client gpfs parameters pagepool and/or > pagepoolMaxPhysMemPct to ensure you have a reasonable and understood > limit for how much memory mmfsd will use. > We take a different approach and tackle it from the other end. Basically we use slurm to limit user processes to 4GB per core which we find is more than enough for 99% of jobs. For people needing more then there are some dedicated large memory nodes with 3TB of RAM. We have seen well over 1TB of RAM being used by a single user on occasion (generating large meshes usually). I don't think there is any limit on RAM on those nodes The compute nodes are dual Xeon 6138 with 192GB of RAM, which works out at 4.8GB of RAM. Basically it stops the machines running out of RAM for *any* administrative tasks not just GPFS. We did originally try running it closer to the wire but it appears anecdotally cgroups is not perfect and it is possible for users to get a bit over their limits, so we lowered it back down to 4GB per core. Noting that is what the tender for the machine was, but due to number of DIMM slots and and cores in the CPU, we ended up with a bit more RAM per core. We have had no memory starvation issues now in ~2 years since we went down to 4GB per core for jobs. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From cblack at nygenome.org Tue Dec 1 19:26:25 2020 From: cblack at nygenome.org (Christopher Black) Date: Tue, 1 Dec 2020 19:26:25 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Message-ID: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> +1 from me. 
Someone did a building block install for us and named a couple io nodes with initial upper case (unlike all other unix hostnames in our env which are all lowercase). For a while it just bothered us, and we complained occasionally to hear that it was not easy to change. Over two years after install a case-sensitive bug in call home hit us on those two io nodes. Best, Chris From: on behalf of Bryan Banister Reply-To: gpfsug main discussion list Date: Tuesday, December 1, 2020 at 2:16 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Hey all? Hope all your clusters are up and performing well? Got a new RFE (I searched and didn?t find anything like it) for your consideration. The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn?t a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue Dec 1 22:09:01 2020 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 1 Dec 2020 22:09:01 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE In-Reply-To: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> References: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> Message-ID: Just for clarification, this RFE is for changing the name of the Network Shared Disk device used to store data for file systems, not a NSD I/O server node name, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Christopher Black Sent: Tuesday, December 1, 2020 1:26 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE [EXTERNAL EMAIL] +1 from me. Someone did a building block install for us and named a couple io nodes with initial upper case (unlike all other unix hostnames in our env which are all lowercase). For a while it just bothered us, and we complained occasionally to hear that it was not easy to change. Over two years after install a case-sensitive bug in call home hit us on those two io nodes. Best, Chris From: > on behalf of Bryan Banister > Reply-To: gpfsug main discussion list > Date: Tuesday, December 1, 2020 at 2:16 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Hey all? Hope all your clusters are up and performing well? Got a new RFE (I searched and didn?t find anything like it) for your consideration. 
The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn?t a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Tue Dec 1 22:41:49 2020 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Tue, 1 Dec 2020 17:41:49 -0500 Subject: [gpfsug-discuss] internal details on GPFS inode expansion In-Reply-To: References: Message-ID: Dave Johnson at ddj at brown.edu asks: When GPFS needs to add inodes to the filesystem, it seems to pre-create about 4 million of them. Judging by the logs, it seems it only takes a few (13 maybe) seconds to do this. However we are suspecting that this might only be to request the additional inodes and that there is some background activity for some time afterwards. Would someone who has knowledge of the actual internals be willing to confirm or deny this, and if there is background activity, is it on all nodes in the cluster, NSD nodes, "default worker nodes"? Inodes are typically 4KB and reside ondisk in full blocks in the "inode 0 file". For every inode there is also an entry in the "inode allocation map" which indicates the inode's status (eg free, inuse). To add inodes we have to add data to both. First we determine how many inodes to add (eg always add full blocks of inodes, etc), then how many "passes" will it take to add them (the "passes" are an artifact of the inode map layout). Adding the inodes themselves involves writing blocks of free inodes. This is multi-threaded on a single node. Adding to the inode map, may involve adding more inode map "segments" or just using free space in the current segments. If adding segments these are formatted and written by multiple threads on a single node, Once the on-disk data structures are complete we update the in-memory structures to reflect that all of the new inodes are free and we update the "stripe group descriptor" and broadcast it to all the nodes that have the file system mounted. In old code - say pre 4.1 or 4.2 -- we went through another step to reread all of the inode allocation map back into memory to recompute the number of free inodes. That would have been in parallel on all the nodes that had the file system mounted. Around 4.2 or so this was changed to simply update the in-memory counters (since we know how many inodes were added, there is no need to recount them). So, adding 4M inodes involves writing a little more than 16 GB of metadata to the disk, then cycle through the in-memory data structures. 
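As a rough back-of-the-envelope check of those numbers (assuming the typical 4 KiB inode size and a full 4M-inode expansion mentioned above, and ignoring the comparatively small inode allocation map writes):

    4,194,304 inodes * 4 KiB/inode  = 16 GiB of inode blocks written
    16 GiB / 13 s                  ~= 1.2 GiB/s aggregate metadata write rate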
Writing 16 GB in 13 seconds works out to a little more than 1 GB/s. Sounds reasonable. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From dugan at bu.edu Fri Dec 4 14:54:07 2020 From: dugan at bu.edu (Dugan, Michael J) Date: Fri, 4 Dec 2020 14:54:07 +0000 Subject: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? In-Reply-To: References: <1388247256.209171.1605555854969@privateemail.com> , Message-ID: I have a cluster with two filesystems and I need to migrate a fileset from one to the other. I would normally do this with tar and rsync but I decided to experiment with AFM following the document below. In my test setup I'm finding that hardlinks are not preserved by the migration. Is that expected or am I doing something wrong? I'm using gpfs-5.0.5.4. Thanks. --Mike -- Michael J. Dugan Manager of Systems Programming and Administration Research Computing Services | IS&T | Boston University 617-358-0030 dugan at bu.edu http://www.bu.edu/tech ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Venkateswara R Puvvada Sent: Monday, November 23, 2020 9:41 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? AFM provides near zero downtime for migration. As of today, AFM migration does not support ACLs or other EAs migration from non scale (GPFS) source. https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_uc_migrationusingafmmigrationenhancements.htm ~Venkat (vpuvvada at in.ibm.com) From: "Frederick Stock" To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Date: 11/17/2020 03:14 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Have you considered using the AFM feature of Spectrum Scale? I doubt it will provide any speed improvement but it would allow for data to be accessed as it was being migrated. Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: Andi Christiansen Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? Date: Mon, Nov 16, 2020 2:44 PM Hi all, i have got a case where a customer wants 700TB migrated from isilon to Scale and the only way for him is exporting the same directory on NFS from two different nodes... as of now we are using multiple rsync processes on different parts of folders within the main directory. this is really slow and will take forever.. right now 14 rsync processes spread across 3 nodes fetching from 2.. does anyone know of a way to speed it up? right now we see from 1Gbit to 3Gbit if we are lucky(total bandwidth) and there is a total of 30Gbit from scale nodes and 20Gbits from isilon so we should be able to reach just under 20Gbit... if anyone have any ideas they are welcome! 
Thanks in advance Andi Christiansen _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Sun Dec 6 11:16:13 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Sun, 06 Dec 2020 11:16:13 +0000 Subject: [gpfsug-discuss] SSUG Quick survey Message-ID: <1DDA0629-30F4-4533-9E04-63ECB2ED17ED@spectrumscale.org> On Friday in the webinar, we did some live polling of the attendees. I?m still interested in people filling in the questions ? it isn?t long and will help us with planning UG events as well. I thought it would expire when the 24 hour period was up, but it looks like in survey mode, you can still complete it: https://ahaslides.com/SSUG2020 I?ll take it down at 17:00 GMT on Wednesday 9th December, so please take 5 minutes to fill in ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From andi at christiansen.xxx Mon Dec 7 20:15:23 2020 From: andi at christiansen.xxx (Andi Christiansen) Date: Mon, 7 Dec 2020 21:15:23 +0100 (CET) Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Message-ID: <429895590.51808.1607372123687@privateemail.com> Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. 
Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Dec 7 22:37:43 2020 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Mon, 7 Dec 2020 22:37:43 +0000 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: <429895590.51808.1607372123687@privateemail.com> References: <429895590.51808.1607372123687@privateemail.com> Message-ID: Codeready I think you can just enable with subscription-manager, but it is disabled by default. RHOSP is an additional license. But as it says ?typically?, one might assume using the community releases is also possible, e.g. : http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/ There were some statements last year about IBM support for openstack (https://www.spectrumscaleug.org/wp-content/uploads/2019/11/SC19-IBM-Spectrum-Scale-ESS-Update.pdf slide 26, though that mentions cinder). I believe it is still expected to work, but that support would be via Red Hat subscription, or community support via the community repos as above. Carl or someone can probably give the IBM statement on this ? Simon From: on behalf of "andi at christiansen.xxx" Reply to: "gpfsug-discuss at spectrumscale.org" Date: Monday, 7 December 2020 at 20:15 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? 
propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen -------------- next part -------------- An HTML attachment was scrubbed... URL: From brnelson at us.ibm.com Tue Dec 8 01:07:46 2020 From: brnelson at us.ibm.com (Brian Nelson) Date: Mon, 7 Dec 2020 19:07:46 -0600 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Message-ID: The Spectrum Scale releases prior to 5.1 included all of the dependent packages needed by OpenStack along with the Object protocol. Although initially done because the platform repos did not have the necessary dependent packages, eventually it introduced significant difficulties in terms of keeping the growing number of dependent packages current with the latest functionality and security fixes. To ensure that bug and security fixes can be delivered as soon as possible, the switch was made to use the platform-specific repos for the dependencies rather than including them with the Scale installer. Unfortunately, this has made the install more complicated as these system repos need to be configured on the system. The subscription pool with the OpenStack repos is typically not enabled by default. To see if your subscription has the necessary repos, use the command "subscription-manager list --all --available" and search for OpenStack. If found, use the Pool ID to add the subscription to your system with the command: "subscription-manager attach --pool=PoolID". Once the pool has been added, then the repos openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 should be able to be added to the subscription-manager. If the subscription list does not show any subscriptions with OpenStack resources, then it may be necessary to add an applicable subscription, such as the "Red Hat OpenStack Platform" subscription. -Brian =================================== Brian Nelson 512-286-7735 (T/L) 363-7735 IBM Spectrum Scale brnelson at us.ibm.com ----- Forwarded by Brian Nelson/Austin/IBM on 12/07/2020 06:06 PM ----- ----- Original message ----- From: Andi Christiansen Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Date: Mon, Dec 7, 2020 3:15 PM Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. 
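To make the sequence Brian describes above concrete, a minimal sketch for a RHEL 8 protocol node; the pool ID is a placeholder, the repo IDs are the ones named in the Scale 5.1 upgrade documentation, and the exact names depend on your entitlements:

   # find a subscription that includes OpenStack and attach it
   subscription-manager list --all --available | grep -i -B3 -A3 openstack
   subscription-manager attach --pool=<pool_id_from_previous_output>

   # enable the two repositories the documentation asks for
   subscription-manager repos \
       --enable=openstack-16-for-rhel-8-x86_64-rpms \
       --enable=codeready-builder-for-rhel-8-x86_64-rpms

   # with the dependencies now resolvable, install the object packages
   # following the manual install procedure linked earlier in the thread
   dnf install spectrum-scale-object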
Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.vieser at 1und1.de Tue Dec 8 10:42:31 2020 From: christian.vieser at 1und1.de (Christian Vieser) Date: Tue, 8 Dec 2020 11:42:31 +0100 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: References: <429895590.51808.1607372123687@privateemail.com> Message-ID: Hi all, yesterday I just had the same thoughts as Andi. Monday morning, and very happy to see the long awaited 5.1.0.1 release on FixCentral. And then: WTF! First there is no object in 5.1.0.0 at all, and then in 5.1.0.1 all dependencies are missing! And not one single sentence about this in release notes or Readme. Nothing! No explanation that they are missing, why they are missing and where to find the officials repo for them. Today Simon saved my day: Simon Thompson wrote: > > Codeready I think you can just enable with subscription-manager, but > it is disabled by default. RHOSP is an additional license. But as it > says ?typically?, one might assume using the community releases is > also possible, > > e.g. 
: http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/ > Since IBM support told me months ago, that 5.1 will be based on the train release, I added the repo http://mirror.centos.org/centos/8/cloud/x86_64/openstack-train/ on my test server and now the 5.1.0.1 object rpms installed successfully. Question remains, if we should stay on the Train packages or if we can / should use the newer packages from Openstack Victoria. But now I read the upgrade instructions at https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_updateobj424.htm and all hope is gone. No rolling upgrade if your cluster is running object protocol. You have to upgrade to RHEL8 / CentOS8 first, for upgrading the Spectrum Scale object packages a downtime for the object service has to be scheduled. And yes, here, hided in the upgrade instructions we can find the information about the needed repos: Ensure that the following system repositories are enabled. |openstack-16-for-rhel-8-x86_64-rpms codeready-builder-for-rhel-8-x86_64-rpms| So, I'm very curious now, if I can manage to do a rolling upgrade of my test cluster from CentOS 7 to CentOS 8 and Spectrum Scale 5.0.5 to 5.1.0.1 core + NFS and then upgrading the object part while having all other services up and running. I will report here. Regards, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue Dec 8 18:14:20 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 8 Dec 2020 18:14:20 +0000 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: References: <429895590.51808.1607372123687@privateemail.com> Message-ID: <1aaaa8c1-0c4e-e78f-d9b3-9f1a4c56f9d1@strath.ac.uk> On 07/12/2020 22:37, Simon Thompson wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > > Codeready I think you can just enable with subscription-manager, but it > is disabled by default. RHOSP is an additional license. But as it says > ?typically?, one might assume using the community releases is also > possible, > If you have not already seen the bomb shell that is the end of CentOS (or at least it's transformation into the alpha version of the next RHEL beta) that's not going to work for much longer. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From mutantllama at gmail.com Wed Dec 9 01:08:02 2020 From: mutantllama at gmail.com (Carl) Date: Wed, 9 Dec 2020 12:08:02 +1100 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: Hi all, With the announcement of Centos 8 moving to stream https://blog.centos.org/2020/12/future-is-centos-stream/ Will Centos still be considered a clone OS? https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html#linuxclone What does this mean for the future for support for folk that are running Centos? Cheers, Carl. From carlz at us.ibm.com Wed Dec 9 14:02:27 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 9 Dec 2020 14:02:27 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> We don?t have an official statement yet, however I did want to give you all an indication of our early thinking on this. 
Our initial reaction is that this won't change Scale's support position on CentOS, as documented in the FAQ: it's not officially supported, we'll make best effort to support you where issues are not specific to the distro, but we reserve the right to ask for replication on a supported OS (typically RHEL).

In particular, those of you using CentOS will need to pay close attention to the version of the kernel you are running, and ensure that it's a supported one.

We'll share more as soon as we know it ourselves.

Carl Zetie
Program Director
Offering Management Spectrum Scale
----
(919) 473 3318 ][ Research Triangle Park
carlz at us.ibm.com

[signature_1774123721]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 69558 bytes
Desc: image001.png
URL: 

From jonathan.buzzard at strath.ac.uk  Wed Dec 9 15:35:04 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Wed, 9 Dec 2020 15:35:04 +0000
Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos
In-Reply-To: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com>
References: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com>
Message-ID: <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk>

On 09/12/2020 14:02, Carl Zetie - carlz at us.ibm.com wrote:
> CAUTION: This email originated outside the University. Check before
> clicking links or attachments.
>
> We don't have an official statement yet, however I did want to give you
> all an indication of our early thinking on this.

Er yes we do, from an IBM employee, because remember RedHat is now IBM
owned, and the majority of the people making this decision are RedHat
and thus IBM employees. So I quote

"If you are using CentOS Linux 8 in a production environment, and are
concerned that CentOS Stream will not meet your needs, we encourage
you to contact Red Hat about options."

Or translated bend over and get the lube out.

JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From jonathan.buzzard at strath.ac.uk  Wed Dec 9 16:22:26 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Wed, 9 Dec 2020 16:22:26 +0000
Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos
In-Reply-To: 
References: 
Message-ID: <71881295-d7f3-cc9a-abd6-b855dc2f9e5d@strath.ac.uk>

On 09/12/2020 01:08, Carl wrote:
> CAUTION: This email originated outside the University. Check before clicking links or attachments.
>
> Hi all,
>
> With the announcement of Centos 8 moving to stream
> https://blog.centos.org/2020/12/future-is-centos-stream
>
> Will Centos still be considered a clone OS?
> https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html#linuxclone
>
> What does this mean for the future for support for folk that are running Centos?
>

https://centos.rip/

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From jnason at redlineperf.com  Wed Dec 9 16:36:50 2020
From: jnason at redlineperf.com (Jill Nason)
Date: Wed, 9 Dec 2020 11:36:50 -0500
Subject: [gpfsug-discuss] Job Opportunity: HPC Storage Engineer at NASA Goddard (DC)
Message-ID: 

Good morning everyone. We have an extraordinary opportunity for an HPC Storage Engineer at NASA Goddard. This is a great opportunity for someone with a passion for IBM Spectrum Scale and NASA.
Another great advantage of this opportunity is being a stone's throw from Washington D.C. Learn more about this opportunity and the required skill set by clicking the job posting below. If you have any specific questions please feel free to reach out to me.

HPC Storage Engineer

--
Jill Nason
RedLine Performance Solutions, LLC
jnason at redlineperf.com
(301)685-5949
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ulmer at ulmer.org  Wed Dec 9 21:27:34 2020
From: ulmer at ulmer.org (Stephen Ulmer)
Date: Wed, 9 Dec 2020 16:27:34 -0500
Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos
In-Reply-To: <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk>
References: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com>
 <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk>
Message-ID: <6D3E6378-9062-4A53-888C-7609BAC1BBBE@ulmer.org>

I have some hope about this... not a lot, but there is one path where it could go well:

In particular, I'm hoping that after CentOS goes stream-only RHEL goes release-only, with regular (weekly?) minor releases that are actually versioned together (as opposed to "here are some fixes for RHEL 8.x, good luck explaining where you are without a complete package version map"). The entire idea of a "stream" for enterprise customers is ludicrous.

If you are using the CentOS stream, there should be nothing preventing you from locking in at whatever package versions are in the RHEL release you want to be like. If those get published we're not entirely in the same spot as before, but not completely screwed.

To say it another way, I hope that CentOS Stream will replace RHEL 8 Stream, and that RHEL 8 Stream will go away. Hopefully that works out, otherwise the RHEL install base will begin shrinking because there will be no free place to start.

I am not employed by, and do not speak for IBM (or even myself if my wife is in the room).

--
Stephen

> On Dec 9, 2020, at 10:35 AM, Jonathan Buzzard wrote:
>
> On 09/12/2020 14:02, Carl Zetie - carlz at us.ibm.com wrote:
>> CAUTION: This email originated outside the University. Check before clicking links or attachments.
>> We don't have an official statement yet, however I did want to give you all an indication of our early thinking on this.
>
> Er yes we do, from an IBM employee, because remember RedHat is now IBM owned, and the majority of the people making this decision are RedHat and thus IBM employees. So I quote
>
> "If you are using CentOS Linux 8 in a production environment, and are
> concerned that CentOS Stream will not meet your needs, we encourage
> you to contact Red Hat about options."
>
> Or translated bend over and get the lube out.
>
>
> JAB.
>
> --
> Jonathan A. Buzzard                         Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From carlz at us.ibm.com  Wed Dec 9 22:24:28 2020
From: carlz at us.ibm.com (Carl Zetie - carlz at us.ibm.com)
Date: Wed, 9 Dec 2020 22:24:28 +0000
Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos
Message-ID: <6F193169-FC50-48BD-9314-76354AC2F7F8@us.ibm.com>

>> We don't have an official statement yet, however I did want to give you
>> all an indication of our early thinking on this.
>Er yes we do, from an IBM employee, because remember RedHat is now IBM
>owned, and the majority of the people making this decision are RedHat
>and thus IBM employees.

"We" meaning Spectrum Scale development. To reiterate, so far we don't think this changes Spectrum Scale's existing policy on CentOS support.

Carl Zetie
Program Director
Offering Management Spectrum Scale
----
(919) 473 3318 ][ Research Triangle Park
carlz at us.ibm.com

[signature_1992429596]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 69558 bytes
Desc: image001.png
URL: 

From leslie.james.elliott at gmail.com  Wed Dec 9 22:45:22 2020
From: leslie.james.elliott at gmail.com (leslie elliott)
Date: Thu, 10 Dec 2020 08:45:22 +1000
Subject: [gpfsug-discuss] Protocol limits
Message-ID: 

hi all

we run a large number of shares from CES servers connected to a single scale cluster
we understand the current supported limit is 1000 SMB shares, we run the same number of NFS shares

we also understand that using external CES cluster to increase that limit is not supported based on the documentation, we use the same authentication for all shares, we do have additional use cases for sharing where this pathway would be attractive going forward

so the question becomes if we need to run 20000 SMB and NFS shares off a scale cluster is there any hardware design we can use to do this whilst maintaining support

I have submitted a support request to ask if this can be done but thought I would ask the collective good if this has already been solved

thanks

leslie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From janfrode at tanso.net  Wed Dec 9 23:21:03 2020
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Thu, 10 Dec 2020 00:21:03 +0100
Subject: Re: [gpfsug-discuss] Protocol limits
In-Reply-To: 
References: 
Message-ID: 

My understanding of these limits is that they are there to keep the configuration files from becoming too large, which makes changing/processing them somewhat slow.

For SMB shares, you might be able to limit the number of configured shares by using wildcards in the config (%U). These wildcarded entries count as one share.. Don't know if similar tricks can be done for NFS..

-jf

On Wed, 9 Dec 2020 at 23:45, leslie elliott <leslie.james.elliott at gmail.com> wrote:

>
> hi all
>
> we run a large number of shares from CES servers connected to a single
> scale cluster
> we understand the current supported limit is 1000 SMB shares, we run the
> same number of NFS shares
>
> we also understand that using external CES cluster to increase that limit
> is not supported based on the documentation, we use the same authentication
> for all shares, we do have additional use cases for sharing where this
> pathway would be attractive going forward
>
> so the question becomes if we need to run 20000 SMB and NFS shares off a
> scale cluster is there any hardware design we can use to do this whilst
> maintaining support
>
> I have submitted a support request to ask if this can be done but thought
> I would ask the collective good if this has already been solved
>
> thanks
>
> leslie
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
-------------- next part --------------
An HTML attachment was scrubbed...
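To make the wildcard idea above concrete, a minimal sketch in plain smb.conf terms; the path is made up, and whether %U substitution can be used in a CES/mmsmb export path on your release is something to confirm with IBM rather than assume:

   # one share definition serves every user: %U expands to the session
   # username at connect time, so each user lands in their own directory
   [userdirs]
       path = /gpfs/fs1/projects/%U
       valid users = %U
       read only = no

On a plain Samba server a single entry like this replaces one share per user; the open question, as noted above, is whether there is an equivalent trick for NFS exports.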
URL: From eboyd at us.ibm.com Thu Dec 10 14:41:04 2020 From: eboyd at us.ibm.com (Edward Boyd) Date: Thu, 10 Dec 2020 14:41:04 +0000 Subject: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13 In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Thu Dec 10 21:59:04 2020 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Thu, 10 Dec 2020 21:59:04 +0000 Subject: [gpfsug-discuss] =?utf-8?q?Contents_of_gpfsug-discuss_Digest=2C_V?= =?utf-8?q?ol_107=2C=09Issue_13?= In-Reply-To: Message-ID: Thanks Ed, The UQ team are well aware of the current limits published in the FAQ. However the issue is not the number of physical nodes or the concurrent user sessions, but rather the number of SMB / NFS export mounts that Spectrum Scale supports from a single cluster or even remote mount protocol clusters is no longer enough for their research environment. The current total number of Exports can not exceed 1000, which is an issue when they have multiple thousands of research project ID?s with users needing access to every project ID with its relevant security permissions. Grouping Project ID?s under a single export isn?t a viable option as there is no simple way to identify which research group / user is going to request a new project ID, new project ID?s are automatically created and allocated when a request for storage allocation is fulfilled. Projects ID?s (independent file sets) are published not only as SMB exports, but are also mounted using multiple AFM cache clusters to high performance instrument clusters, multiple HPC clusters or up to 5 different campus access points, including remote universities. The data workflow is not a simple linear workflow And the mixture of different types of users with requests for storage, and storage provisioning has resulted in the University creating their own provisioning portal which interacts with the Spectrum Scale data fabric (multiple Spectrum Scale clusters in single global namespace, connected via 100GB Ethernet over AFM) in multiple points to deliver the project ID provisioning at the relevant locations specified by the user / research group. One point of data surfacing, in this data fabric, is the Spectrum Scale Protocols cluster that Les manages, which provides the central user access point via SMB or NFS, all research users across the university who want to access one or more of their storage allocations do so via the SMB / NFS mount points from this specific storage cluster. Regards, Andrew Beattie File & Object Storage - Technical Lead IBM Australia & New Zealand Sent from my iPhone > On 11 Dec 2020, at 00:41, Edward Boyd wrote: > > ? > Please review the CES limits in the FAQ which states > > Q5.2: > What are some scaling considerations for the protocols function? > A5.2: > Scaling considerations for the protocols function include: > The number of protocol nodes. > If you are using SMB in any combination of other protocols you can configure only up to 16 protocol nodes. This is a hard limit and SMB cannot be enabled if there are more protocol nodes. If only NFS and Object are enabled, you can have 32 nodes configured as protocol nodes. > > The number of client connections. > A maximum of 3,000 SMB connections is recommended per protocol node with a maximum of 20,000 SMB connections per cluster. A maximum of 4,000 NFS connections per protocol node is recommended. A maximum of 2,000 Object connections per protocol nodes is recommended. 
The maximum number of connections depends on the amount of memory configured and sufficient CPU. We recommend a minimum of 64GB of memory for only Object or only NFS use cases. If you have multiple protocols enabled or if you have SMB enabled we recommend 128GB of memory on the system. > > https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html?view=kc#maxproto > Edward L. Boyd ( Ed ) > IBM Certified Client Technical Specialist, Level 2 Expert > Open Foundation, Master Certified Technical Specialist > IBM Systems, Storage Solutions > US Federal > 407-271-9210 Office / Cell / Office / Text > eboyd at us.ibm.com email > > -----gpfsug-discuss-bounces at spectrumscale.org wrote: ----- > To: gpfsug-discuss at spectrumscale.org > From: gpfsug-discuss-request at spectrumscale.org > Sent by: gpfsug-discuss-bounces at spectrumscale.org > Date: 12/10/2020 07:00AM > Subject: [EXTERNAL] gpfsug-discuss Digest, Vol 107, Issue 13 > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Protocol limits (leslie elliott) > 2. Re: Protocol limits (Jan-Frode Myklebust) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 10 Dec 2020 08:45:22 +1000 > From: leslie elliott > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Protocol limits > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > hi all > > we run a large number of shares from CES servers connected to a single > scale cluster > we understand the current supported limit is 1000 SMB shares, we run the > same number of NFS shares > > we also understand that using external CES cluster to increase that limit > is not supported based on the documentation, we use the same authentication > for all shares, we do have additional use cases for sharing where this > pathway would be attractive going forward > > so the question becomes if we need to run 20000 SMB and NFS shares off a > scale cluster is there any hardware design we can use to do this whilst > maintaining support > > I have submitted a support request to ask if this can be done but thought I > would ask the collective good if this has already been solved > > thanks > > leslie > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Thu, 10 Dec 2020 00:21:03 +0100 > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Protocol limits > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > My understanding of these limits are that they are to limit the > configuration files from becoming too large, which makes > changing/processing them somewhat slow. > > For SMB shares, you might be able to limit the number of configured shares > by using wildcards in the config (%U). These wildcarded entries counts as > one share.. Don?t know if simimar tricks can be done for NFS.. > > > > -jf > > ons. 9. des. 2020 kl. 
23:45 skrev leslie elliott < > leslie.james.elliott at gmail.com>: > > > > > hi all > > > > we run a large number of shares from CES servers connected to a single > > scale cluster > > we understand the current supported limit is 1000 SMB shares, we run the > > same number of NFS shares > > > > we also understand that using external CES cluster to increase that limit > > is not supported based on the documentation, we use the same authentication > > for all shares, we do have additional use cases for sharing where this > > pathway would be attractive going forward > > > > so the question becomes if we need to run 20000 SMB and NFS shares off a > > scale cluster is there any hardware design we can use to do this whilst > > maintaining support > > > > I have submitted a support request to ask if this can be done but thought > > I would ask the collective good if this has already been solved > > > > thanks > > > > leslie > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 107, Issue 13 > *********************************************** > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Fri Dec 11 00:25:59 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 11 Dec 2020 00:25:59 +0000 Subject: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13 In-Reply-To: References: Message-ID: <44ae4273-a1aa-0206-9cf0-5971eab2efa6@strath.ac.uk> On 10/12/2020 21:59, Andrew Beattie wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > Thanks Ed, > > The UQ team are well aware of the current limits published in the FAQ. > > However the issue is not the number of physical nodes or the concurrent > user sessions, but rather the number of SMB / NFS export mounts that > Spectrum Scale supports from a single cluster or even remote mount > protocol clusters is no longer enough for their research environment. > > The current total number of Exports can not exceed 1000, which is an > issue when they have multiple thousands of research project ID?s with > users needing access to every project ID with its relevant security > permissions. > > Grouping Project ID?s under a single export isn?t a viable option as > there is no simple way to identify which research group / user is going > to request a new project ID, new project ID?s are automatically created > and allocated when a request for storage allocation is fulfilled. > > Projects ID?s (independent file sets) are published not only as SMB > exports, but are also mounted using multiple AFM cache clusters to high > performance instrument clusters, multiple HPC clusters or up to 5 > different campus access points, including remote universities. 
> > The data workflow is not a simple linear workflow > And the mixture of different types of users with requests for storage, > and storage provisioning has resulted in the University creating their > own provisioning portal which interacts with the Spectrum Scale data > fabric (multiple Spectrum Scale clusters in single global namespace, > connected via 100GB Ethernet over AFM) in multiple points to deliver the > project ID provisioning at the relevant locations specified by the user > / research group. > > One point of data surfacing, in this data fabric, is the Spectrum Scale > Protocols cluster that Les manages, which provides the central user > access point via SMB or NFS, all research users across the university > who want to access one or more of their storage allocations do so via > the SMB / NFS mount points from this specific storage cluster. I am not sure thousands of SMB exports is ever a good idea. I suspect Windows Server would keel over and die too in that scenario My suggestion would be to looking into some consolidated SMB exports and then mask it all with DFS. Though this presumes that they are not handing out "project" security credentials that are shared between multiple users. That would be very bad...... JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From hoov at us.ibm.com Thu Dec 17 18:46:40 2020 From: hoov at us.ibm.com (Theodore Hoover Jr) Date: Thu, 17 Dec 2020 18:46:40 +0000 Subject: [gpfsug-discuss] Spectrum Scale Cloud Online Survey Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16082105961220.jpg Type: image/jpeg Size: 6839 bytes Desc: not available URL: From gongwbj at cn.ibm.com Wed Dec 23 06:44:16 2020 From: gongwbj at cn.ibm.com (Wei G Gong) Date: Wed, 23 Dec 2020 14:44:16 +0800 Subject: [gpfsug-discuss] Latest Technical Blogs/Papers on IBM Spectrum Scale (2H 2020) In-Reply-To: References: Message-ID: Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past half year . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. What's New in Spectrum Scale 5.1.0? 
https://www.spectrumscaleug.org/event/ssugdigital-what-is-new-in-spectrum-scale-5-1/ Spectrum Scale User Group Digital (SSUG::Digital) https://www.spectrumscaleug.org/introducing-ssugdigital/ Cloudera Data Platform Private Cloud Base with IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5608.html?Open Implementation Guide for IBM Elastic Storage System 5000 http://www.redbooks.ibm.com/abstracts/sg248498.html?Open IBM Spectrum Scale and IBM Elastic Storage System Network Guide http://www.redbooks.ibm.com/abstracts/redp5484.html?Open Deployment and Usage Guide for Running AI Workloads on Red Hat OpenShift and NVIDIA DGX Systems with IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5610.html?Open Privileged Access Management for Secure Storage Administration: IBM Spectrum Scale with IBM Security Verify Privilege Vault http://www.redbooks.ibm.com/abstracts/redp5625.html?Open IBM Storage Solutions for SAS Analytics using IBM Spectrum Scale and IBM Elastic Storage System 3000 Version 1 Release 1 http://www.redbooks.ibm.com/abstracts/redp5609.html?Open IBM Spectrum Scale configuration for sudo based administration on defined set of administrative nodes https://community.ibm.com/community/user/storage/blogs/sandeep-patil1/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes Its a containerized world - AI with IBM Spectrum Scale and NVIDIA https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/01/its-a-containerized-world Optimize running NVIDIA GPU-enabled AI workloads with data orchestration solution https://community.ibm.com/community/user/storage/blogs/pallavi-galgali1/2020/10/05/optimize-running-nvidia-gpu-enabled-ai-workloads-w Building a better and more flexible data silo should NOT be the goal of storage or considered good https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/07/building-a-better-and-more-flexible-silo-is-not-mo Do you have a strategy to solve BIG DATA problems with an AI information architecture? 
https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/07/are-you-solving-big-problems IBM Storage a Leader in 2020 Magic Quadrant for Distributed File Systems and Object Storage https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/21/ibm-storage-a-leader-in-2020-magic-quadrant-for-di Containerized IBM Spectrum Scale brings native supercomputer performance data access to Red Hat OpenShift https://community.ibm.com/community/user/storage/blogs/matthew-geiser1/2020/10/27/containerized-ibm-spectrum-scale Cloudera now supports IBM Spectrum Scale with high performance analytics https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/30/cloudera-spectrumscale IBM Storage at Supercomputing 2020 https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/11/03/ibm-storage-at-supercomputing-2020 Empower innovation in the hybrid cloud https://community.ibm.com/community/user/storage/blogs/iliana-garcia-espinosa1/2020/11/17/empower-innovation-in-the-hybrid-cloud HPCwire Chooses University of Birmingham as Best Use of High Performance Data Analytics and AI https://community.ibm.com/community/user/storage/blogs/peter-basmajian/2020/11/18/hpcwire-chooses-university-of-birmingham-as-best-u I/O Workflow of Hadoop workloads with IBM Spectrum Scale and HDFS Transparency https://community.ibm.com/community/user/storage/blogs/chinmaya-mishra1/2020/11/19/io-workflow-hadoop-hdfs-with-ibm-spectrum-scale Workflow of a Hadoop Mapreduce job with HDFS Transparency & IBM Spectrum Scale https://community.ibm.com/community/user/storage/blogs/chinmaya-mishra1/2020/11/23/workflow-of-a-mapreduce-job-with-hdfs-transparency Hybrid cloud data sharing and collaboration with IBM Spectrum Scale Active File Management https://community.ibm.com/community/user/storage/blogs/nils-haustein1/2020/12/08/hybridcloud-usecases-with-spectrumscale-afm NOW certified: IBM Software Defined Storage for IBM Cloud Pak for Data https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/12/11/ibm-cloud-paks-now Resolving OpenStack dependencies required by the Object protocol in versions 5.1 and higher https://community.ibm.com/community/user/storage/blogs/brian-nelson1/2020/12/15/resolving-openstack-dependencies-needed-by-object Benefits and implementation of IBM Spectrum Scale\u2122 sudo wrappers https://community.ibm.com/community/user/storage/blogs/nils-haustein1/2020/12/17/spectrum-scale-sudo-wrappers Introducing Storage Suite Starter for Containers https://community.ibm.com/community/user/storage/blogs/sam-werner1/2020/12/17/storage-suite-starter-for-containers User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 2020/08/17 13:51 Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale (Q2 2020) Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past quarter . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. What?s New in Spectrum Scale 5.0.5? 
https://community.ibm.com/community/user/storage/blogs/ismael-solis-moreno1/2020/07/06/whats-new-in-spectrum-scale-505 Implementation Guide for IBM Elastic Storage System 3000 http://www.redbooks.ibm.com/abstracts/sg248443.html?Open Spectrum Scale File Audit Logging (FAL) and Watch Folder(WF) Document and Demo https://developer.ibm.com/storage/2020/05/27/spectrum-scale-file-audit-logging-fal-and-watch-folderwf-document-and-demo/ IBM Spectrum Scale with IBM QRadar - Internal Threat Detection (5 mins Demo) https://www.youtube.com/watch?v=Zyw84dvoFR8&t=1s IBM Spectrum Scale Information Lifecycle Management Policies - Practical guide https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102642 Example: https://github.com/nhaustein/spectrum-scale-policy-scripts IBM Spectrum Scale configuration for sudo based administration on defined set of administrative nodes., https://developer.ibm.com/storage/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes/ IBM Spectrum Scale Erasure Code Edition in Stretched Cluster https://developer.ibm.com/storage/2020/07/10/ibm-spectrum-scale-erasure-code-edition-in-streched-cluster/ IBM Spectrum Scale installation toolkit ? extended FQDN enhancement over releases ? 5.0.5.0 https://developer.ibm.com/storage/2020/06/12/ibm-spectrum-scale-installation-toolkit-extended-fqdn-enhancement-over-releases-5-0-5-0/ IBM Spectrum Scale Security Posture with Kibana for Visualization https://developer.ibm.com/storage/2020/05/22/ibm-spectrum-scale-security-posture-with-kibana-for-visualization/ How to Visualize IBM Spectrum Scale Security Posture on Canvas https://developer.ibm.com/storage/2020/05/22/how-to-visualize-ibm-spectrum-scale-security-posture-on-canvas/ How to add Linux machine as Active Directory client to access IBM Spectrum Scale?? 
https://developer.ibm.com/storage/2020/04/29/how-to-add-linux-machine-as-active-directory-client-to-access-ibm-spectrum-scale/ Enabling Kerberos Authentication in IBM Spectrum Scale HDFS Transparency without Ambari https://developer.ibm.com/storage/2020/04/17/enabling-kerberos-authentication-in-ibm-spectrum-scale-hdfs-transparency-without-ambari/ Configuring Spectrum Scale File Systems for Reliability https://developer.ibm.com/storage/2020/04/08/configuring-spectrum-scale-file-systems-for-reliability/ Spectrum Scale Tuning for Large Linux Clusters https://developer.ibm.com/storage/2020/04/03/spectrum-scale-tuning-for-large-linux-clusters/ Spectrum Scale Tuning for Power Architecture https://developer.ibm.com/storage/2020/03/30/spectrum-scale-tuning-for-power-architecture/ Spectrum Scale operating system and network tuning https://developer.ibm.com/storage/2020/03/27/spectrum-scale-operating-system-and-network-tuning/ How to have granular and selective secure data at rest and in motion for workloads https://developer.ibm.com/storage/2020/03/24/how-to-have-granular-and-selective-secure-data-at-rest-and-in-motion-for-workloads/ Multiprotocol File Sharing on IBM Spectrum Scalewithout an AD or LDAP server https://www.ibm.com/downloads/cas/AN9BR9NJ Securing Data on Threat Detection Using IBM Spectrum Scale and IBM QRadar: An Enhanced Cyber Resiliency Solution http://www.redbooks.ibm.com/abstracts/redp5560.html?Open For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/17/2020 01:37 PM Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale (Q3 2019 - Q1 2020) Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past 2 quarters . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. Redpaper HIPAA Compliance for Healthcare Workloads on IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5591.html?Open IBM Spectrum Scale CSI Driver For Container Persistent Storage http://www.redbooks.ibm.com/redpieces/abstracts/redp5589.html?Open Cyber Resiliency Solution for IBM Spectrum Scale , Blueprint http://www.redbooks.ibm.com/abstracts/redp5559.html?Open Enhanced Cyber Security with IBM Spectrum Scale and IBM QRadar http://www.redbooks.ibm.com/abstracts/redp5560.html?Open Monitoring and Managing the IBM Elastic Storage Server Using the GUI http://www.redbooks.ibm.com/abstracts/redp5471.html?Open IBM Hybrid Solution for Scalable Data Solutions using IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5549.html?Open IBM Spectrum Discover: Metadata Management for Deep Insight of Unstructured Storage http://www.redbooks.ibm.com/abstracts/redp5550.html?Open Monitoring and Managing IBM Spectrum Scale Using the GUI http://www.redbooks.ibm.com/abstracts/redp5458.html?Open IBM Reference Architecture for High Performance Data and AI in Healthcare and Life Sciences, http://www.redbooks.ibm.com/abstracts/redp5481.html?Open Blogs: Why Storage and HIPAA Compliance for AI & Analytics Workloads for Healthcare https://developer.ibm.com/storage/2020/03/17/why-storage-and-hipaa-compliance-for-ai-analytics-workloads-for-healthcare/ Innovation via Integration ? 
Proactively Securing Your Unstructured Data from Cyber Threats & Attacks --> This was done based on your inputs (as a part of Security Survey) last year on need for Spectrum Scale integrayion with IDS a https://developer.ibm.com/storage/2020/02/24/innovation-via-integration-proactively-securing-your-unstructured-data-from-cyber-threats-attacks/ IBM Spectrum Scale CES HDFS Transparency support https://developer.ibm.com/storage/2020/02/03/ces-hdfs-transparency-support/ How to set up a remote cluster with IBM Spectrum Scale ? steps, limitations and troubleshooting https://developer.ibm.com/storage/2020/01/27/how-to-set-up-a-remote-cluster-with-ibm-spectrum-scale-steps-limitations-and-troubleshooting/ How to use IBM Spectrum Scale with CSI Operator 1.0 on Openshift 4.2 ? sample usage scenario with Tensorflow deployment https://developer.ibm.com/storage/2020/01/20/how-to-use-ibm-spectrum-scale-with-csi-operator-1-0-on-openshift-4-2-sample-usage-scenario-with-tensorflow-deployment/ Achieving WORM like functionality from NFS/SMB clients for data on Spectrum Scale https://developer.ibm.com/storage/2020/01/10/achieving-worm-like-functionality-from-nfs-smb-clients-for-data-on-spectrum-scale/ IBM Spectrum Scale CSI driver video blogs, https://developer.ibm.com/storage/2019/12/26/ibm-spectrum-scale-csi-driver-video-blogs/ IBM Spectrum Scale CSI Driver v1.0.0 released https://developer.ibm.com/storage/2019/12/10/ibm-spectrum-scale-csi-driver-v1-0-0-released/ Now configure IBM? Spectrum Scale with Overlapping UNIXMAP ranges https://developer.ibm.com/storage/2019/11/12/now-configure-ibm-spectrum-scale-with-overlapping-unixmap-ranges/ ?mmadquery?, a Powerful tool helps check AD settings from Spectrum Scale https://developer.ibm.com/storage/2019/11/11/mmadquery-a-powerful-tool-helps-check-ad-settings-from-spectrum-scale/ Spectrum Scale Data Security Modes, https://developer.ibm.com/storage/2019/10/31/spectrum-scale-data-security-modes/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.4 ? https://developer.ibm.com/storage/2019/10/25/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-4/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.4.0 https://developer.ibm.com/storage/2019/10/18/ibm-spectrum-scale-installation-toolkit-enhancements-over-releases-5-0-4-0/ IBM Spectrum Scale CSI driver beta on GitHub, https://developer.ibm.com/storage/2019/09/26/ibm-spectrum-scale-csi-driver-on-github/ Help Article: Care to be taken when configuring AD with RFC2307 https://developer.ibm.com/storage/2019/09/18/help-article-care-to-be-taken-when-configuring-ad-with-rfc2307/ IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration https://developer.ibm.com/storage/2019/09/10/ibm-spectrum-scale-erasure-code-edition-ece-installation-demonstration/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 09/03/2019 10:58 AM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q2 2019) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q2 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper : IBM Power Systems Enterprise AI Solutions (W/ SPECTRUM SCALE) http://www.redbooks.ibm.com/redpieces/abstracts/redp5556.html?Open IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration https://www.youtube.com/watch?v=6If50EvgP-U Blogs: Using IBM Spectrum Scale as platform storage for running containerized Hadoop/Spark workloads https://developer.ibm.com/storage/2019/08/27/using-ibm-spectrum-scale-as-platform-storage-for-running-containerized-hadoop-spark-workloads/ Useful Tools for Spectrum Scale CES NFS https://developer.ibm.com/storage/2019/07/22/useful-tools-for-spectrum-scale-ces-nfs/ How to ensure NFS uses strong encryption algorithms for secure data in motion ? https://developer.ibm.com/storage/2019/07/19/how-to-ensure-nfs-uses-strong-encryption-algorithms-for-secure-data-in-motion/ Introducing IBM Spectrum Scale Erasure Code Edition https://developer.ibm.com/storage/2019/07/07/introducing-ibm-spectrum-scale-erasure-code-edition/ Spectrum Scale: Which Filesystem Encryption Algo to Consider ? https://developer.ibm.com/storage/2019/07/01/spectrum-scale-which-filesystem-encryption-algo-to-consider/ IBM Spectrum Scale HDFS Transparency Apache Hadoop 3.1.x Support https://developer.ibm.com/storage/2019/06/24/ibm-spectrum-scale-hdfs-transparency-apache-hadoop-3-0-x-support/ Enhanced features in Elastic Storage Server (ESS) 5.3.4 https://developer.ibm.com/storage/2019/06/19/enhanced-features-in-elastic-storage-server-ess-5-3-4/ Upgrading IBM Spectrum Scale Erasure Code Edition using installation toolkit https://developer.ibm.com/storage/2019/06/09/upgrading-ibm-spectrum-scale-erasure-code-edition-using-installation-toolkit/ Upgrading IBM Spectrum Scale sync replication / stretch cluster setup in PureApp https://developer.ibm.com/storage/2019/06/06/upgrading-ibm-spectrum-scale-sync-replication-stretch-cluster-setup/ GPFS config remote access with multiple network definitions https://developer.ibm.com/storage/2019/05/30/gpfs-config-remote-access-with-multiple-network-definitions/ IBM Spectrum Scale Erasure Code Edition Fault Tolerance https://developer.ibm.com/storage/2019/05/30/ibm-spectrum-scale-erasure-code-edition-fault-tolerance/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.3 ? 
https://developer.ibm.com/storage/2019/05/02/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-3/ Understanding and Solving WBC_ERR_DOMAIN_NOT_FOUND error with Spectrum?Scale https://crk10.wordpress.com/2019/07/21/solving-the-wbc-err-domain-not-found-nt-status-none-mapped-glitch-in-ibm-spectrum-scale/ Understanding and Solving NT_STATUS_INVALID_SID issue for SMB access with Spectrum?Scale https://crk10.wordpress.com/2019/07/24/solving-nt_status_invalid_sid-for-smb-share-access-in-ibm-spectrum-scale/ mmadquery primer (apparatus to query Active Directory from IBM Spectrum?Scale) https://crk10.wordpress.com/2019/07/27/mmadquery-primer-apparatus-to-query-active-directory-from-ibm-spectrum-scale/ How to configure RHEL host as Active Directory Client using?SSSD https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-active-directory-client-using-sssd/ How to configure RHEL host as LDAP client using?nslcd https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-ldap-client-using-nslcd/ Solving NFSv4 AUTH_SYS nobody ownership?issue https://crk10.wordpress.com/2019/07/29/nfsv4-auth_sys-nobody-ownership-and-idmapd/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list of all blogs and collaterals. https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 04/29/2019 12:12 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q1 2019) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Spectrum Scale 5.0.3 https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/ IBM Spectrum Scale HDFS Transparency Ranger Support https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/ Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally, http://www.redbooks.ibm.com/abstracts/redp5527.html?Open Spectrum Scale user group in Singapore, 2019 https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/ 7 traits to use Spectrum Scale to run container workload https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/ Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring Framework https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/ Migrating data from native HDFS to IBM Spectrum Scale based shared storage https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/ Bulk File Creation useful for Test on Filesystems https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 01/14/2019 06:24 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing your business data to support regulatory requirements http://www.redbooks.ibm.com/abstracts/redp5525.html?Open IBM Spectrum Scale Memory Usage https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2 Spectrum Scale and Containers https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/ IBM Elastic Storage Server Performance Graphical Visualization with Grafana https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/ Hadoop Performance for disaggregated compute and storage configurations based on IBM Spectrum Scale Storage https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/ EMS HA in ESS LE (Little Endian) environment https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/ What?s new in ESS 5.3.2 https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/ Administer your Spectrum Scale cluster easily https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/ Disaster Recovery using Spectrum Scale?s Active File Management https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/ Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS) https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/ Whats new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1 https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 10/03/2018 08:48 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. How NFS exports became more dynamic with Spectrum Scale 5.0.2 https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/ HPC storage on AWS (IBM Spectrum Scale) https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/ Upgrade with Excluding the node(s) using Install-toolkit https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/ Offline upgrade using Install-toolkit https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/ What?s New in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/ Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails. 
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.2.0 https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/ Announcing HDP 3.0 support with IBM Spectrum Scale https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/ IBM Spectrum Scale Tuning Overview for Hadoop Workload https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/ Making the Most of Multicloud Storage https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/ Disaster Recovery for Transparent Cloud Tiering using SOBAR https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/ Your Optimal Choice of AI Storage for Today and Tomorrow https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/ Analyze IBM Spectrum Scale File Access Audit with ELK Stack https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/ Mellanox SX1710 40G switch MLAG configuration for IBM ESS https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? SMB and NFS Access issues https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/ Access Control in IBM Spectrum Scale Object https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/ IBM Spectrum Scale HDFS Transparency Docker support https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? Log Collection https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/ Redpapers IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases http://www.redbooks.ibm.com/abstracts/redp5507.html?Open Certifications Assessment of the immutability function of IBM Spectrum Scale Version 5.0 in accordance to US SEC17a-4f, EU GDPR Article 21 Section 1, German and Swiss laws and regulations in collaboration with KPMG. Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5 Full assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 07/03/2018 12:13 AM Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018) Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q2 2018). We now have over 100+ developer blogs. As discussed in User Groups, passing it along: IBM Spectrum Scale 5.0.1 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ IBM Spectrum Scale ILM Policies https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/ IBM Spectrum Scale 5.0.1 ? 
Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ Management GUI enhancements in IBM Spectrum Scale release 5.0.1 https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/ Managing IBM Spectrum Scale services through GUI https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/ Use AWS CLI with IBM Spectrum Scale? object storage https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/ Hadoop Storage Tiering with IBM Spectrum Scale https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/ How many Files on my Filesystem? https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/ Recording Spectrum Scale Object Stats for Potential Billing like Purpose using Elasticsearch https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/ New features in IBM Elastic Storage Server (ESS) Version 5.3 https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/ Using IBM Spectrum Scale for storage in IBM Cloud Private (Missed to send earlier) https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19 Redpapers Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html, Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent Cloud Tiering http://www.redbooks.ibm.com/abstracts/redp5411.html?Open SAP HANA and ESS: A Winning Combination (Update) http://www.redbooks.ibm.com/abstracts/redp5436.html?Open Others IBM Spectrum Scale Software Version Recommendation Preventive Service Planning (Updated) http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703, IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in HCLS https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN& For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/27/2018 05:23 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q1 2018). As discussed in User Groups, passing it along: GDPR Compliance and Unstructured Data Storage https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/ IBM Spectrum Scale for Linux on IBM Z ? Release 5.0 features and highlights https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/ Management GUI enhancements in IBM Spectrum Scale release 5.0.0 https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/ IBM Spectrum Scale 5.0.0 ? What?s new in NFS? 
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/
Benefits and implementation of Spectrum Scale sudo wrappers
https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/
IBM Spectrum Scale: Big Data and Analytics Solution Brief
https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/
Variant Sub-blocks in Spectrum Scale 5.0
https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/
Compression support in Spectrum Scale 5.0.0
https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/
IBM Spectrum Scale Versus Apache Hadoop HDFS
https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/
ESS Fault Tolerance
https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/
Genomic Workloads - How To Get it Right From Infrastructure Point Of View.
https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/
IBM Spectrum Scale On AWS Cloud: This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps the users who require highly available access to a shared name space across multiple instances with good
From jonathan.buzzard at strath.ac.uk Tue Dec 1 19:30:21 2020
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 1 Dec 2020 19:30:21 +0000
Subject: [gpfsug-discuss] memory needed for gpfs clients
In-Reply-To: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org>
References: <17B5008F-5BCC-4E20-889D-7B5A801F5B71@nygenome.org>
Message-ID: <03389b6f-1b69-29a1-9aff-58dc490b2431@strath.ac.uk>

On 01/12/2020 19:07, Christopher Black wrote:
> CAUTION: This email originated outside the University. Check before clicking links or attachments.
>
> We tune vm-related sysctl values on our gpfs clients.
> These are values we use for 256GB+ mem hpc nodes:
> vm.min_free_kbytes=2097152
> vm.dirty_bytes = 3435973836
> vm.dirty_background_bytes = 1717986918
>
> The vm.dirty parameters are to prevent NFS from buffering huge
> amounts of writes and then pushing them over the network all at once
> flooding out gpfs traffic.
>
> I'd also recommend checking client gpfs parameters pagepool and/or
> pagepoolMaxPhysMemPct to ensure you have a reasonable and understood
> limit for how much memory mmfsd will use.
>

We take a different approach and tackle it from the other end. Basically we use slurm to limit user processes to 4GB per core, which we find is more than enough for 99% of jobs. For people needing more, there are some dedicated large memory nodes with 3TB of RAM. We have seen well over 1TB of RAM being used by a single user on occasion (generating large meshes usually). I don't think there is any limit on RAM on those nodes.

The compute nodes are dual Xeon 6138 with 192GB of RAM, which works out at 4.8GB of RAM per core. Basically it stops the machines running out of RAM for *any* administrative tasks, not just GPFS.

We did originally try running it closer to the wire, but it appears anecdotally cgroups is not perfect and it is possible for users to get a bit over their limits, so we lowered it back down to 4GB per core. Noting that is what the tender for the machine was, but due to the number of DIMM slots and cores in the CPU, we ended up with a bit more RAM per core.

We have had no memory starvation issues now in ~2 years since we went down to 4GB per core for jobs.

JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From cblack at nygenome.org Tue Dec 1 19:26:25 2020
From: cblack at nygenome.org (Christopher Black)
Date: Tue, 1 Dec 2020 19:26:25 +0000
Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE
Message-ID: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org>

+1 from me.
Someone did a building block install for us and named a couple io nodes with initial upper case (unlike all other unix hostnames in our env which are all lowercase). For a while it just bothered us, and we complained occasionally to hear that it was not easy to change. Over two years after install a case-sensitive bug in call home hit us on those two io nodes. Best, Chris From: on behalf of Bryan Banister Reply-To: gpfsug main discussion list Date: Tuesday, December 1, 2020 at 2:16 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Hey all? Hope all your clusters are up and performing well? Got a new RFE (I searched and didn?t find anything like it) for your consideration. The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn?t a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue Dec 1 22:09:01 2020 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 1 Dec 2020 22:09:01 +0000 Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE In-Reply-To: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> References: <7FEC2C4A-1FBC-495E-BE6F-D3E17B47C63E@nygenome.org> Message-ID: Just for clarification, this RFE is for changing the name of the Network Shared Disk device used to store data for file systems, not a NSD I/O server node name, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Christopher Black Sent: Tuesday, December 1, 2020 1:26 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE [EXTERNAL EMAIL] +1 from me. Someone did a building block install for us and named a couple io nodes with initial upper case (unlike all other unix hostnames in our env which are all lowercase). For a while it just bothered us, and we complained occasionally to hear that it was not easy to change. Over two years after install a case-sensitive bug in call home hit us on those two io nodes. Best, Chris From: > on behalf of Bryan Banister > Reply-To: gpfsug main discussion list > Date: Tuesday, December 1, 2020 at 2:16 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] RFE upvote request for "Change NSD Name" RFE Hey all? Hope all your clusters are up and performing well? Got a new RFE (I searched and didn?t find anything like it) for your consideration. 
The ability to change the name of an existing NSD: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=147125 We embed information into the NSD name, which sometimes needs to be updated. However there isn?t a way to simply change the NSD name. You can update the NSD ServerList, but not the name. You can remove the NSD from a file system, delete it, then recreate with a new name and add it back into the file system, but there are obvious risks and serious space and performance impacts to production file systems when performing these operations. Thanks! -Bryan ________________________________ This message is for the recipient?s use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wsawdon at us.ibm.com Tue Dec 1 22:41:49 2020 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Tue, 1 Dec 2020 17:41:49 -0500 Subject: [gpfsug-discuss] internal details on GPFS inode expansion In-Reply-To: References: Message-ID: Dave Johnson at ddj at brown.edu asks: When GPFS needs to add inodes to the filesystem, it seems to pre-create about 4 million of them. Judging by the logs, it seems it only takes a few (13 maybe) seconds to do this. However we are suspecting that this might only be to request the additional inodes and that there is some background activity for some time afterwards. Would someone who has knowledge of the actual internals be willing to confirm or deny this, and if there is background activity, is it on all nodes in the cluster, NSD nodes, "default worker nodes"? Inodes are typically 4KB and reside ondisk in full blocks in the "inode 0 file". For every inode there is also an entry in the "inode allocation map" which indicates the inode's status (eg free, inuse). To add inodes we have to add data to both. First we determine how many inodes to add (eg always add full blocks of inodes, etc), then how many "passes" will it take to add them (the "passes" are an artifact of the inode map layout). Adding the inodes themselves involves writing blocks of free inodes. This is multi-threaded on a single node. Adding to the inode map, may involve adding more inode map "segments" or just using free space in the current segments. If adding segments these are formatted and written by multiple threads on a single node, Once the on-disk data structures are complete we update the in-memory structures to reflect that all of the new inodes are free and we update the "stripe group descriptor" and broadcast it to all the nodes that have the file system mounted. In old code - say pre 4.1 or 4.2 -- we went through another step to reread all of the inode allocation map back into memory to recompute the number of free inodes. That would have been in parallel on all the nodes that had the file system mounted. Around 4.2 or so this was changed to simply update the in-memory counters (since we know how many inodes were added, there is no need to recount them). So, adding 4M inodes involves writing a little more than 16 GB of metadata to the disk, then cycle through the in-memory data structures. 
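As a rough sanity check of those figures, using only the numbers already given above (4M = 4,194,304 inodes at 4 KiB each) and assuming bc is to hand:

   $ echo '4194304 * 4096 / (1024^3)' | bc        # inode blocks written, in GiB
   16
   $ echo 'scale=2; 16 / 13' | bc                 # GiB per second over the ~13 seconds seen in the logs
   1.23

The inode allocation map entries come on top of that, which is why it ends up being "a little more than" 16 GB.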
Writing 16 GB in 13 seconds works out to a little more than 1 GB/s. Sounds reasonable. -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From dugan at bu.edu Fri Dec 4 14:54:07 2020 From: dugan at bu.edu (Dugan, Michael J) Date: Fri, 4 Dec 2020 14:54:07 +0000 Subject: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? In-Reply-To: References: <1388247256.209171.1605555854969@privateemail.com> , Message-ID: I have a cluster with two filesystems and I need to migrate a fileset from one to the other. I would normally do this with tar and rsync but I decided to experiment with AFM following the document below. In my test setup I'm finding that hardlinks are not preserved by the migration. Is that expected or am I doing something wrong? I'm using gpfs-5.0.5.4. Thanks. --Mike -- Michael J. Dugan Manager of Systems Programming and Administration Research Computing Services | IS&T | Boston University 617-358-0030 dugan at bu.edu http://www.bu.edu/tech ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Venkateswara R Puvvada Sent: Monday, November 23, 2020 9:41 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? AFM provides near zero downtime for migration. As of today, AFM migration does not support ACLs or other EAs migration from non scale (GPFS) source. https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_uc_migrationusingafmmigrationenhancements.htm ~Venkat (vpuvvada at in.ibm.com) From: "Frederick Stock" To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Date: 11/17/2020 03:14 AM Subject: [EXTERNAL] Re: [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Have you considered using the AFM feature of Spectrum Scale? I doubt it will provide any speed improvement but it would allow for data to be accessed as it was being migrated. Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: Andi Christiansen Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [EXTERNAL] [gpfsug-discuss] Migrate/syncronize data from Isilon to Scale over NFS? Date: Mon, Nov 16, 2020 2:44 PM Hi all, i have got a case where a customer wants 700TB migrated from isilon to Scale and the only way for him is exporting the same directory on NFS from two different nodes... as of now we are using multiple rsync processes on different parts of folders within the main directory. this is really slow and will take forever.. right now 14 rsync processes spread across 3 nodes fetching from 2.. does anyone know of a way to speed it up? right now we see from 1Gbit to 3Gbit if we are lucky(total bandwidth) and there is a total of 30Gbit from scale nodes and 20Gbits from isilon so we should be able to reach just under 20Gbit... if anyone have any ideas they are welcome! 
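One thing that sometimes helps with this kind of pull is to stop hand-carving the directory tree and let a small pool of workers fan out over the top-level directories instead. A rough sketch only - the mount point, the target path and the worker count of 8 are invented and need adjusting, and it assumes GNU find/xargs and rsync on the pulling nodes:

   # run on each pulling Scale node against its own NFS mount of the Isilon export
   cd /mnt/isilon/projects
   find . -mindepth 1 -maxdepth 1 -type d -print0 | \
       xargs -0 -P 8 -I{} rsync -aHAX --numeric-ids {} /gpfs/fs0/projects/

Two caveats: rsync -H only preserves hard links within a single invocation, so links that span two top-level directories (or two separate runs) arrive as independent files, and whether ACLs/xattrs (-A/-X) survive at all depends on what the Isilon NFS export actually exposes. Loose files sitting directly in the export root also need a separate pass, since the find above only picks up directories.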
Thanks in advance Andi Christiansen _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Sun Dec 6 11:16:13 2020 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Sun, 06 Dec 2020 11:16:13 +0000 Subject: [gpfsug-discuss] SSUG Quick survey Message-ID: <1DDA0629-30F4-4533-9E04-63ECB2ED17ED@spectrumscale.org> On Friday in the webinar, we did some live polling of the attendees. I?m still interested in people filling in the questions ? it isn?t long and will help us with planning UG events as well. I thought it would expire when the 24 hour period was up, but it looks like in survey mode, you can still complete it: https://ahaslides.com/SSUG2020 I?ll take it down at 17:00 GMT on Wednesday 9th December, so please take 5 minutes to fill in ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From andi at christiansen.xxx Mon Dec 7 20:15:23 2020 From: andi at christiansen.xxx (Andi Christiansen) Date: Mon, 7 Dec 2020 21:15:23 +0100 (CET) Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Message-ID: <429895590.51808.1607372123687@privateemail.com> Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. 
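For reference, on a system whose subscription does include the OpenStack entitlement, the two repository ids named in the documentation quoted above can normally be listed and switched on along these lines (a sketch only; it does nothing if the entitlement is simply not there, which is exactly the licensing question here):

   subscription-manager list --available --all | grep -i openstack
   subscription-manager repos --list | grep -E 'openstack-16|codeready-builder'
   subscription-manager repos --enable=codeready-builder-for-rhel-8-x86_64-rpms
   subscription-manager repos --enable=openstack-16-for-rhel-8-x86_64-rpms

If nothing OpenStack-related shows up in the first command at all, that points at the subscription/entitlement rather than at anything Scale-specific.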
Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Dec 7 22:37:43 2020 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Mon, 7 Dec 2020 22:37:43 +0000 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: <429895590.51808.1607372123687@privateemail.com> References: <429895590.51808.1607372123687@privateemail.com> Message-ID: Codeready I think you can just enable with subscription-manager, but it is disabled by default. RHOSP is an additional license. But as it says ?typically?, one might assume using the community releases is also possible, e.g. : http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/ There were some statements last year about IBM support for openstack (https://www.spectrumscaleug.org/wp-content/uploads/2019/11/SC19-IBM-Spectrum-Scale-ESS-Update.pdf slide 26, though that mentions cinder). I believe it is still expected to work, but that support would be via Red Hat subscription, or community support via the community repos as above. Carl or someone can probably give the IBM statement on this ? Simon From: on behalf of "andi at christiansen.xxx" Reply to: "gpfsug-discuss at spectrumscale.org" Date: Monday, 7 December 2020 at 20:15 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? 
propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen -------------- next part -------------- An HTML attachment was scrubbed... URL: From brnelson at us.ibm.com Tue Dec 8 01:07:46 2020 From: brnelson at us.ibm.com (Brian Nelson) Date: Mon, 7 Dec 2020 19:07:46 -0600 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Message-ID: The Spectrum Scale releases prior to 5.1 included all of the dependent packages needed by OpenStack along with the Object protocol. Although initially done because the platform repos did not have the necessary dependent packages, eventually it introduced significant difficulties in terms of keeping the growing number of dependent packages current with the latest functionality and security fixes. To ensure that bug and security fixes can be delivered as soon as possible, the switch was made to use the platform-specific repos for the dependencies rather than including them with the Scale installer. Unfortunately, this has made the install more complicated as these system repos need to be configured on the system. The subscription pool with the OpenStack repos is typically not enabled by default. To see if your subscription has the necessary repos, use the command "subscription-manager list --all --available" and search for OpenStack. If found, use the Pool ID to add the subscription to your system with the command: "subscription-manager attach --pool=PoolID". Once the pool has been added, then the repos openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 should be able to be added to the subscription-manager. If the subscription list does not show any subscriptions with OpenStack resources, then it may be necessary to add an applicable subscription, such as the "Red Hat OpenStack Platform" subscription. -Brian =================================== Brian Nelson 512-286-7735 (T/L) 363-7735 IBM Spectrum Scale brnelson at us.ibm.com ----- Forwarded by Brian Nelson/Austin/IBM on 12/07/2020 06:06 PM ----- ----- Original message ----- From: Andi Christiansen Sent by: gpfsug-discuss-bounces at spectrumscale.org To: "gpfsug-discuss at spectrumscale.org" Cc: Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. Date: Mon, Dec 7, 2020 3:15 PM Hi All, Merry christmas to everyone listening in! I hope someone can shed some light on this as its starting to annoy me that i cant get any information other than whats in the documentation which for this part is not very fullfilling.. atleast for me it isnt.. I am currently discussing with IBM Support about the Spectrum Scale Object install procedure for v5.1.0.1 because alot of dependencies is missing when trying to install it. 
Link to the documentation: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_manualobjinstall.htm And as i could read in the documentation before i contacted IBM and what they said to me after i created the ticket is that "The installation of spectrum-scale-object requires that the repositories for the OpenStack packages and their dependencies are available, which are typically the openstack-16-for-rhel-8 and codeready-builder-for-rhel-8 repositories" And here is the funny part, i dont like the word "Typically" as if we are to guess where to find the dependencies.. i get the idea of moving the packages away from the spectrum scale package to be sure they are up-to-date when installed rather than old versions lying around until a new .x version releases.. but any who, even trying to enable those two repos have proved difficult as they are simply not available on my system.. hence why i still have a lot of dependencies missing.. My theory is that to have those repos shown to my system i would need another redhat license than the "server license" i already have? propably some sort of Redhat Openstack license? Can any one confirm if this is the case? If it is i guess that means that IBM is now pushing a new license ontop of customers if they want to use the new Object release with the 5.1.0.1 version... and that will be it for me.. ill look some where else then for the object/s3 part.. Sorry if i come across as angry but im starting to get alittle annoyed at IBM :) We were using S3 on the previous release but in the end could'nt really use it because of the limitations of the old way they implemented it and we're told there was a new backend coming which had all the features needed but then they pulled it from the .0 version without notice and we had already upgraded from 4.x.x.x to 5.1.x.x and had to find out the hard way.. most of you propably read the old discussion i started about an alternative to scale object/s3.. Best Regards Andi Christiansen _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.vieser at 1und1.de Tue Dec 8 10:42:31 2020 From: christian.vieser at 1und1.de (Christian Vieser) Date: Tue, 8 Dec 2020 11:42:31 +0100 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: References: <429895590.51808.1607372123687@privateemail.com> Message-ID: Hi all, yesterday I just had the same thoughts as Andi. Monday morning, and very happy to see the long awaited 5.1.0.1 release on FixCentral. And then: WTF! First there is no object in 5.1.0.0 at all, and then in 5.1.0.1 all dependencies are missing! And not one single sentence about this in release notes or Readme. Nothing! No explanation that they are missing, why they are missing and where to find the officials repo for them. Today Simon saved my day: Simon Thompson wrote: > > Codeready I think you can just enable with subscription-manager, but > it is disabled by default. RHOSP is an additional license. But as it > says ?typically?, one might assume using the community releases is > also possible, > > e.g. 
: http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/ > Since IBM support told me months ago, that 5.1 will be based on the train release, I added the repo http://mirror.centos.org/centos/8/cloud/x86_64/openstack-train/ on my test server and now the 5.1.0.1 object rpms installed successfully. Question remains, if we should stay on the Train packages or if we can / should use the newer packages from Openstack Victoria. But now I read the upgrade instructions at https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/bl1ins_updateobj424.htm and all hope is gone. No rolling upgrade if your cluster is running object protocol. You have to upgrade to RHEL8 / CentOS8 first, for upgrading the Spectrum Scale object packages a downtime for the object service has to be scheduled. And yes, here, hided in the upgrade instructions we can find the information about the needed repos: Ensure that the following system repositories are enabled. |openstack-16-for-rhel-8-x86_64-rpms codeready-builder-for-rhel-8-x86_64-rpms| So, I'm very curious now, if I can manage to do a rolling upgrade of my test cluster from CentOS 7 to CentOS 8 and Spectrum Scale 5.0.5 to 5.1.0.1 core + NFS and then upgrading the object part while having all other services up and running. I will report here. Regards, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue Dec 8 18:14:20 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 8 Dec 2020 18:14:20 +0000 Subject: [gpfsug-discuss] Spectrum Scale 5.1.0.1 Object install / Redhat repos. In-Reply-To: References: <429895590.51808.1607372123687@privateemail.com> Message-ID: <1aaaa8c1-0c4e-e78f-d9b3-9f1a4c56f9d1@strath.ac.uk> On 07/12/2020 22:37, Simon Thompson wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > > Codeready I think you can just enable with subscription-manager, but it > is disabled by default. RHOSP is an additional license. But as it says > ?typically?, one might assume using the community releases is also > possible, > If you have not already seen the bomb shell that is the end of CentOS (or at least it's transformation into the alpha version of the next RHEL beta) that's not going to work for much longer. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From mutantllama at gmail.com Wed Dec 9 01:08:02 2020 From: mutantllama at gmail.com (Carl) Date: Wed, 9 Dec 2020 12:08:02 +1100 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: Hi all, With the announcement of Centos 8 moving to stream https://blog.centos.org/2020/12/future-is-centos-stream/ Will Centos still be considered a clone OS? https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html#linuxclone What does this mean for the future for support for folk that are running Centos? Cheers, Carl. From carlz at us.ibm.com Wed Dec 9 14:02:27 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 9 Dec 2020 14:02:27 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> We don?t have an official statement yet, however I did want to give you all an indication of our early thinking on this. 
Our initial reaction is that this won?t change Scale?s support position on CentOS, as documented in the FAQ: it?s not officially supported, we?ll make best effort to support you where issues are not specific to the distro, but we reserve the right to ask for replication on a supported OS (typically RHEL). In particular, those of you using CentOS will need to pay close attention to the version of the kernel you are running, and ensure that it?s a supported one. We?ll share more as soon as we know it ourselves. Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1774123721] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From jonathan.buzzard at strath.ac.uk Wed Dec 9 15:35:04 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 9 Dec 2020 15:35:04 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos In-Reply-To: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> References: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> Message-ID: <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk> On 09/12/2020 14:02, Carl Zetie - carlz at us.ibm.com wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > > We don?t have an official statement yet, however I did want to give you > all an indication of our early thinking on this. Er yes we do, from an IBM employee, because remember RedHat is now IBM owned, and the majority of the people making this decision are RedHat and thus IBM employees. So I quote "If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options." Or translated bend over and get the lube out. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Wed Dec 9 16:22:26 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 9 Dec 2020 16:22:26 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos In-Reply-To: References: Message-ID: <71881295-d7f3-cc9a-abd6-b855dc2f9e5d@strath.ac.uk> On 09/12/2020 01:08, Carl wrote: > CAUTION: This email originated outside the University. Check before clicking links or attachments. > > Hi all, > > With the announcement of Centos 8 moving to stream > https://blog.centos.org/2020/12/future-is-centos-stream> > Will Centos still be considered a clone OS? > https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html#linuxclone> > What does this mean for the future for support for folk that are running Centos? > https://centos.rip/ -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jnason at redlineperf.com Wed Dec 9 16:36:50 2020 From: jnason at redlineperf.com (Jill Nason) Date: Wed, 9 Dec 2020 11:36:50 -0500 Subject: [gpfsug-discuss] Job Opportunity: HPC Storage Engineer at NASA Goddard (DC) Message-ID: Good morning everyone. We have an extraordinary opportunity for an HPC Storage Engineer at NASA Goddard. This is a great opportunity for someone with a passion for IBM Spectrum Scale and NASA. 
Another great advantage of this opportunity is being a stone's throw from Washington D.C. Learn more about this opportunity and the required skill set by clicking the job posting below. If you have any specific questions please feel free to reach out to me. HPC Storage Engineer -- Jill Nason RedLine Performance Solutions, LLC jnason at redlineperf.com (301)685-5949 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed Dec 9 21:27:34 2020 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 9 Dec 2020 16:27:34 -0500 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos In-Reply-To: <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk> References: <9EA6A886-F6D2-4816-9192-A7852F12A7F5@us.ibm.com> <7e18ed6a-0b0e-d1c4-402d-2ca39f73e84e@strath.ac.uk> Message-ID: <6D3E6378-9062-4A53-888C-7609BAC1BBBE@ulmer.org> I have some hope about this? not a lot, but there is one path where it could go well: In particular, I?m hoping that after CentOS goes stream-only RHEL goes release-only, with regular (weekly?) minor release that are actually versioned together (as opposed to ?here are some fixes for RHEL 8.x, good luck explaining where you are without a complete package version map?). The entire idea of a ?stream? for enterprise customers is ludicrous. If you are using the CentOS stream, there should be nothing preventing you from locking in at whatever package versions are in the RHEL release you want to be like. If those get published we?re not entirely in the same spot as before, but not completely screwed. TO say it another way, I hope that CentOS Stream will replace RHEL 8 Stream, and that RHEL 8 Stream will go away. Hopefully that works out, otherwise the RHEL install base will begin shrinking because there will be no free place to start. I am not employed by, and do not speak for IBM (or even myself if my wife is in the room). -- Stephen > On Dec 9, 2020, at 10:35 AM, Jonathan Buzzard wrote: > > On 09/12/2020 14:02, Carl Zetie - carlz at us.ibm.com wrote: >> CAUTION: This email originated outside the University. Check before clicking links or attachments. >> We don?t have an official statement yet, however I did want to give you all an indication of our early thinking on this. > > Er yes we do, from an IBM employee, because remember RedHat is now IBM owned, and the majority of the people making this decision are RedHat and thus IBM employees. So I quote > > "If you are using CentOS Linux 8 in a production environment, and are > concerned that CentOS Stream will not meet your needs, we encourage > you to contact Red Hat about options." > > Or translated bend over and get the lube out. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Wed Dec 9 22:24:28 2020 From: carlz at us.ibm.com (Carl Zetie - carlz@us.ibm.com) Date: Wed, 9 Dec 2020 22:24:28 +0000 Subject: [gpfsug-discuss] Future of Spectrum Scale support for Centos Message-ID: <6F193169-FC50-48BD-9314-76354AC2F7F8@us.ibm.com> >> We don?t have an official statement yet, however I did want to give you >> all an indication of our early thinking on this. 
>Er yes we do, from an IBM employee, because remember RedHat is now IBM >owned, and the majority of the people making this decision are RedHat >and thus IBM employees. ?We? meaning Spectrum Scale development. To reiterate, so far we don?t think this changes Spectrum Scale?s existing policy on CentOS support. Carl Zetie Program Director Offering Management Spectrum Scale ---- (919) 473 3318 ][ Research Triangle Park carlz at us.ibm.com [signature_1992429596] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 69558 bytes Desc: image001.png URL: From leslie.james.elliott at gmail.com Wed Dec 9 22:45:22 2020 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Thu, 10 Dec 2020 08:45:22 +1000 Subject: [gpfsug-discuss] Protocol limits Message-ID: hi all we run a large number of shares from CES servers connected to a single scale cluster we understand the current supported limit is 1000 SMB shares, we run the same number of NFS shares we also understand that using external CES cluster to increase that limit is not supported based on the documentation, we use the same authentication for all shares, we do have additional use cases for sharing where this pathway would be attractive going forward so the question becomes if we need to run 20000 SMB and NFS shares off a scale cluster is there any hardware design we can use to do this whilst maintaining support I have submitted a support request to ask if this can be done but thought I would ask the collective good if this has already been solved thanks leslie -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Dec 9 23:21:03 2020 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 10 Dec 2020 00:21:03 +0100 Subject: [gpfsug-discuss] Protocol limits In-Reply-To: References: Message-ID: My understanding of these limits are that they are to limit the configuration files from becoming too large, which makes changing/processing them somewhat slow. For SMB shares, you might be able to limit the number of configured shares by using wildcards in the config (%U). These wildcarded entries counts as one share.. Don?t know if simimar tricks can be done for NFS.. -jf ons. 9. des. 2020 kl. 23:45 skrev leslie elliott < leslie.james.elliott at gmail.com>: > > hi all > > we run a large number of shares from CES servers connected to a single > scale cluster > we understand the current supported limit is 1000 SMB shares, we run the > same number of NFS shares > > we also understand that using external CES cluster to increase that limit > is not supported based on the documentation, we use the same authentication > for all shares, we do have additional use cases for sharing where this > pathway would be attractive going forward > > so the question becomes if we need to run 20000 SMB and NFS shares off a > scale cluster is there any hardware design we can use to do this whilst > maintaining support > > I have submitted a support request to ask if this can be done but thought > I would ask the collective good if this has already been solved > > thanks > > leslie > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eboyd at us.ibm.com Thu Dec 10 14:41:04 2020 From: eboyd at us.ibm.com (Edward Boyd) Date: Thu, 10 Dec 2020 14:41:04 +0000 Subject: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13 In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Thu Dec 10 21:59:04 2020 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Thu, 10 Dec 2020 21:59:04 +0000 Subject: [gpfsug-discuss] =?utf-8?q?Contents_of_gpfsug-discuss_Digest=2C_V?= =?utf-8?q?ol_107=2C=09Issue_13?= In-Reply-To: Message-ID: Thanks Ed, The UQ team are well aware of the current limits published in the FAQ. However the issue is not the number of physical nodes or the concurrent user sessions, but rather the number of SMB / NFS export mounts that Spectrum Scale supports from a single cluster or even remote mount protocol clusters is no longer enough for their research environment. The current total number of Exports can not exceed 1000, which is an issue when they have multiple thousands of research project ID?s with users needing access to every project ID with its relevant security permissions. Grouping Project ID?s under a single export isn?t a viable option as there is no simple way to identify which research group / user is going to request a new project ID, new project ID?s are automatically created and allocated when a request for storage allocation is fulfilled. Projects ID?s (independent file sets) are published not only as SMB exports, but are also mounted using multiple AFM cache clusters to high performance instrument clusters, multiple HPC clusters or up to 5 different campus access points, including remote universities. The data workflow is not a simple linear workflow And the mixture of different types of users with requests for storage, and storage provisioning has resulted in the University creating their own provisioning portal which interacts with the Spectrum Scale data fabric (multiple Spectrum Scale clusters in single global namespace, connected via 100GB Ethernet over AFM) in multiple points to deliver the project ID provisioning at the relevant locations specified by the user / research group. One point of data surfacing, in this data fabric, is the Spectrum Scale Protocols cluster that Les manages, which provides the central user access point via SMB or NFS, all research users across the university who want to access one or more of their storage allocations do so via the SMB / NFS mount points from this specific storage cluster. Regards, Andrew Beattie File & Object Storage - Technical Lead IBM Australia & New Zealand Sent from my iPhone > On 11 Dec 2020, at 00:41, Edward Boyd wrote: > > ? > Please review the CES limits in the FAQ which states > > Q5.2: > What are some scaling considerations for the protocols function? > A5.2: > Scaling considerations for the protocols function include: > The number of protocol nodes. > If you are using SMB in any combination of other protocols you can configure only up to 16 protocol nodes. This is a hard limit and SMB cannot be enabled if there are more protocol nodes. If only NFS and Object are enabled, you can have 32 nodes configured as protocol nodes. > > The number of client connections. > A maximum of 3,000 SMB connections is recommended per protocol node with a maximum of 20,000 SMB connections per cluster. A maximum of 4,000 NFS connections per protocol node is recommended. A maximum of 2,000 Object connections per protocol nodes is recommended. 
The maximum number of connections depends on the amount of memory configured and sufficient CPU. We recommend a minimum of 64GB of memory for only Object or only NFS use cases. If you have multiple protocols enabled or if you have SMB enabled we recommend 128GB of memory on the system. > > https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html?view=kc#maxproto > Edward L. Boyd ( Ed ) > IBM Certified Client Technical Specialist, Level 2 Expert > Open Foundation, Master Certified Technical Specialist > IBM Systems, Storage Solutions > US Federal > 407-271-9210 Office / Cell / Office / Text > eboyd at us.ibm.com email > > -----gpfsug-discuss-bounces at spectrumscale.org wrote: ----- > To: gpfsug-discuss at spectrumscale.org > From: gpfsug-discuss-request at spectrumscale.org > Sent by: gpfsug-discuss-bounces at spectrumscale.org > Date: 12/10/2020 07:00AM > Subject: [EXTERNAL] gpfsug-discuss Digest, Vol 107, Issue 13 > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Protocol limits (leslie elliott) > 2. Re: Protocol limits (Jan-Frode Myklebust) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 10 Dec 2020 08:45:22 +1000 > From: leslie elliott > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Protocol limits > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > hi all > > we run a large number of shares from CES servers connected to a single > scale cluster > we understand the current supported limit is 1000 SMB shares, we run the > same number of NFS shares > > we also understand that using external CES cluster to increase that limit > is not supported based on the documentation, we use the same authentication > for all shares, we do have additional use cases for sharing where this > pathway would be attractive going forward > > so the question becomes if we need to run 20000 SMB and NFS shares off a > scale cluster is there any hardware design we can use to do this whilst > maintaining support > > I have submitted a support request to ask if this can be done but thought I > would ask the collective good if this has already been solved > > thanks > > leslie > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 2 > Date: Thu, 10 Dec 2020 00:21:03 +0100 > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Protocol limits > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > My understanding of these limits are that they are to limit the > configuration files from becoming too large, which makes > changing/processing them somewhat slow. > > For SMB shares, you might be able to limit the number of configured shares > by using wildcards in the config (%U). These wildcarded entries counts as > one share.. Don?t know if simimar tricks can be done for NFS.. > > > > -jf > > ons. 9. des. 2020 kl. 
23:45 skrev leslie elliott < > leslie.james.elliott at gmail.com>: > > > > > hi all > > > > we run a large number of shares from CES servers connected to a single > > scale cluster > > we understand the current supported limit is 1000 SMB shares, we run the > > same number of NFS shares > > > > we also understand that using external CES cluster to increase that limit > > is not supported based on the documentation, we use the same authentication > > for all shares, we do have additional use cases for sharing where this > > pathway would be attractive going forward > > > > so the question becomes if we need to run 20000 SMB and NFS shares off a > > scale cluster is there any hardware design we can use to do this whilst > > maintaining support > > > > I have submitted a support request to ask if this can be done but thought > > I would ask the collective good if this has already been solved > > > > thanks > > > > leslie > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 107, Issue 13 > *********************************************** > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Fri Dec 11 00:25:59 2020 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 11 Dec 2020 00:25:59 +0000 Subject: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13 In-Reply-To: References: Message-ID: <44ae4273-a1aa-0206-9cf0-5971eab2efa6@strath.ac.uk> On 10/12/2020 21:59, Andrew Beattie wrote: > CAUTION: This email originated outside the University. Check before > clicking links or attachments. > Thanks Ed, > > The UQ team are well aware of the current limits published in the FAQ. > > However the issue is not the number of physical nodes or the concurrent > user sessions, but rather the number of SMB / NFS export mounts that > Spectrum Scale supports from a single cluster or even remote mount > protocol clusters is no longer enough for their research environment. > > The current total number of Exports can not exceed 1000, which is an > issue when they have multiple thousands of research project ID?s with > users needing access to every project ID with its relevant security > permissions. > > Grouping Project ID?s under a single export isn?t a viable option as > there is no simple way to identify which research group / user is going > to request a new project ID, new project ID?s are automatically created > and allocated when a request for storage allocation is fulfilled. > > Projects ID?s (independent file sets) are published not only as SMB > exports, but are also mounted using multiple AFM cache clusters to high > performance instrument clusters, multiple HPC clusters or up to 5 > different campus access points, including remote universities. 
> > The data workflow is not a simple linear workflow
> And the mixture of different types of users with requests for storage,
> and storage provisioning has resulted in the University creating their
> own provisioning portal which interacts with the Spectrum Scale data
> fabric (multiple Spectrum Scale clusters in single global namespace,
> connected via 100GB Ethernet over AFM) in multiple points to deliver the
> project ID provisioning at the relevant locations specified by the user
> / research group.
>
> One point of data surfacing, in this data fabric, is the Spectrum Scale
> Protocols cluster that Les manages, which provides the central user
> access point via SMB or NFS, all research users across the university
> who want to access one or more of their storage allocations do so via
> the SMB / NFS mount points from this specific storage cluster.

I am not sure thousands of SMB exports are ever a good idea. I suspect Windows Server would keel over and die too in that scenario.

My suggestion would be to look into some consolidated SMB exports and then mask it all with DFS. Though this presumes that they are not handing out "project" security credentials that are shared between multiple users. That would be very bad......

JAB.

-- 
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From hoov at us.ibm.com Thu Dec 17 18:46:40 2020
From: hoov at us.ibm.com (Theodore Hoover Jr)
Date: Thu, 17 Dec 2020 18:46:40 +0000
Subject: [gpfsug-discuss] Spectrum Scale Cloud Online Survey
Message-ID: 

An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Image.16082105961220.jpg
Type: image/jpeg
Size: 6839 bytes
Desc: not available
URL: 

From gongwbj at cn.ibm.com Wed Dec 23 06:44:16 2020
From: gongwbj at cn.ibm.com (Wei G Gong)
Date: Wed, 23 Dec 2020 14:44:16 +0800
Subject: [gpfsug-discuss] Latest Technical Blogs/Papers on IBM Spectrum Scale (2H 2020)
In-Reply-To: 
References: 
Message-ID: 

Dear User Group Members,

In continuation of this email thread, here is a list of development blogs/Redpapers from the past half year. We now have over 100 developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list.

What's New in Spectrum Scale 5.1.0?
https://www.spectrumscaleug.org/event/ssugdigital-what-is-new-in-spectrum-scale-5-1/ Spectrum Scale User Group Digital (SSUG::Digital) https://www.spectrumscaleug.org/introducing-ssugdigital/ Cloudera Data Platform Private Cloud Base with IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5608.html?Open Implementation Guide for IBM Elastic Storage System 5000 http://www.redbooks.ibm.com/abstracts/sg248498.html?Open IBM Spectrum Scale and IBM Elastic Storage System Network Guide http://www.redbooks.ibm.com/abstracts/redp5484.html?Open Deployment and Usage Guide for Running AI Workloads on Red Hat OpenShift and NVIDIA DGX Systems with IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5610.html?Open Privileged Access Management for Secure Storage Administration: IBM Spectrum Scale with IBM Security Verify Privilege Vault http://www.redbooks.ibm.com/abstracts/redp5625.html?Open IBM Storage Solutions for SAS Analytics using IBM Spectrum Scale and IBM Elastic Storage System 3000 Version 1 Release 1 http://www.redbooks.ibm.com/abstracts/redp5609.html?Open IBM Spectrum Scale configuration for sudo based administration on defined set of administrative nodes https://community.ibm.com/community/user/storage/blogs/sandeep-patil1/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes Its a containerized world - AI with IBM Spectrum Scale and NVIDIA https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/01/its-a-containerized-world Optimize running NVIDIA GPU-enabled AI workloads with data orchestration solution https://community.ibm.com/community/user/storage/blogs/pallavi-galgali1/2020/10/05/optimize-running-nvidia-gpu-enabled-ai-workloads-w Building a better and more flexible data silo should NOT be the goal of storage or considered good https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/07/building-a-better-and-more-flexible-silo-is-not-mo Do you have a strategy to solve BIG DATA problems with an AI information architecture? 
https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/07/are-you-solving-big-problems IBM Storage a Leader in 2020 Magic Quadrant for Distributed File Systems and Object Storage https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/21/ibm-storage-a-leader-in-2020-magic-quadrant-for-di Containerized IBM Spectrum Scale brings native supercomputer performance data access to Red Hat OpenShift https://community.ibm.com/community/user/storage/blogs/matthew-geiser1/2020/10/27/containerized-ibm-spectrum-scale Cloudera now supports IBM Spectrum Scale with high performance analytics https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/10/30/cloudera-spectrumscale IBM Storage at Supercomputing 2020 https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/11/03/ibm-storage-at-supercomputing-2020 Empower innovation in the hybrid cloud https://community.ibm.com/community/user/storage/blogs/iliana-garcia-espinosa1/2020/11/17/empower-innovation-in-the-hybrid-cloud HPCwire Chooses University of Birmingham as Best Use of High Performance Data Analytics and AI https://community.ibm.com/community/user/storage/blogs/peter-basmajian/2020/11/18/hpcwire-chooses-university-of-birmingham-as-best-u I/O Workflow of Hadoop workloads with IBM Spectrum Scale and HDFS Transparency https://community.ibm.com/community/user/storage/blogs/chinmaya-mishra1/2020/11/19/io-workflow-hadoop-hdfs-with-ibm-spectrum-scale Workflow of a Hadoop Mapreduce job with HDFS Transparency & IBM Spectrum Scale https://community.ibm.com/community/user/storage/blogs/chinmaya-mishra1/2020/11/23/workflow-of-a-mapreduce-job-with-hdfs-transparency Hybrid cloud data sharing and collaboration with IBM Spectrum Scale Active File Management https://community.ibm.com/community/user/storage/blogs/nils-haustein1/2020/12/08/hybridcloud-usecases-with-spectrumscale-afm NOW certified: IBM Software Defined Storage for IBM Cloud Pak for Data https://community.ibm.com/community/user/storage/blogs/david-wohlford1/2020/12/11/ibm-cloud-paks-now Resolving OpenStack dependencies required by the Object protocol in versions 5.1 and higher https://community.ibm.com/community/user/storage/blogs/brian-nelson1/2020/12/15/resolving-openstack-dependencies-needed-by-object Benefits and implementation of IBM Spectrum Scale\u2122 sudo wrappers https://community.ibm.com/community/user/storage/blogs/nils-haustein1/2020/12/17/spectrum-scale-sudo-wrappers Introducing Storage Suite Starter for Containers https://community.ibm.com/community/user/storage/blogs/sam-werner1/2020/12/17/storage-suite-starter-for-containers User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 2020/08/17 13:51 Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale (Q2 2020) Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past quarter . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. What?s New in Spectrum Scale 5.0.5? 
https://community.ibm.com/community/user/storage/blogs/ismael-solis-moreno1/2020/07/06/whats-new-in-spectrum-scale-505 Implementation Guide for IBM Elastic Storage System 3000 http://www.redbooks.ibm.com/abstracts/sg248443.html?Open Spectrum Scale File Audit Logging (FAL) and Watch Folder(WF) Document and Demo https://developer.ibm.com/storage/2020/05/27/spectrum-scale-file-audit-logging-fal-and-watch-folderwf-document-and-demo/ IBM Spectrum Scale with IBM QRadar - Internal Threat Detection (5 mins Demo) https://www.youtube.com/watch?v=Zyw84dvoFR8&t=1s IBM Spectrum Scale Information Lifecycle Management Policies - Practical guide https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102642 Example: https://github.com/nhaustein/spectrum-scale-policy-scripts IBM Spectrum Scale configuration for sudo based administration on defined set of administrative nodes., https://developer.ibm.com/storage/2020/07/27/ibm-spectrum-scale-configuration-for-sudo-based-administration-on-defined-set-of-administrative-nodes/ IBM Spectrum Scale Erasure Code Edition in Stretched Cluster https://developer.ibm.com/storage/2020/07/10/ibm-spectrum-scale-erasure-code-edition-in-streched-cluster/ IBM Spectrum Scale installation toolkit ? extended FQDN enhancement over releases ? 5.0.5.0 https://developer.ibm.com/storage/2020/06/12/ibm-spectrum-scale-installation-toolkit-extended-fqdn-enhancement-over-releases-5-0-5-0/ IBM Spectrum Scale Security Posture with Kibana for Visualization https://developer.ibm.com/storage/2020/05/22/ibm-spectrum-scale-security-posture-with-kibana-for-visualization/ How to Visualize IBM Spectrum Scale Security Posture on Canvas https://developer.ibm.com/storage/2020/05/22/how-to-visualize-ibm-spectrum-scale-security-posture-on-canvas/ How to add Linux machine as Active Directory client to access IBM Spectrum Scale?? 
https://developer.ibm.com/storage/2020/04/29/how-to-add-linux-machine-as-active-directory-client-to-access-ibm-spectrum-scale/ Enabling Kerberos Authentication in IBM Spectrum Scale HDFS Transparency without Ambari https://developer.ibm.com/storage/2020/04/17/enabling-kerberos-authentication-in-ibm-spectrum-scale-hdfs-transparency-without-ambari/ Configuring Spectrum Scale File Systems for Reliability https://developer.ibm.com/storage/2020/04/08/configuring-spectrum-scale-file-systems-for-reliability/ Spectrum Scale Tuning for Large Linux Clusters https://developer.ibm.com/storage/2020/04/03/spectrum-scale-tuning-for-large-linux-clusters/ Spectrum Scale Tuning for Power Architecture https://developer.ibm.com/storage/2020/03/30/spectrum-scale-tuning-for-power-architecture/ Spectrum Scale operating system and network tuning https://developer.ibm.com/storage/2020/03/27/spectrum-scale-operating-system-and-network-tuning/ How to have granular and selective secure data at rest and in motion for workloads https://developer.ibm.com/storage/2020/03/24/how-to-have-granular-and-selective-secure-data-at-rest-and-in-motion-for-workloads/ Multiprotocol File Sharing on IBM Spectrum Scalewithout an AD or LDAP server https://www.ibm.com/downloads/cas/AN9BR9NJ Securing Data on Threat Detection Using IBM Spectrum Scale and IBM QRadar: An Enhanced Cyber Resiliency Solution http://www.redbooks.ibm.com/abstracts/redp5560.html?Open For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/17/2020 01:37 PM Subject: Re: Latest Technical Blogs/Papers on IBM Spectrum Scale (Q3 2019 - Q1 2020) Dear User Group Members, In continuation to this email thread, here are list of development blogs/Redpaper in the past 2 quarters . We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to this list. Redpaper HIPAA Compliance for Healthcare Workloads on IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5591.html?Open IBM Spectrum Scale CSI Driver For Container Persistent Storage http://www.redbooks.ibm.com/redpieces/abstracts/redp5589.html?Open Cyber Resiliency Solution for IBM Spectrum Scale , Blueprint http://www.redbooks.ibm.com/abstracts/redp5559.html?Open Enhanced Cyber Security with IBM Spectrum Scale and IBM QRadar http://www.redbooks.ibm.com/abstracts/redp5560.html?Open Monitoring and Managing the IBM Elastic Storage Server Using the GUI http://www.redbooks.ibm.com/abstracts/redp5471.html?Open IBM Hybrid Solution for Scalable Data Solutions using IBM Spectrum Scale http://www.redbooks.ibm.com/abstracts/redp5549.html?Open IBM Spectrum Discover: Metadata Management for Deep Insight of Unstructured Storage http://www.redbooks.ibm.com/abstracts/redp5550.html?Open Monitoring and Managing IBM Spectrum Scale Using the GUI http://www.redbooks.ibm.com/abstracts/redp5458.html?Open IBM Reference Architecture for High Performance Data and AI in Healthcare and Life Sciences, http://www.redbooks.ibm.com/abstracts/redp5481.html?Open Blogs: Why Storage and HIPAA Compliance for AI & Analytics Workloads for Healthcare https://developer.ibm.com/storage/2020/03/17/why-storage-and-hipaa-compliance-for-ai-analytics-workloads-for-healthcare/ Innovation via Integration ? 
Proactively Securing Your Unstructured Data from Cyber Threats & Attacks --> This was done based on your inputs (as a part of Security Survey) last year on need for Spectrum Scale integrayion with IDS a https://developer.ibm.com/storage/2020/02/24/innovation-via-integration-proactively-securing-your-unstructured-data-from-cyber-threats-attacks/ IBM Spectrum Scale CES HDFS Transparency support https://developer.ibm.com/storage/2020/02/03/ces-hdfs-transparency-support/ How to set up a remote cluster with IBM Spectrum Scale ? steps, limitations and troubleshooting https://developer.ibm.com/storage/2020/01/27/how-to-set-up-a-remote-cluster-with-ibm-spectrum-scale-steps-limitations-and-troubleshooting/ How to use IBM Spectrum Scale with CSI Operator 1.0 on Openshift 4.2 ? sample usage scenario with Tensorflow deployment https://developer.ibm.com/storage/2020/01/20/how-to-use-ibm-spectrum-scale-with-csi-operator-1-0-on-openshift-4-2-sample-usage-scenario-with-tensorflow-deployment/ Achieving WORM like functionality from NFS/SMB clients for data on Spectrum Scale https://developer.ibm.com/storage/2020/01/10/achieving-worm-like-functionality-from-nfs-smb-clients-for-data-on-spectrum-scale/ IBM Spectrum Scale CSI driver video blogs, https://developer.ibm.com/storage/2019/12/26/ibm-spectrum-scale-csi-driver-video-blogs/ IBM Spectrum Scale CSI Driver v1.0.0 released https://developer.ibm.com/storage/2019/12/10/ibm-spectrum-scale-csi-driver-v1-0-0-released/ Now configure IBM? Spectrum Scale with Overlapping UNIXMAP ranges https://developer.ibm.com/storage/2019/11/12/now-configure-ibm-spectrum-scale-with-overlapping-unixmap-ranges/ ?mmadquery?, a Powerful tool helps check AD settings from Spectrum Scale https://developer.ibm.com/storage/2019/11/11/mmadquery-a-powerful-tool-helps-check-ad-settings-from-spectrum-scale/ Spectrum Scale Data Security Modes, https://developer.ibm.com/storage/2019/10/31/spectrum-scale-data-security-modes/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.4 ? https://developer.ibm.com/storage/2019/10/25/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-4/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.4.0 https://developer.ibm.com/storage/2019/10/18/ibm-spectrum-scale-installation-toolkit-enhancements-over-releases-5-0-4-0/ IBM Spectrum Scale CSI driver beta on GitHub, https://developer.ibm.com/storage/2019/09/26/ibm-spectrum-scale-csi-driver-on-github/ Help Article: Care to be taken when configuring AD with RFC2307 https://developer.ibm.com/storage/2019/09/18/help-article-care-to-be-taken-when-configuring-ad-with-rfc2307/ IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration https://developer.ibm.com/storage/2019/09/10/ibm-spectrum-scale-erasure-code-edition-ece-installation-demonstration/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 09/03/2019 10:58 AM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q2 2019) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q2 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper : IBM Power Systems Enterprise AI Solutions (W/ SPECTRUM SCALE) http://www.redbooks.ibm.com/redpieces/abstracts/redp5556.html?Open IBM Spectrum Scale Erasure Code Edition (ECE): Installation Demonstration https://www.youtube.com/watch?v=6If50EvgP-U Blogs: Using IBM Spectrum Scale as platform storage for running containerized Hadoop/Spark workloads https://developer.ibm.com/storage/2019/08/27/using-ibm-spectrum-scale-as-platform-storage-for-running-containerized-hadoop-spark-workloads/ Useful Tools for Spectrum Scale CES NFS https://developer.ibm.com/storage/2019/07/22/useful-tools-for-spectrum-scale-ces-nfs/ How to ensure NFS uses strong encryption algorithms for secure data in motion ? https://developer.ibm.com/storage/2019/07/19/how-to-ensure-nfs-uses-strong-encryption-algorithms-for-secure-data-in-motion/ Introducing IBM Spectrum Scale Erasure Code Edition https://developer.ibm.com/storage/2019/07/07/introducing-ibm-spectrum-scale-erasure-code-edition/ Spectrum Scale: Which Filesystem Encryption Algo to Consider ? https://developer.ibm.com/storage/2019/07/01/spectrum-scale-which-filesystem-encryption-algo-to-consider/ IBM Spectrum Scale HDFS Transparency Apache Hadoop 3.1.x Support https://developer.ibm.com/storage/2019/06/24/ibm-spectrum-scale-hdfs-transparency-apache-hadoop-3-0-x-support/ Enhanced features in Elastic Storage Server (ESS) 5.3.4 https://developer.ibm.com/storage/2019/06/19/enhanced-features-in-elastic-storage-server-ess-5-3-4/ Upgrading IBM Spectrum Scale Erasure Code Edition using installation toolkit https://developer.ibm.com/storage/2019/06/09/upgrading-ibm-spectrum-scale-erasure-code-edition-using-installation-toolkit/ Upgrading IBM Spectrum Scale sync replication / stretch cluster setup in PureApp https://developer.ibm.com/storage/2019/06/06/upgrading-ibm-spectrum-scale-sync-replication-stretch-cluster-setup/ GPFS config remote access with multiple network definitions https://developer.ibm.com/storage/2019/05/30/gpfs-config-remote-access-with-multiple-network-definitions/ IBM Spectrum Scale Erasure Code Edition Fault Tolerance https://developer.ibm.com/storage/2019/05/30/ibm-spectrum-scale-erasure-code-edition-fault-tolerance/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.3 ? 
https://developer.ibm.com/storage/2019/05/02/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-3/ Understanding and Solving WBC_ERR_DOMAIN_NOT_FOUND error with Spectrum?Scale https://crk10.wordpress.com/2019/07/21/solving-the-wbc-err-domain-not-found-nt-status-none-mapped-glitch-in-ibm-spectrum-scale/ Understanding and Solving NT_STATUS_INVALID_SID issue for SMB access with Spectrum?Scale https://crk10.wordpress.com/2019/07/24/solving-nt_status_invalid_sid-for-smb-share-access-in-ibm-spectrum-scale/ mmadquery primer (apparatus to query Active Directory from IBM Spectrum?Scale) https://crk10.wordpress.com/2019/07/27/mmadquery-primer-apparatus-to-query-active-directory-from-ibm-spectrum-scale/ How to configure RHEL host as Active Directory Client using?SSSD https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-active-directory-client-using-sssd/ How to configure RHEL host as LDAP client using?nslcd https://crk10.wordpress.com/2019/07/28/configure-rhel-machine-as-ldap-client-using-nslcd/ Solving NFSv4 AUTH_SYS nobody ownership?issue https://crk10.wordpress.com/2019/07/29/nfsv4-auth_sys-nobody-ownership-and-idmapd/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list of all blogs and collaterals. https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 04/29/2019 12:12 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q1 2019) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Spectrum Scale 5.0.3 https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/ IBM Spectrum Scale HDFS Transparency Ranger Support https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/ Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally, http://www.redbooks.ibm.com/abstracts/redp5527.html?Open Spectrum Scale user group in Singapore, 2019 https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/ 7 traits to use Spectrum Scale to run container workload https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/ Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring Framework https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/ Migrating data from native HDFS to IBM Spectrum Scale based shared storage https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/ Bulk File Creation useful for Test on Filesystems https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 01/14/2019 06:24 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing your business data to support regulatory requirements http://www.redbooks.ibm.com/abstracts/redp5525.html?Open IBM Spectrum Scale Memory Usage https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2 Spectrum Scale and Containers https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/ IBM Elastic Storage Server Performance Graphical Visualization with Grafana https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/ Hadoop Performance for disaggregated compute and storage configurations based on IBM Spectrum Scale Storage https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/ EMS HA in ESS LE (Little Endian) environment https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/ What?s new in ESS 5.3.2 https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/ Administer your Spectrum Scale cluster easily https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/ Disaster Recovery using Spectrum Scale?s Active File Management https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/ Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS) https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/ Whats new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1 https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 10/03/2018 08:48 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. How NFS exports became more dynamic with Spectrum Scale 5.0.2 https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/ HPC storage on AWS (IBM Spectrum Scale) https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/ Upgrade with Excluding the node(s) using Install-toolkit https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/ Offline upgrade using Install-toolkit https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/ What?s New in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/ Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails. 
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.2.0 https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/ Announcing HDP 3.0 support with IBM Spectrum Scale https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/ IBM Spectrum Scale Tuning Overview for Hadoop Workload https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/ Making the Most of Multicloud Storage https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/ Disaster Recovery for Transparent Cloud Tiering using SOBAR https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/ Your Optimal Choice of AI Storage for Today and Tomorrow https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/ Analyze IBM Spectrum Scale File Access Audit with ELK Stack https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/ Mellanox SX1710 40G switch MLAG configuration for IBM ESS https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? SMB and NFS Access issues https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/ Access Control in IBM Spectrum Scale Object https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/ IBM Spectrum Scale HDFS Transparency Docker support https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? Log Collection https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/ Redpapers IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases http://www.redbooks.ibm.com/abstracts/redp5507.html?Open Certifications Assessment of the immutability function of IBM Spectrum Scale Version 5.0 in accordance to US SEC17a-4f, EU GDPR Article 21 Section 1, German and Swiss laws and regulations in collaboration with KPMG. Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5 Full assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 07/03/2018 12:13 AM Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018) Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q2 2018). We now have over 100+ developer blogs. As discussed in User Groups, passing it along: IBM Spectrum Scale 5.0.1 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ IBM Spectrum Scale ILM Policies https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/ IBM Spectrum Scale 5.0.1 ? 
Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ Management GUI enhancements in IBM Spectrum Scale release 5.0.1 https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/ Managing IBM Spectrum Scale services through GUI https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/ Use AWS CLI with IBM Spectrum Scale? object storage https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/ Hadoop Storage Tiering with IBM Spectrum Scale https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/ How many Files on my Filesystem? https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/ Recording Spectrum Scale Object Stats for Potential Billing like Purpose using Elasticsearch https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/ New features in IBM Elastic Storage Server (ESS) Version 5.3 https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/ Using IBM Spectrum Scale for storage in IBM Cloud Private (Missed to send earlier) https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19 Redpapers Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html, Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent Cloud Tiering http://www.redbooks.ibm.com/abstracts/redp5411.html?Open SAP HANA and ESS: A Winning Combination (Update) http://www.redbooks.ibm.com/abstracts/redp5436.html?Open Others IBM Spectrum Scale Software Version Recommendation Preventive Service Planning (Updated) http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703, IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in HCLS https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN& For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20 (GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/27/2018 05:23 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q1 2018). As discussed in User Groups, passing it along: GDPR Compliance and Unstructured Data Storage https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/ IBM Spectrum Scale for Linux on IBM Z ? Release 5.0 features and highlights https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/ Management GUI enhancements in IBM Spectrum Scale release 5.0.0 https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/ IBM Spectrum Scale 5.0.0 ? What?s new in NFS? 
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/ Benefits and implementation of Spectrum Scale sudo wrappers https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/ IBM Spectrum Scale: Big Data and Analytics Solution Brief https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/ Variant Sub-blocks in Spectrum Scale 5.0 https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/ Compression support in Spectrum Scale 5.0.0 https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale On AWS Cloud : This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps the users who require highly available access to a shared name space across multiple instances with good -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: