From lgayne at us.ibm.com Mon Apr 1 15:04:49 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Mon, 1 Apr 2019 09:04:49 -0500 Subject: [gpfsug-discuss] A net new cluster In-Reply-To: References: <7F92D137-07D4-4136-9182-9C5E165704FE@nygenome.org> Message-ID: Yes, native GPFS access can be used by AFM, but only for shorter distances (tens of miles, for example). For intercontinental or cross-US distances the latency would be too high for that protocol, so NFS would be recommended. Lyle From: "Marc A Kaplan" To: gpfsug main discussion list Date: 03/29/2019 03:05 PM Subject: Re: [gpfsug-discuss] A net new cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org I don't know the particulars of the case in question, nor much about ESS rules... But for a vanilla Spectrum Scale cluster: 1) There is nothing wrong or ill-advised about upgrading software and then creating a new version 5.x file system... keeping any older file systems in place. 2) I thought AFM was improved years ago to support GPFS native access -- it need not go through the NFS stack...? Whereas you wrote: ... nor is it advisable to try to create a new pool or filesystem in the same cluster and then migrate (partially because migrating between filesystems within a cluster with AFM would require going through the NFS stack, AFAIK) ... _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From vpuvvada at in.ibm.com Tue Apr 2 11:57:51 2019 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Tue, 2 Apr 2019 16:27:51 +0530 Subject: [gpfsug-discuss] A net new cluster In-Reply-To: References: <7F92D137-07D4-4136-9182-9C5E165704FE@nygenome.org> Message-ID: AFM supports data migration between two different file systems in the same cluster using the NSD protocol. AFM-based migration over the NSD protocol is usually performed by remote mounting the old filesystem (if it is not in the same cluster) at the new cluster's gateway node(s); only the gateway node(s) need to mount the remote filesystem. Some recent improvements to AFM prefetch: 1. Directory-level prefetch: users are no longer required to provide list files. Directory prefetch automatically detects changed or new files and queues only those files for migration. Prefetch queuing starts immediately and does not wait for the full list file/directory processing, unlike in earlier releases (pre 5.0.2). 2. Multiple prefetches for the same fileset from different gateway nodes (available in 5.0.3.x and 5.0.2.x). Users can select any gateway node to run the prefetch for a fileset, or split the list of files or directories and run them from multiple gateway nodes simultaneously. This gives good migration performance and better utilization of network bandwidth, since multiple streams are used for the transfer. 3. Better prefetch queueing statistics than previous releases, including the total number of files, how many are queued, the total amount of data, and so on.
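As a rough illustration of how these options can be driven from the command line (the file system, fileset, and path names below are made up, and exact option names may differ between releases, so check the mmafmctl documentation for your level):

# Directory-level prefetch (5.0.2 and later): AFM detects changed/new files itself, no list file needed
mmafmctl fs1 prefetch -j migrate_fileset --directory /gpfs/fs1/migrate_fileset/projectA

# Splitting the work across gateway nodes: run separate prefetches for
# different list files (or directories), one from each gateway node
mmafmctl fs1 prefetch -j migrate_fileset --list-file /tmp/batch1.list    # run on gateway node 1
mmafmctl fs1 prefetch -j migrate_fileset --list-file /tmp/batch2.list    # run on gateway node 2

# Fileset state, queue length, and related prefetch statistics
mmafmctl fs1 getstate -j migrate_fileset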
~Venkat (vpuvvada at in.ibm.com) From: "Lyle Gayne" To: gpfsug main discussion list Date: 04/01/2019 07:35 PM Subject: Re: [gpfsug-discuss] A net new cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org Yes, native GPFS access can be used by AFM, but only for shorter distances (10s of miles, e.g.). For intercontinental or cross-US distances, the latency would be too high for that protocol so NFS would be recommended. Lyle "Marc A Kaplan" ---03/29/2019 03:05:53 PM---I don't know the particulars of the case in question, nor much about ESS rules... From: "Marc A Kaplan" To: gpfsug main discussion list Date: 03/29/2019 03:05 PM Subject: Re: [gpfsug-discuss] A net new cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org I don't know the particulars of the case in question, nor much about ESS rules... But for a vanilla Spectrum Scale cluster -. 1) There is nothing wrong or ill-advised about upgrading software and then creating a new version 5.x file system... keeping any older file systems in place. 2) I thought AFM was improved years ago to support GPFS native access -- need not go through NFS stack...? Whereas your wrote: ... nor is it advisable to try to create a new pool or filesystem in same cluster and then migrate (partially because migrating between filesystems within a cluster with afm would require going through nfs stack afaik) ... _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=92LOlNh2yLzrrGTDA7HnfF8LFr55zGxghLZtvZcZD7A&m=rdsJfQ2D_ev0wHZkn4J-X3gFEMwJzwKuuP0EVdOqShA&s=4Du5XtaI8UBQwYJ-I772xbA5kidqKoJC-XasFXwEdsM&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Tue Apr 2 13:29:20 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 2 Apr 2019 12:29:20 +0000 Subject: [gpfsug-discuss] Reminder - US Spring User Group Meeting - April 16-17th, NCAR Boulder Co Message-ID: <35009945-A3C7-46CB-943A-11C9C1749ABD@nuance.com> 2 weeks until the US Spring user group meeting! We have an excellent facility and we?ll be able to offer breakfast, lunch, and evening social event on site. All at no charge to attendees. Register here: https://www.eventbrite.com/e/spectrum-scale-gpfs-user-group-us-spring-2019-meeting-tickets-57035376346 (directions, locations, and suggested hotels) Topics will include: - User Talks - Breakout sessions - Spectrum Scale: The past, the present, the future - Accelerating AI workloads with IBM Spectrum Scale - AI ecosystem and solutions with IBM Spectrum Scale - Spectrum Scale Update - ESS Update - Support Update - Container & Cloud Update - AFM Update - High Performance Tier - Memory Consumption in Spectrum Scale - Spectrum Scale Use Cases - New storage options for Spectrum Scale - Overview - Introduction to Spectrum Scale (For Beginners) Bob Oesterlin/Kristy Kallback-Rose -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chair at spectrumscale.org Wed Apr 3 15:43:30 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Wed, 03 Apr 2019 15:43:30 +0100 Subject: [gpfsug-discuss] Reminder Worldwide/UK User group Message-ID: <805C18AE-A2F9-477B-A989-B37D52924849@spectrumscale.org> I've just published the draft agenda for the worldwide/UK user group on 8th and 9th May in London. https://www.spectrumscaleug.org/event/uk-user-group-meeting/ As AI is clearly a hot topic, we have a number of slots dedicated to Spectrum Scale with AI this year. Registration is available from the link above. We're still filling in some slots on the agenda, and if you are a customer and would like to do a site update/talk, please let me know. We're also thinking about having a lightning-talks slot where people can do 3-5 minutes on their use of Scale and their favourite/worst feature... and if I don't get any volunteers, we'll be picking people from the audience! I'm also pleased to announce that Mellanox Technologies and NVIDIA have joined our other sponsors OCF, E8 Storage, Lenovo, and DDN Storage. Simon From prasad.surampudi at theatsgroup.com Wed Apr 3 17:12:33 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 3 Apr 2019 16:12:33 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to the existing Scale cluster without changing the existing cluster name? Or do we need to create a new Scale cluster with the ESS and import the existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group From S.J.Thompson at bham.ac.uk Wed Apr 3 17:17:44 2019 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Wed, 3 Apr 2019 16:17:44 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: We have DSS-G (the Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN), but we don't have them in the same file system. In theory it should work as a different pool; note though that you can't have GNR-based vdisks (ESS/DSS-G) in the same storage pool as your existing non-GNR NSDs. And if you want to move to a new block size or v5 variable subblocks then you are going to have to create a new filesystem and copy the data. So it depends what your endgame is, really. We just did such a process, and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr.
Systems Engineer The ATS Group From jfosburg at mdanderson.org Wed Apr 3 17:20:48 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 16:20:48 +0000 Subject: [gpfsug-discuss] [EXT] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: <88ad5b6a15c4444596d69503c695a0d1@mdanderson.org> We've added ESSes to existing non-ESS clusters a couple of times. In this case, we had to create a pool for the ESSes so we could send new writes to them and allow us to drain the old non-ESS blocks so we could remove them. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Prasad Surampudi Sent: Wednesday, April 3, 2019 11:12:33 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Adding ESS to existing Scale Cluster WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Sanchez at deshaw.com Wed Apr 3 17:41:32 2019 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 3 Apr 2019 16:41:32 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. 
Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Robert.Oesterlin at nuance.com Wed Apr 3 18:25:54 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 3 Apr 2019 17:25:54 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Apr 3 19:11:45 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 3 Apr 2019 20:11:45 +0200 Subject: [gpfsug-discuss] New ESS install - Network adapter down level In-Reply-To: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> References: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> Message-ID: Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. 
> > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen.buchanan at us.ibm.com Wed Apr 3 19:54:00 2019 From: stephen.buchanan at us.ibm.com (Stephen R Buchanan) Date: Wed, 3 Apr 2019 18:54:00 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Apr 3 20:01:11 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 3 Apr 2019 19:01:11 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Thanks all. I just missed this. Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... URL: From prasad.surampudi at theatsgroup.com Wed Apr 3 20:34:59 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 3 Apr 2019 19:34:59 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: Message-ID: Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. 
New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. 
Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 3 21:22:33 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 20:22:33 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: , Message-ID: <5fd0776a85e94948b71770f8574e54ae@mdanderson.org> We had Lab Services do our installs and integrations. Learning curve for them, and we uncovered some deficiencies in the TDA, but it did work. 
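For anyone planning the pool-based approach discussed earlier in this thread (new writes landing on the ESS pool while the old pool is drained), a minimal sketch of the policy side might look like the lines below; the pool, file system, and file names are hypothetical and would need to match your own configuration:

cat > /tmp/ess_migrate.pol <<'EOF'
/* send new writes to the ESS data pool */
RULE 'place_new' SET POOL 'ess_data'
/* move existing file data off the old V7000 pool */
RULE 'drain_v7000' MIGRATE FROM POOL 'v7000_data' TO POOL 'ess_data'
EOF
mmchpolicy fs1 /tmp/ess_migrate.pol                 # install the placement rule for new files
mmapplypolicy fs1 -P /tmp/ess_migrate.pol -I yes    # one-off drain of the old pool
# once the old pool is empty, its NSDs can be removed with mmdeldisk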
-- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Prasad Surampudi Sent: Wednesday, April 3, 2019 2:34:59 PM To: gpfsug-discuss-request at spectrumscale.org; gpfsug-discuss at spectrumscale.org Subject: [EXT] Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. 
(In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? 
https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf On Wed, Apr 3, 2019 at 19:26, Oesterlin, Robert < Robert.Oesterlin at nuance.com> wrote: > Any insight on what command I need to fix this? It's the only error I have > when running gssinstallcheck. > > [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > 507-269-0413 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* From janfrode at tanso.net Wed Apr 3 21:34:37 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 3 Apr 2019 22:34:37 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: Message-ID: It doesn't seem to be documented anywhere, but we can add ESS to non-ESS clusters. It's mostly just a matter of following the QDG, skipping the gssgencluster step. Just beware that it will take down your current cluster when doing the first 'gssgenclusterrgs': this is because it changes quite a few config settings, and it recently caught me by surprise :-/ Involve IBM Lab Services, and we should be able to help :-) -jf On Wed, Apr 3, 2019 at 21:35, Prasad Surampudi < prasad.surampudi at theatsgroup.com> wrote: > Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. > We also have protocol nodes for SMB access to SAS applications/users. Now, > we are planning to gradually move our cluster from V7000/Flash to ESS and > retire V7Ks. So, when we grow our filesystem, we are thinking of adding an > ESS as an additional block of storage instead of adding another V7000. > Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in > the same filesystem, but can't create a new filesystem as we want to have > single name space for our SMB Shares.
Also, we'd like keep all our existing > compute, protocol, and NSD servers all in the same scale cluster along with > ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option > of adding ESS nodes to existing cluster like mmaddnode or similar > commands. So, just wondering how we could add ESS IO nodes to existing > cluster like any other node..is running mmaddnode command on ESS possible? > Also, looks like it's against the IBMs recommendation of separating the > Storage, Compute and Protocol nodes into their own scale clusters and use > cross-cluster filesystem mounts..any comments/suggestions? > > Prasad Surampudi > > The ATS Group > > > > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of > gpfsug-discuss-request at spectrumscale.org < > gpfsug-discuss-request at spectrumscale.org> > *Sent:* Wednesday, April 3, 2019 2:54 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* gpfsug-discuss Digest, Vol 87, Issue 4 > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) > 2. New ESS install - Network adapter down level (Oesterlin, Robert) > 3. Re: New ESS install - Network adapter down level > (Jan-Frode Myklebust) > 4. Re: New ESS install - Network adapter down level > (Stephen R Buchanan) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 3 Apr 2019 16:41:32 +0000 > From: "Sanchez, Paul" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Message-ID: > Content-Type: text/plain; charset="us-ascii" > > > note though you can't have GNR based vdisks (ESS/DSS-G) in the same > storage pool. > > At one time there was definitely a warning from IBM in the docs about not > mixing big-endian and little-endian GNR in the same cluster/filesystem. > But at least since Nov 2017, IBM has published videos showing clusters > containing both. (In my opinion, they had to support this because they > changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for Scale > itself, I can confirm that filesystems can contain NSDs which are provided > by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN > storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) 
> > -Paul > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Simon Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB > storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't have > GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks then > you are going to have to have a new filesystem and copy data. So it depends > what your endgame is really. We just did such a process and one of my > colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of > prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] > Sent: 03 April 2019 17:12 > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum Scale > cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > ------------------------------ > > Message: 2 > Date: Wed, 3 Apr 2019 17:25:54 +0000 > From: "Oesterlin, Robert" > To: gpfsug main discussion list > Subject: [gpfsug-discuss] New ESS install - Network adapter down level > Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> > Content-Type: text/plain; charset="utf-8" > > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected > 12.23.8010, net adapter count: 4 > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > 507-269-0413 > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/42850e8f/attachment-0001.html > > > > ------------------------------ > > Message: 3 > Date: Wed, 3 Apr 2019 20:11:45 +0200 > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down > level > Message-ID: > FrPJyB-36ZJX7w at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Have you tried: > > updatenode nodename -P gss_ofed > > But, is this the known issue listed in the qdg? > > > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > > > -jf > > ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < > Robert.Oesterlin at nuance.com>: > > > Any insight on what command I need to fix this? It?s the only error I > have > > when running gssinstallcheck. 
> > > > > > > > [ERROR] Network adapter > > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > > > > > > > Bob Oesterlin > > > > Sr Principal Storage Engineer, Nuance > > > > 507-269-0413 > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/fad4ff57/attachment-0001.html > > > > ------------------------------ > > Message: 4 > Date: Wed, 3 Apr 2019 18:54:00 +0000 > From: "Stephen R Buchanan" > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down > level > Message-ID: > < > OFBD2A098D.0085093E-ON002583D1.0066D1E2-002583D1.0067D25D at notes.na.collabserv.com > > > > Content-Type: text/plain; charset="us-ascii" > > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/09e229d1/attachment.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 87, Issue 4 > ********************************************* > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 3 21:38:45 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 20:38:45 +0000 Subject: [gpfsug-discuss] [EXT] Re: gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: , Message-ID: <416a73c67b594e89b734e1f2229c159c@mdanderson.org> Adding ESSes did not bring our clusters down. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Jan-Frode Myklebust Sent: Wednesday, April 3, 2019 3:34:37 PM To: gpfsug main discussion list Cc: gpfsug-discuss-request at spectrumscale.org Subject: [EXT] Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. It doesn?t seem to be documented anywhere, but we can add ESS to nok-ESS clusters. It?s mostly just following the QDG, skipping the gssgencluster step. Just beware that it will take down your current cluster when doing the first ?gssgenclusterrgs?. This is to change quite a few config settings ? it recently caugth me by surprise :-/ Involve IBM lab services, and we should be able to help :-) -jf ons. 3. apr. 2019 kl. 21:35 skrev Prasad Surampudi >: Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. 
So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org > on behalf of gpfsug-discuss-request at spectrumscale.org > Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: > Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) 
-Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" > To: gpfsug main discussion list > Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: > Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" > To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: > Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 4 08:48:18 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Thu, 04 Apr 2019 08:48:18 +0100 Subject: [gpfsug-discuss] Slack workspace Message-ID: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> We?ve been pondering for a while (quite a long while actually!) adding a slack workspace for the user group. That?s not to say I want to divert traffic from the mailing list, but maybe it will be useful for some people. Please don?t feel compelled to join the slack workspace, but if you want to join, then there?s a link on: https://www.spectrumscaleug.org/join/ to get an invite. I know there are a lot of IBM people on the mailing list, and they often reply off-list to member posts (which I appreciate!), so please still use the mailing list for questions, but maybe there are some discussions that will work better on slack ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Lehmann at csiro.au Thu Apr 4 08:56:07 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Thu, 4 Apr 2019 07:56:07 +0000 Subject: [gpfsug-discuss] Slack workspace In-Reply-To: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> References: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> Message-ID: It?s worth a shot. We have one for Australian HPC sysadmins that seems quite popular (with its own GPFS channel.) There is also a SigHPC slack for a more international flavour that came a bit later. People tend to use it for p2p comms when at conferences as well. From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (Spectrum Scale User Group Chair) Sent: Thursday, April 4, 2019 5:48 PM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Slack workspace We?ve been pondering for a while (quite a long while actually!) adding a slack workspace for the user group. 
That?s not to say I want to divert traffic from the mailing list, but maybe it will be useful for some people. Please don?t feel compelled to join the slack workspace, but if you want to join, then there?s a link on: https://www.spectrumscaleug.org/join/ to get an invite. I know there are a lot of IBM people on the mailing list, and they often reply off-list to member posts (which I appreciate!), so please still use the mailing list for questions, but maybe there are some discussions that will work better on slack ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 4 14:48:35 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 4 Apr 2019 13:48:35 +0000 Subject: [gpfsug-discuss] Agenda - Spectrum Scale UG meeting, April 16-17th, NCAR, Boulder Message-ID: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> Registration is only open for a few more days! Register here: https://www.eventbrite.com/e/spectrum-scale-gpfs-user-group-us-spring-2019-meeting-tickets-57035376346 (directions, locations, and suggested hotels) Breakfast, Lunch included (free of charge) and Evening social event at NCAR! Here is the ?final? agenda: Tuesday, April 16th 8:30 9:00 Registration and Networking 9:00 9:20 Welcome Kristy Kallback-Rose / Bob Oesterlin (Chair) / Ted Hoover (IBM) 9:20 9:45 Spectrum Scale: The past, the present, the future Wayne Sawdon (IBM) 9:45 10:10 Accelerating AI workloads with IBM Spectrum Scale Ted Hoover (IBM) 10:10 10:30 Nvidia: Doing large scale AI faster ? scale innovation with multi-node-multi-GPU computing and a scaling data pipeline Jacci Cenci (Nvidia) 10:30 11:00 Coffee and Networking n/a 11:00 11:25 AI ecosystem and solutions with IBM Spectrum Scale Piyush Chaudhary (IBM) 11:25 11:45 Customer Talk / Partner Talk TBD 11:45 12:00 Meet the devs Ulf Troppens (IBM) 12:00 13:00 Lunch and Networking 13:00 13:30 Spectrum Scale Update Puneet Chauhdary (IBM) 13:30 13:45 ESS Update Puneet Chauhdary (IBM) 13:45 14:00 Support Update Bob Simon (IBM) 14:00 14:30 Memory Consumption in Spectrum Scale Tomer Perry (IBM) 14:30 15:00 Coffee and Networking n/a 15:00 15:20 New HPC Usage Model @ J?lich: Multi PB User Data Migration Martin Lischewski (Forschungszentrum J?lich) 15:20 15:40 Open discussion: large scale data migration All 15:40 16:00 Container & Cloud Update Ted Hoover (IBM) 16:00 16:20 Towards Proactive Service with Call Home Ulf Troppens (IBM) 16:20 16:30 Break 16:30 17:00 Advanced metadata management with Spectrum Discover Deepavali Bhagwat (IBM) 17:00 17:20 High Performance Tier Tomer Perry (IBM) 17:20 18:00 Meet the Devs - Ask us Anything All 18:00 20:00 Get Together n/a 13:00 - 17:15 Breakout Session: Getting Started with Spectrum Scale Wednesday, April 17th 8:30 9:00 Coffee und Networking n/a 8:30 9:00 Spectrum Scale Licensing Carl Zetie (IBM) 9:00 10:00 "Spectrum Scale Use Cases (Beginner) Spectrum Scale Protocols (Overview) (Beginner)" Spectrum Scale backup and SOBAR Chris Maestas (IBM) Getting started with AFM (Advanced) Venkat Puvva (IBM) 10:00 11:00 How to design a Spectrum Scale environment? 
(Beginner) Tomer Perry (IBM) Spectrum Scale on Google Cloud Jeff Ceason (IBM) Spectrum Scale Trial VM Spectrum Scale Vagrant" "Chris Maestas (IBM Ulf Troppens (IBM)" 11:00 12:00 "Spectrum Scale GUI (Beginner) Spectrum Scale REST API (Beginner)" "Chris Maestas (IBM) Spectrum Scale Network flow Tomer Perry (IBM) Spectrum Scale Watch Folder (Advanced) Spectrum Scale File System Audit Logging "Deepavali Bhagwat (IBM) 12:00 13:00 Lunch and Networking n/a 13:00 13:20 Sponsor Talk: Excelero TBD 13:20 13:40 AWE site update Paul Tomlinson (AWE) 13:40 14:00 Sponsor Talk: Lenovo Ray Padden (Lenovo) 14:00 14:30 Coffee and Networking n/a 14:30 15:00 TCT Update Rob Basham 15:00 15:30 AFM Update Venkat Puvva (IBM) 15:30 15:50 New Storage Options for Spectrum Scale Carl Zetie (IBM) 15:50 16:00 Wrap-up Kristy Kallback-Rose / Bob Oesterlin Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Sat Apr 6 15:11:53 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sat, 6 Apr 2019 14:11:53 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: Message-ID: There is a non-technical issue you may need to consider. IBM has set licensing rules about mixing in the same Spectrum Scale cluster both ESS from IBM and 3rd party storage that is licensed under ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). I am sure Carl Zetie or other IBMers who watch this list can explain the exact restrictions. Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum NAS and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 3 Apr 2019, at 19:47, Sanchez, Paul wrote: >> note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) > > -Paul > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. 
> > And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] > Sent: 03 April 2019 17:12 > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=qihHkHSqt2rVgrBVDaeGaUrYw-BMlNQ6AQ1EU7EtYr0&s=EANfMzGKOlziRRZj0X9jkK-7HsqY_MkWwZgA5OXOiCo&e= > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=qihHkHSqt2rVgrBVDaeGaUrYw-BMlNQ6AQ1EU7EtYr0&s=EANfMzGKOlziRRZj0X9jkK-7HsqY_MkWwZgA5OXOiCo&e= > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.zacek77 at gmail.com Sat Apr 6 22:50:53 2019 From: m.zacek77 at gmail.com (Michal Zacek) Date: Sat, 6 Apr 2019 23:50:53 +0200 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL Message-ID: Hello, we decided to convert NFS4 acl to POSIX (we need share same data between SMB, NFS and GPFS clients), so I created script to convert NFS4 to posix ACL. It is very simple, first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that conversion to posix acl has taken more then 2TB of metadata space.There is about one hundred million files at GPFS filesystem. Is this expected behavior? 
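A minimal sketch of the conversion described above, assuming a placeholder directory and borrowing the two group names from the ACL example further down (this illustrates the chmod-then-setfacl approach only; it is not the actual script used):

    # Placeholder path; group names taken from the ACL example below.
    DIR=/gpfs/fs1/lab96_data

    # Step 1 as described above: reset plain mode bits on the whole tree.
    chmod -R 770 "$DIR"

    # Step 2: add POSIX access entries for both groups everywhere...
    setfacl -R -m g:ag_cud_96_lab:rwx -m g:ag_cud_96_lab_ro:r-x "$DIR"

    # ...and default (inheritable) entries on directories only.
    find "$DIR" -type d -exec setfacl \
        -m d:g:ag_cud_96_lab:rwx -m d:g:ag_cud_96_lab_ro:r-x {} +

Note that default (d:) entries apply only to directories, so every directory ends up carrying both an access and a default ACL, and every one of the ~100 million inodes has its ACL rewritten; whether that alone should account for roughly 2 TB of metadata is exactly the question being asked here.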
Thanks, Michal Example of NFS4 acl: #NFSv4 ACL #owner:root #group:root special:owner@:rwx-:allow (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED group:ag_cud_96_lab:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED group:ag_cud_96_lab_ro:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED converted to posix acl: # owner: root # group: root user::rwx group::rwx mask::rwx other::--- default:user::rwx default:group::rwx default:mask::rwx default:other::--- group:ag_cud_96_lab:rwx default:group:ag_cud_96_lab:rwx group:ag_cud_96_lab_ro:r-x default:group:ag_cud_96_lab_ro:r-x -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.rupp at us.ibm.com Sun Apr 7 16:26:14 2019 From: richard.rupp at us.ibm.com (RICHARD RUPP) Date: Sun, 7 Apr 2019 11:26:14 -0400 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: This has been publically documented in the Spectrum Scale FAQ Q13.17, Q13.18 and Q13.19. Regards, Richard Rupp, Sales Specialist, Phone: 1-347-510-6746 From: "Daniel Kidger" To: "gpfsug main discussion list" Date: 04/06/2019 10:12 AM Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org There is a non-technical issue you may need to consider. IBM has set licensing rules about mixing in the same Spectrum Scale cluster both ESS from IBM and 3rd party storage that is licensed under ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). I am sure Carl Zetie or other IBMers who watch this list can explain the exact restrictions. Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum NAS and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 3 Apr 2019, at 19:47, Sanchez, Paul wrote: note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. 
We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org < gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [ gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=EXL-jEd1jmdzvOIhT87C7SIqmAS9uhVQ6J3kObct4OY&m=3KUx-vFPoAlAOV8zt_7RCV5o1kvr5LobB3JxXuR5-Rg&s=qsN98nblbvXfi2y1V40IAjyT_8DY3bwqk9pon-auNw4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kkr at lbl.gov Mon Apr 8 19:05:22 2019 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Mon, 8 Apr 2019 11:05:22 -0700 Subject: [gpfsug-discuss] Registration DEADLINE April 9 - Spectrum Scale UG meeting, April 16-17th, NCAR, Boulder In-Reply-To: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> References: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> Message-ID: <1963C600-822D-4755-8C6F-AF03B5E67162@lbl.gov> Do you like free food? OK, maybe your school days are long gone, but who doesn?t like free food? We need to give the catering folks a head count, so we will close registration tomorrow evening, April 9. So register now for the Boulder GPFS/Spectrum Scale User Group Event (link and agenda below). This is your chance to give IBM feedback and discuss GPFS with your fellow storage admins and IBMers. We?d love to hear your participation in the discussions. Best, Kristy > On Apr 4, 2019, at 6:48 AM, Oesterlin, Robert wrote: > > Registration is only open for a few more days! > > Register here: https://www.eventbrite.com/e/spectrum-scale-gpfs-user-group-us-spring-2019-meeting-tickets-57035376346 (directions, locations, and suggested hotels) > > Breakfast, Lunch included (free of charge) and Evening social event at NCAR! > > Here is the ?final? agenda: > > Tuesday, April 16th > 8:30 9:00 Registration and Networking > 9:00 9:20 Welcome Kristy Kallback-Rose / Bob Oesterlin (Chair) / Ted Hoover (IBM) > 9:20 9:45 Spectrum Scale: The past, the present, the future Wayne Sawdon (IBM) > 9:45 10:10 Accelerating AI workloads with IBM Spectrum Scale Ted Hoover (IBM) > 10:10 10:30 Nvidia: Doing large scale AI faster ? scale innovation with multi-node-multi-GPU computing and a scaling data pipeline Jacci Cenci (Nvidia) > 10:30 11:00 Coffee and Networking n/a > 11:00 11:25 AI ecosystem and solutions with IBM Spectrum Scale Piyush Chaudhary (IBM) > 11:25 11:45 Customer Talk / Partner Talk TBD > 11:45 12:00 Meet the devs Ulf Troppens (IBM) > 12:00 13:00 Lunch and Networking > 13:00 13:30 Spectrum Scale Update Puneet Chauhdary (IBM) > 13:30 13:45 ESS Update Puneet Chauhdary (IBM) > 13:45 14:00 Support Update Bob Simon (IBM) > 14:00 14:30 Memory Consumption in Spectrum Scale Tomer Perry (IBM) > 14:30 15:00 Coffee and Networking n/a > 15:00 15:20 New HPC Usage Model @ J?lich: Multi PB User Data Migration Martin Lischewski (Forschungszentrum J?lich) > 15:20 15:40 Open discussion: large scale data migration All > 15:40 16:00 Container & Cloud Update Ted Hoover (IBM) > 16:00 16:20 Towards Proactive Service with Call Home Ulf Troppens (IBM) > 16:20 16:30 Break > 16:30 17:00 Advanced metadata management with Spectrum Discover Deepavali Bhagwat (IBM) > 17:00 17:20 High Performance Tier Tomer Perry (IBM) > 17:20 18:00 Meet the Devs - Ask us Anything All > 18:00 20:00 Get Together n/a > > 13:00 - 17:15 Breakout Session: Getting Started with Spectrum Scale > > Wednesday, April 17th > 8:30 9:00 Coffee und Networking n/a > 8:30 9:00 Spectrum Scale Licensing Carl Zetie (IBM) > 9:00 10:00 "Spectrum Scale Use Cases (Beginner) > Spectrum Scale Protocols (Overview) (Beginner)" > Spectrum Scale backup and SOBAR Chris Maestas (IBM) > Getting started with AFM (Advanced) Venkat Puvva (IBM) > 10:00 11:00 How to design a Spectrum Scale environment? 
(Beginner) Tomer Perry (IBM) > Spectrum Scale on Google Cloud Jeff Ceason (IBM) > Spectrum Scale Trial VM > Spectrum Scale Vagrant" "Chris Maestas (IBM Ulf Troppens (IBM)" > 11:00 12:00 "Spectrum Scale GUI (Beginner) > Spectrum Scale REST API (Beginner)" "Chris Maestas (IBM) > Spectrum Scale Network flow Tomer Perry (IBM) > Spectrum Scale Watch Folder (Advanced) > Spectrum Scale File System Audit Logging "Deepavali Bhagwat (IBM) > 12:00 13:00 Lunch and Networking n/a > 13:00 13:20 Sponsor Talk: Excelero TBD > 13:20 13:40 AWE site update Paul Tomlinson (AWE) > 13:40 14:00 Sponsor Talk: Lenovo Ray Padden (Lenovo) > 14:00 14:30 Coffee and Networking n/a > 14:30 15:00 TCT Update Rob Basham > 15:00 15:30 AFM Update Venkat Puvva (IBM) > 15:30 15:50 New Storage Options for Spectrum Scale Carl Zetie (IBM) > 15:50 16:00 Wrap-up Kristy Kallback-Rose / Bob Oesterlin > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > 507-269-0413 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Apr 10 15:35:57 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 10 Apr 2019 14:35:57 +0000 Subject: [gpfsug-discuss] Follow-up: ESS File systems Message-ID: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> I?m trying to finalize my file system configuration for production. I?ll be moving 3-3.5B files from my legacy storage to ESS (about 1.8PB). The legacy file systems are block size 256k, 8k subblocks. Target ESS is a GL4, 8TB drives (2.2PB using 8+2p) For file systems configured on the ESS, the vdisk block size must equal the file system block size. Using 8+2p, the smallest block size is 512K. Looking at the overall file size histogram, a block size of 1MB might be a good compromise in efficiency and sub block size (32k subblock). With 4K inodes, somewhere around 60-70% of the current files end up in inodes. Of the files in the range 4k-32K, those are the ones that would potentially ?waste? some space because they are smaller than the sub block but too big for an inode. That?s roughly 10-15% of the files. This ends up being a compromise because of our inability to use the V5 file system format (clients still at CentOS 6/Scale 4.2.3). For metadata, the file systems are currently using about 15TB of space (replicated, across roughly 1.7PB usage). This represents a mix of 256b and 4k inodes (70% 256b). Assuming a 8x increase the upper limit of needs would be 128TB. Since some of that is already in 4K inodes, I feel an allocation of 90-100 TB (4-5% of data space) is closer to reality. I don?t know if having a separate metadata pool makes sense if I?m using the V4 format, in which the block size of metadata and data is the same. Summary, I think the best options are: Option (1): 2 file systems of 1PB each. 1PB data pool, 50TB system pool, 1MB block size, 2x replicated metadata Option (2): 2 file systems of 1PB each. 1PB data/metadata pool, 1MB block size, 2x replicated metadata (preferred, then I don?t need to manage my metadata space) Any thoughts would be appreciated. Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Wed Apr 10 18:57:32 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 10 Apr 2019 13:57:32 -0400 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. -------------- next part -------------- An HTML attachment was scrubbed... URL: From TOMP at il.ibm.com Wed Apr 10 21:11:17 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Wed, 10 Apr 2019 23:11:17 +0300 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: Its also important to look into the actual space "wasted" by the "subblock mismatch". For example, a snip from a filehist output I've found somewhere: File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 2M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 1297314 2.65% 0.00% 0.00% 1 34014892 72.11% 0.74% 0.59% 2 2217365 76.64% 0.84% 0.67% 3 1967998 80.66% 0.96% 0.77% 4 798170 82.29% 1.03% 0.83% 5 1518258 85.39% 1.20% 0.96% 6 581539 86.58% 1.27% 1.02% 7 659969 87.93% 1.37% 1.10% 8 1178798 90.33% 1.58% 1.27% 9 189220 90.72% 1.62% 1.30% 10 130197 90.98% 1.64% 1.32% So, 72% of the files are smaller then 1 subblock ( 2M in the above case BTW). If, for example, we'll double it - we will "waste" ~76% of the files, and if we'll push it to 16M it will be ~90% of the files... But, we really care about capacity, right? So, going into the 16M extreme, we'll "waste" 1.58% of the capacity ( worst case of course). So, if it will give you ( highly depends on the workload of course) 4X the performance ( just for the sake of discussion) - will it be OK to pay the 1.5% "premium" ? Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Marc A Kaplan" To: gpfsug main discussion list Date: 10/04/2019 20:57 Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems Sent by: gpfsug-discuss-bounces at spectrumscale.org If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. 
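Putting these tunables together with the numbers earlier in the thread, a hedged sketch of what such a file system creation could look like (the device name, stanza file and exact values are assumptions for illustration, not a recommendation):

    # Assumes nsd_stanzas.txt places dataOnly NSDs in a separate 'data' pool and
    # metadataOnly NSDs in the system pool, so the two block sizes can differ.
    mmcrfs fs1 -F nsd_stanzas.txt \
        -B 1M --metadata-block-size 256K \
        -i 4096 \
        -m 2 -M 2 -r 1 -R 2 \
        -A yes -Q yes -j scatter

If the existing 4.2.3 clients still need to mount the result, an appropriate --version value would also be required so that the file system is not created at the 5.x format.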
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=pKTwc3LbUTao8mMRXJzrpTnBdOxO9b7mRlJZiUHOof4&s=YHGve_DLxkWdwq7yiDHjBvXoHmwLkUh7zBiK7LUpmsw&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From TOMP at il.ibm.com Wed Apr 10 21:19:15 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Wed, 10 Apr 2019 23:19:15 +0300 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: Just to clarify - its 2M block size, so 64k subblock size. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Tomer Perry" To: gpfsug main discussion list Date: 10/04/2019 23:11 Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems Sent by: gpfsug-discuss-bounces at spectrumscale.org Its also important to look into the actual space "wasted" by the "subblock mismatch". For example, a snip from a filehist output I've found somewhere: File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 2M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 1297314 2.65% 0.00% 0.00% 1 34014892 72.11% 0.74% 0.59% 2 2217365 76.64% 0.84% 0.67% 3 1967998 80.66% 0.96% 0.77% 4 798170 82.29% 1.03% 0.83% 5 1518258 85.39% 1.20% 0.96% 6 581539 86.58% 1.27% 1.02% 7 659969 87.93% 1.37% 1.10% 8 1178798 90.33% 1.58% 1.27% 9 189220 90.72% 1.62% 1.30% 10 130197 90.98% 1.64% 1.32% So, 72% of the files are smaller then 1 subblock ( 2M in the above case BTW). If, for example, we'll double it - we will "waste" ~76% of the files, and if we'll push it to 16M it will be ~90% of the files... But, we really care about capacity, right? So, going into the 16M extreme, we'll "waste" 1.58% of the capacity ( worst case of course). So, if it will give you ( highly depends on the workload of course) 4X the performance ( just for the sake of discussion) - will it be OK to pay the 1.5% "premium" ? Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Marc A Kaplan" To: gpfsug main discussion list Date: 10/04/2019 20:57 Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems Sent by: gpfsug-discuss-bounces at spectrumscale.org If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=qbhRxpvXiJPC72GAztszQ27LP3W7o1nmJYNV1rP2k2U&s=T5j2wkoj3NuxnK-RAMPlSc9vYHIViTOe8hGF68u5VsU&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri Apr 12 10:38:32 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 12 Apr 2019 09:38:32 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0946DFC72F618f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: From jose.filipe.higino at gmail.com Fri Apr 12 11:52:21 2019 From: jose.filipe.higino at gmail.com (=?UTF-8?Q?Jos=C3=A9_Filipe_Higino?=) Date: Fri, 12 Apr 2019 22:52:21 +1200 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: Does not this depend on the License type... Being licensed by data... gives you the ability to spin as much client nodes as possible... including to the ESS cluster right? On Fri, 12 Apr 2019 at 21:38, Daniel Kidger wrote: > > Yes I am aware of the FAQ, and it particular Q13.17 which says: > > *No, systems from OEM vendors are considered distinct products even when > they embed IBM Spectrum Scale. They cannot be part of the same cluster as > IBM licenses.* > > But if this statement is taken literally, then once a customer has bought > say a Lenovo GSS/DSS-G, they are then "locked-in" to buying more storage > other OEM/ESA partners (Lenovo, Bull, DDN, etc.), as above statement > suggests that they cannot add IBM storage such as ESS to their GPFS cluster. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "RICHARD RUPP" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Sun, Apr 7, 2019 4:49 PM > > > *This has been publically documented in the Spectrum Scale FAQ Q13.17, > Q13.18 and Q13.19.* > > Regards, > > *Richard Rupp*, Sales Specialist, *Phone:* *1-347-510-6746* > > > [image: Inactive hide details for "Daniel Kidger" ---04/06/2019 10:12:12 > AM---There is a non-technical issue you may need to consider.]"Daniel > Kidger" ---04/06/2019 10:12:12 AM---There is a non-technical issue you may > need to consider. IBM has set licensing rules about mixing in > > From: "Daniel Kidger" > To: "gpfsug main discussion list" > Date: 04/06/2019 10:12 AM > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > There is a non-technical issue you may need to consider. 
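For an existing file system, the geometry behind this trade-off can be read back directly, which makes it easy to redo the arithmetic above for a given block size choice (fs1 is a placeholder name):

    # Block size, minimum fragment (subblock) size, inode size and format version:
    mmlsfs fs1 -B -f -i -V

    # Per-pool data and metadata usage, to see what the subblock "premium" costs in practice:
    mmdf fs1

Multiplying the number of files smaller than one subblock by the fragment size reported above gives an upper bound on the capacity overhead, which is where the ~1.5% worst-case figure for the 16M example comes from.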
> IBM has set licensing rules about mixing in the same Spectrum Scale > cluster both ESS from IBM and 3rd party storage that is licensed under > ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). > > I am sure Carl Zetie or other IBMers who watch this list can explain the > exact restrictions. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > *+* <+44-7818%20522%20266>*44-(0)7818 522 266* <+44-7818%20522%20266> > *daniel.kidger at uk.ibm.com* > > > > On 3 Apr 2019, at 19:47, Sanchez, Paul <*Paul.Sanchez at deshaw.com* > > wrote: > > - > - > - > - note though you can't have GNR based vdisks (ESS/DSS-G) in > the same storage pool. > > At one time there was definitely a warning from IBM in the docs > about not mixing big-endian and little-endian GNR in the same > cluster/filesystem. But at least since Nov 2017, IBM has published videos > showing clusters containing both. (In my opinion, they had to support this > because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for > Scale itself, I can confirm that filesystems can contain NSDs which are > provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with > SAN storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) > > -Paul > > -----Original Message----- > From: *gpfsug-discuss-bounces at spectrumscale.org* > < > *gpfsug-discuss-bounces at spectrumscale.org* > > On Behalf Of Simon > Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list <*gpfsug-discuss at spectrumscale.org* > > > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other > SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't > have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks > then you are going to have to have a new filesystem and copy data. So it > depends what your endgame is really. We just did such a process and one of > my colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: *gpfsug-discuss-bounces at spectrumscale.org* > [ > *gpfsug-discuss-bounces at spectrumscale.org* > ] on behalf of > *prasad.surampudi at theatsgroup.com* > [ > *prasad.surampudi at theatsgroup.com* > ] > Sent: 03 April 2019 17:12 > To: *gpfsug-discuss at spectrumscale.org* > > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum > Scale cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. 
Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0946DFC72F618f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: From daniel.kidger at uk.ibm.com Fri Apr 12 12:35:38 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 12 Apr 2019 11:35:38 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16a111c26babd5baef61.gif Type: image/gif Size: 105 bytes Desc: not available URL: From jose.filipe.higino at gmail.com Fri Apr 12 14:11:59 2019 From: jose.filipe.higino at gmail.com (=?UTF-8?Q?Jos=C3=A9_Filipe_Higino?=) Date: Sat, 13 Apr 2019 01:11:59 +1200 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: got it now. Sorry, I miss understood that. I was already aware. =) On Fri, 12 Apr 2019 at 23:35, Daniel Kidger wrote: > Jose, > I was not considering client nodes at all. > Under the current license models, all licenses are capacity based (in two > flavours: per-TiB or per-disk), and so adding new clients is never a > licensing issue. > My point was that if you own an OEM supplied cluster from say Lenovo, you > can add to that legally from many vendors , just not from IBM themselves. > (or maybe the FAQ rules need further clarification?) > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "Jos? Filipe Higino" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Fri, Apr 12, 2019 11:52 AM > > Does not this depend on the License type... > > Being licensed by data... gives you the ability to spin as much client > nodes as possible... 
including to the ESS cluster right? > > On Fri, 12 Apr 2019 at 21:38, Daniel Kidger > wrote: > > > Yes I am aware of the FAQ, and it particular Q13.17 which says: > > *No, systems from OEM vendors are considered distinct products even when > they embed IBM Spectrum Scale. They cannot be part of the same cluster as > IBM licenses.* > > But if this statement is taken literally, then once a customer has bought > say a Lenovo GSS/DSS-G, they are then "locked-in" to buying more storage > other OEM/ESA partners (Lenovo, Bull, DDN, etc.), as above statement > suggests that they cannot add IBM storage such as ESS to their GPFS cluster. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "RICHARD RUPP" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Sun, Apr 7, 2019 4:49 PM > > > *This has been publically documented in the Spectrum Scale FAQ Q13.17, > Q13.18 and Q13.19.* > > Regards, > > *Richard Rupp*, Sales Specialist, *Phone:* *1-347-510-6746* > > > [image: Inactive hide details for "Daniel Kidger" ---04/06/2019 10:12:12 > AM---There is a non-technical issue you may need to consider.]"Daniel > Kidger" ---04/06/2019 10:12:12 AM---There is a non-technical issue you may > need to consider. IBM has set licensing rules about mixing in > > From: "Daniel Kidger" > To: "gpfsug main discussion list" > Date: 04/06/2019 10:12 AM > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > There is a non-technical issue you may need to consider. > IBM has set licensing rules about mixing in the same Spectrum Scale > cluster both ESS from IBM and 3rd party storage that is licensed under > ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). > > I am sure Carl Zetie or other IBMers who watch this list can explain the > exact restrictions. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > *+* <+44-7818%20522%20266>*44-(0)7818 522 266* <+44-7818%20522%20266> > *daniel.kidger at uk.ibm.com* > > > > On 3 Apr 2019, at 19:47, Sanchez, Paul <*Paul.Sanchez at deshaw.com* > > wrote: > > - > - > - > - note though you can't have GNR based vdisks (ESS/DSS-G) in > the same storage pool. > > At one time there was definitely a warning from IBM in the docs > about not mixing big-endian and little-endian GNR in the same > cluster/filesystem. But at least since Nov 2017, IBM has published videos > showing clusters containing both. (In my opinion, they had to support this > because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for > Scale itself, I can confirm that filesystems can contain NSDs which are > provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with > SAN storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. 
As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) > > -Paul > > -----Original Message----- > From: *gpfsug-discuss-bounces at spectrumscale.org* > < > *gpfsug-discuss-bounces at spectrumscale.org* > > On Behalf Of Simon > Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list <*gpfsug-discuss at spectrumscale.org* > > > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other > SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't > have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks > then you are going to have to have a new filesystem and copy data. So it > depends what your endgame is really. We just did such a process and one of > my colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: *gpfsug-discuss-bounces at spectrumscale.org* > [ > *gpfsug-discuss-bounces at spectrumscale.org* > ] on behalf of > *prasad.surampudi at theatsgroup.com* > [ > *prasad.surampudi at theatsgroup.com* > ] > Sent: 03 April 2019 17:12 > To: *gpfsug-discuss at spectrumscale.org* > > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum > Scale cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16a111c26babd5baef61.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Fri Apr 12 19:59:45 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 12 Apr 2019 18:59:45 +0000 Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Message-ID: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Anyone care to tell me why this is failing or how I can do further debug. Cluster is otherwise healthy. Bob Oesterlin Sr Principal Storage Engineer, Nuance Time Cluster Name Reporting Node Event Name Entity Type Entity Name Severity Message 12.04.2019 13:03:26.429 nrg.gssio1-hs ems1-hs gui_refresh_task_failed NODE ems1-hs WARNING The following GUI refresh task(s) failed: FILESETS -------------- next part -------------- An HTML attachment was scrubbed... URL: From PPOD at de.ibm.com Fri Apr 12 20:05:54 2019 From: PPOD at de.ibm.com (Przemyslaw Podfigurny1) Date: Fri, 12 Apr 2019 19:05:54 +0000 Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS In-Reply-To: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> References: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962140.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962141.png Type: image/png Size: 6645 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962142.png Type: image/png Size: 1167 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Fri Apr 12 20:18:20 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 12 Apr 2019 19:18:20 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: FW: gui_refresh_task_failed : FILESETS In-Reply-To: References: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: Ah - failing because it?s checking the remote file system for information - how do I disable that? 
root at ems1 ~]# /usr/lpp/mmfs/gui/cli/runtask filesets --debug debug: locale=en_US debug: Running 'mmlsfileset 'fs1' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=fs1 group_by gpfs_fset_name last 13 bucket_size 300' debug: Running 'mmlsfileset 'fs1test' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=fs1test group_by gpfs_fset_name last 13 bucket_size 300' debug: Running 'mmlsfileset 'nrg5_tools' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=tools group_by gpfs_fset_name last 13 bucket_size 300' on remote cluster nrg5-gpfs.nrg5-gpfs01 err: com.ibm.fscc.zimon.unified.ZiMONException: Remote access is not configured debug: Will not raise the following event using 'mmsysmonc' since it already exists in the database: reportingNode = 'ems1-hs', eventName = 'gui_refresh_task_failed', entityId = '3', arguments = 'FILESETS', identifier = 'null' err: com.ibm.fscc.zimon.unified.ZiMONException: Remote access is not configured err: com.ibm.fscc.cli.CommandException: EFSSG1150C Running specified task was unsuccessful. at com.ibm.fscc.cli.CommandException.createCommandException(CommandException.java:117) at com.ibm.fscc.newcli.commands.task.CmdRunTask.doExecute(CmdRunTask.java:84) at com.ibm.fscc.newcli.internal.AbstractCliCommand.execute(AbstractCliCommand.java:156) at com.ibm.fscc.cli.CliProtocol.processNewStyleCommand(CliProtocol.java:460) at com.ibm.fscc.cli.CliProtocol.processRequest(CliProtocol.java:446) at com.ibm.fscc.cli.CliServer$CliClientServer.run(CliServer.java:97) EFSSG1150C Running specified task was unsuccessful. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: on behalf of Przemyslaw Podfigurny1 Reply-To: gpfsug main discussion list Date: Friday, April 12, 2019 at 2:06 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: [EXTERNAL] Re: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Execute the refresh task with debug option enabled on your GUI node ems1-hs to see what is the cause: /usr/lpp/mmfs/gui/cli/runtask filesets --debug Mit freundlichen Gr??en / Kind regards [cid:15550956962140] [IBM Spectrum Scale] ? ? Przemyslaw Podfigurny Software Engineer, Spectrum Scale GUI Department M069 / Spectrum Scale Software Development +49 7034 274 5403 (Office) +49 1624 159 497 (Mobile) ppod at de.ibm.com [cid:15550956962142] IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 ----- Original message ----- From: "Oesterlin, Robert" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Date: Fri, Apr 12, 2019 9:00 PM Anyone care to tell me why this is failing or how I can do further debug. Cluster is otherwise healthy. 
Bob Oesterlin Sr Principal Storage Engineer, Nuance Time Cluster Name Reporting Node Event Name Entity Type Entity Name Severity Message 12.04.2019 13:03:26.429 nrg.gssio1-hs ems1-hs gui_refresh_task_failed NODE ems1-hs WARNING The following GUI refresh task(s) failed: FILESETS _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1168 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 6646 bytes Desc: image002.png URL: From sandeep.patil at in.ibm.com Mon Apr 15 09:54:05 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Mon, 15 Apr 2019 14:24:05 +0530 Subject: [gpfsug-discuss] IBM Spectrum Scale Security Survey Message-ID: bcc: gpfsug-discuss at spectrumscale.org Dear Spectrum Scale User, Below is a survey link where we are seeking feedback to improve and enhance IBM Spectrum Scale. This is an anonymous survey and your participation in this survey is completely voluntary. IBM Spectrum Scale Cyber Security Survey https://www.surveymonkey.com/r/9ZNCZ75 (Average time of 4 mins with 10 simple questions). Your response is invaluable to us. Thank you and looking forward for your participation. Regards IBM Spectrum Scale Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From PPOD at de.ibm.com Mon Apr 15 10:18:00 2019 From: PPOD at de.ibm.com (Przemyslaw Podfigurny1) Date: Mon, 15 Apr 2019 09:18:00 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: FW: gui_refresh_task_failed : FILESETS In-Reply-To: References: , <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530160.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530161.png Type: image/png Size: 6645 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530162.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: From prasad.surampudi at theatsgroup.com Tue Apr 16 13:38:34 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Tue, 16 Apr 2019 12:38:34 +0000 Subject: [gpfsug-discuss] Spectrum Scale Replication across failure groups In-Reply-To: References: , Message-ID: We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Tue Apr 16 14:15:30 2019 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 16 Apr 2019 09:15:30 -0400 Subject: [gpfsug-discuss] Spectrum Scale Replication across failure groups In-Reply-To: References: Message-ID: I believe that -1 is "special", in that all -1?s are different form each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn?t specify -1, it was likely assigned automatically. Read the first paragraph in the failureGroup entry in: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_mmcrnsd.htm I do realize that the subsequent paragraphs do confuse the issue somewhat, but the first paragraph describes what?s happening. Liberty, -- Stephen > On Apr 16, 2019, at 8:38 AM, Prasad Surampudi > wrote: > > We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? > > > mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com > Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Tue Apr 16 14:48:47 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Tue, 16 Apr 2019 09:48:47 -0400 Subject: [gpfsug-discuss] Spectrum Scale Replication across failuregroups In-Reply-To: References: Message-ID: I think it would be wise to first set the failure group on the existing NSDs to a valid value and not use -1. I would also suggest you not use consecutive numbers like 1 and 2 but something with some distance between them, for example 10 and 20, or 100 and 200. 
Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Stephen Ulmer To: gpfsug main discussion list Cc: "gpfsug-discuss-request at spectrumscale.org" Date: 04/16/2019 09:18 AM Subject: Re: [gpfsug-discuss] Spectrum Scale Replication across failure groups Sent by: gpfsug-discuss-bounces at spectrumscale.org I believe that -1 is "special", in that all -1?s are different form each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn?t specify -1, it was likely assigned automatically. Read the first paragraph in the failureGroup entry in: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_mmcrnsd.htm I do realize that the subsequent paragraphs do confuse the issue somewhat, but the first paragraph describes what?s happening. Liberty, -- Stephen On Apr 16, 2019, at 8:38 AM, Prasad Surampudi < prasad.surampudi at theatsgroup.com> wrote: We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=qj8cjidW9IKqym8U4WV2Buxy_hsl7bpmELnPNc8MYPg&s=hNTiNvPnIYhBCgPOm2NLtq9vP1MIVCipuIA8snw7Eg4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at markomanolis.com Thu Apr 18 16:16:52 2019 From: george at markomanolis.com (George Markomanolis) Date: Thu, 18 Apr 2019 11:16:52 -0400 Subject: [gpfsug-discuss] IO500 - Call for Submission for ISC-19 Message-ID: Dear all, Please consider the submission of results to the new list. *Deadline*: 10 June 2019 AoE The IO500 is now accepting and encouraging submissions for the upcoming 4th IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once again, we are also accepting submissions to the 10 node I/O challenge to encourage submission of small scale results. 
The new ranked lists will be announced at our ISC19 BoF [2]. We hope to see you, and your results, there. The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit, and we look forward to seeing many of you at ISC 2019!

Please note that submissions of all sizes are welcome; the site has customizable sorting, so it is possible to submit on a small system and still get a very good per-client score, for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below.

Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017, published its first list at SC17, and has grown exponentially since then. The need for such an initiative has long been known within High-Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:
1. Maximizing simplicity in running the benchmark suite
2. Encouraging complexity in tuning for performance
3. Allowing submitters to highlight their "hero run" performance numbers
4. Forcing submitters to simultaneously report performance for challenging IO patterns.

Specifically, the benchmark suite includes a hero run of both IOR and mdtest, configured however possible to maximize performance and establish an upper bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower bound. Finally, it includes a namespace search, as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well measured. Submitters are encouraged to share their tuning insights for publication.

The goals of the community are also multi-fold:
1. Gather historical data for the sake of analysis and to aid predictions of storage futures
2. Collect tuning information to share valuable performance optimizations across the community
3. Encourage vendors and designers to optimize for workloads beyond "hero runs"
4. Establish bounded expectations for users, procurers, and administrators

10 Node I/O Challenge

At ISC, we will announce our second IO-500 award for the 10 Node Challenge. This challenge is conducted using the regular IO-500 benchmark, however, with the rule that exactly 10 compute nodes must be used to run the benchmark (one exception is find, which may use 1 node). You may use any shared storage with, e.g., any number of servers. When submitting for the IO-500 list, you can opt in for "Participate in the 10 compute node challenge only", in which case we won't include the results in the ranked list. Other 10 compute node submissions will be included in the full list and in the ranked list. We will announce the result in a separate derived list and in the full list, but not on the ranked IO-500 list at io500.org.

Birds-of-a-feather

Once again, we encourage you to submit [1], to join our community, and to attend our BoF "The IO-500 and the Virtual Institute of I/O" at ISC 2019 [2], where we will announce the fourth IO500 list and second 10 node challenge list.
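For anyone who has not run the suite before, the two building blocks can be tried by hand before attempting a full submission. The commands below are only a sketch with illustrative parameters (paths, process counts and sizes are assumptions, and the real IO-500 harness fixes the prescribed values for you), so please follow the submission instructions at [1] for an actual run:

  # "hero"-style IOR bandwidth run: large sequential I/O, one file per process
  mpirun -np 80 ior -w -r -F -t 16m -b 16g -o /gpfs/fs1/io500/ior_easy/testfile

  # "hero"-style mdtest metadata run: many files created, stat'ed and removed per process
  mpirun -np 80 mdtest -F -n 10000 -d /gpfs/fs1/io500/mdtest_easy

The prescribed ("hard") runs then use fixed, small transfer sizes against a shared file and a shared directory, which is what makes them a useful lower bound, and the official scripts add the namespace find phase and compute the final score.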
The current list includes results from BeeGFS, DataWarp, IME, Lustre, Spectrum Scale, and WekaIO. We hope that the next list has even more. We look forward to answering any questions or concerns you might have.

- [1] http://io500.org/submission
- [2] The BoF schedule will be announced soon

-------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.caubet at psi.ch Thu Apr 18 16:32:58 2019 From: marc.caubet at psi.ch (Caubet Serrabou Marc (PSI)) Date: Thu, 18 Apr 2019 15:32:58 +0000 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Message-ID: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>

Hi all, I would like to have some hints about the following problem:

Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads'
Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion
Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion
Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion

I am testing a new GPFS cluster (a GPFS client cluster with computing nodes remotely mounting the storage GPFS cluster) and I am running 65 gpfsperf commands (1 command per client in parallel) as follows:

/usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8

I am unable to reach more than 6.5 GB/s (Lenovo DSS G240, GPFS 5.0.2-1, testing a 'home' filesystem with a 1MB blocksize and 8KB subblocks). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain the meaning of this message? How could I improve this?
Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Thu Apr 18 16:54:18 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Thu, 18 Apr 2019 11:54:18 -0400 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> Message-ID: We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. On which node did you collect the waiters and what command did you run to capture the data? What is the network connection between the remote cluster and the storage cluster? 
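As a side note, a quick way to capture waiters from every node at once is something like the following; this is only a sketch (mmdsh ships with Spectrum Scale, and the node class name used here is an example), but knowing whether the long waiters sit on the NSD servers, the clients, or both usually narrows the problem down quickly:

  # collect waiters from all nodes in the cluster
  mmdsh -N all "/usr/lpp/mmfs/bin/mmdiag --waiters"

  # or only from the NSD server nodes
  mmdsh -N nsdNodes "/usr/lpp/mmfs/bin/mmdiag --waiters"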
Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? 
Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=dHk9lhiQqWEszuFxOcyajfLhFM0xLk7rMkdNNNQOuyQ&s=HTJYxe-mxXg7paKH_AWo3OU8-A_YHvpotkB9f0h2amg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.caubet at psi.ch Thu Apr 18 18:41:45 2019 From: marc.caubet at psi.ch (Caubet Serrabou Marc (PSI)) Date: Thu, 18 Apr 2019 17:41:45 +0000 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>, Message-ID: <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> Hi, thanks a lot. About the requested information: * Waiters were captured with the command 'mmdiag --waiters', and it was performed on one of the IO (NSD) nodes. 
* Connection between storage and client clusters is with Infiniband EDR. For the GPFS client cluster we have 3 chassis, each one has 24 blades with an unmanaged EDR switch (24 ports for the blades, 12 external), and currently 10 EDR external ports are connected for external connectivity. On the other hand, the GPFS storage cluster has 2 IO nodes (as mentioned in the previous e-mail, DSS G240). Each IO node has 4 x EDR ports connected. Regarding the Infiniband connectivity, my network contains 2 top EDR managed switches configured with up/down routing, connecting the unmanaged switches from the chassis and the 2 managed Infiniband switches for the storage (for redundancy).

Whenever needed I can go through a PMR if this would ease the debugging, no problem for me. I was wondering about the meaning of "waiting for helper threads" and what could be the reason for it.

Thanks a lot for your help and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com] Sent: Thursday, April 18, 2019 5:54 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. * On which node did you collect the waiters and what command did you run to capture the data? * What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 
192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Thu Apr 18 21:55:25 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Thu, 18 Apr 2019 16:55:25 -0400 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>, <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> Message-ID: Thanks for the information. Since the waiters information is from one of the IO servers then the threads waiting for IO should be waiting for actual IO requests to the storage. Seeing IO operations taking seconds long generally indicates your storage is not working optimally. We would expect IOs to complete in sub-second time, as in some number of milliseconds. You are using a record size of 16M yet you stated the file system block size is 1M. Is that really what you wanted to test? Also, you have included the -fsync option to gpfsperf which will impact the results. Have you considered using the nsdperf program instead of the gpfsperf program? You can find nsdperf in the samples/net directory. One last thing I noticed was in the configuration of your management node. It showed the following. [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k To my understanding the management node has no direct access to the storage, that is any IO requests to the file system from the management node go through the IO nodes. That being true GPFS will not make use of NSD worker threads on the management node. As you can see your configuration is creating 3K NSD worker threads and none will be used so you might want to consider changing that value to 1. It will not change your performance numbers but it should free up a bit of memory on the management node. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
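A minimal sketch of the two follow-ups suggested above; the node name is taken from the posted mmlsconfig output, and whether the change needs a GPFS restart on that node to take effect can vary by release, so please verify against the mmchconfig documentation:

  # stop reserving ~3k unused NSD worker threads on the management node
  mmchconfig nsdMaxWorkerThreads=1,nsdMinWorkerThreads=1 -N merlindssmgt01
  mmlsconfig nsdMaxWorkerThreads

  # nsdperf is shipped as source in the samples directory mentioned above and
  # must be built before use; it measures raw network throughput between nodes
  # without any file system overhead, which helps separate network from disk issues
  ls /usr/lpp/mmfs/samples/net/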
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Cc: "gpfsug-discuss-bounces at spectrumscale.org" Date: 04/18/2019 01:45 PM Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, thanks a lot. About the requested information: * Waiters were captured with the command 'mmdiag --waiters', and it was performed on one of the IO (NSD) nodes. * Connection between storage and client clusters is with Infiniband EDR. For the GPFS client cluster we have 3 chassis, each one has 24 blades with unmanaged EDR switch (24 for the blades, 12 external), and currently 10 EDR external ports are connected for external connectivity. On the other hand, the GPFS storage cluster has 2 IO nodes (as commented in the previous e-mail, DSS G240). Each IO node has connected 4 x EDR ports. Regarding the Infiniband connectivty, my network contains 2 top EDR managed switches configured with up/down routing, connecting the unmanaged switches from the chassis and the 2 managed Infiniband switches for the storage (for redundancy). Whenever needed I can go through PMR if this would easy the debug, no problem for me. I was wondering about the meaning "waiting for helper threads" and what could be the reason for that Thanks a lot for your help and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com] Sent: Thursday, April 18, 2019 5:54 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. On which node did you collect the waiters and what command did you run to capture the data? What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch 
adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=YUp1yAfDFGnpxatHqsvM9LzHFt--RrMBCKoQF_Fa_zQ&s=4NBW1TmPGKAkvbymtK2QWCnLnBp-S0AVmEJxT2H1z0k&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkachwala at ddn.com Tue Apr 23 13:25:41 2019 From: tkachwala at ddn.com (Taizun Kachwala) Date: Tue, 23 Apr 2019 12:25:41 +0000 Subject: [gpfsug-discuss] Hi from Taizun (DDN Storage @Pune, India) Message-ID: Hi, My name is Taizun and I lead the effort of developing & supporting DDN Solution using IBM GPFS/Spectrum Scale as an Embedded application stack making it a converged infrastructure using DDN Storage Fusion Architecture (SFA) appliances (GS18K, GS14K, GS400NV/200NV and GS 7990) and also as an independent product solution that can be deployed on bare metal servers as NSD server or client role. Our solution is mainly targeted towards HPC customers in AI, Analytics, BigData, High-Performance File-Server, etc. We support 4.x as well as 5.x SS product-line on CentOS & RHEL respectively. Thanks & Regards, Taizun Kachwala Lead SDET, DDN India +91 98222 07304 +91 95118 89204 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prasad.surampudi at theatsgroup.com Tue Apr 23 17:14:24 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Tue, 23 Apr 2019 16:14:24 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 In-Reply-To: References: Message-ID: I am trying to analyze a filehist report of a Spectrum Scale filesystem I recently collected. Given below is the data and I have put my interpretation in parentheses. Could someone from Sale development review and let me know if my interpretation is correct? Filesystem block size is 16 MB and system pool block size is 256 KB. GPFS Filehist report for Test Filesystem All: Files = 38,808,641 (38 Million Total Files) All: Files in inodes = 8153748 Available space = 1550139596472320 1550140 GB 1550 TB Total Size of files = 1110707126790022 Total Size of files in inodes = 26008177568 Total Space = 1123175375306752 1123175 GB 1123 TB Largest File = 3070145200128 - ( 2.8 TB) Average Size = 28620098 ? ( 27 MB ) Non-zero: Files = 38642491 Average NZ size = 28743155 Directories = 687233 (Total Number of Directories) Directories in inode = 650552 Total Dir Space = 5988433920 Avg Entries per dir = 57.5 (Avg # files per Directory) Files with indirect blocks = 181003 File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. 
Histogram of files <= one 16M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 7,669,346 19.76% 0.00% 0.00% ( ~7 Million files <= 512 KB ) 1 25,548,588 85.59% 1.19% 0.86% - ( ~25 Million files > 512 KB <= 1 MB ) 2 1,270,115 88.87% 1.31% 0.95% - (~1 Million files > 1 MB <= 1.5 MB ) .... .... .... 32 10387 97.37% 2.43% 1.76% Histogram of files with N 16M blocks (plus end fragment) Blocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 1 177550 97.82% 2.70% 1.95% ( ~177 K files <= 16 MB) .... .... .... 100 640 99.77% 17.31% 12.54% Number of files with more than 100 16M blocks 101+ 88121 100.00% 100.00% 72.46% ( ~88 K files > 1600 MB) -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 25 16:55:24 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale UG Chair)) Date: Thu, 25 Apr 2019 16:55:24 +0100 Subject: [gpfsug-discuss] (no subject) Message-ID: An HTML attachment was scrubbed... URL: From luke.raimbach at googlemail.com Thu Apr 25 19:29:04 2019 From: luke.raimbach at googlemail.com (Luke Raimbach) Date: Thu, 25 Apr 2019 19:29:04 +0100 Subject: [gpfsug-discuss] (no subject) In-Reply-To: References: Message-ID: Pop me down for a spot old bean. Make sure IBM put on good sandwiches! On Thu, 25 Apr 2019, 16:55 Simon Thompson (Spectrum Scale UG Chair), < chair at spectrumscale.org> wrote: > It's just a few weeks until the UK/Worldwide Spectrum Scale user group in > London on 8th/9th May 2019. > > As we need to confirm numbers for catering, we'll be closing registration > on 1st May. > > If you plan to attend, please register via: > > https://www.spectrumscaleug.org/event/uk-user-group-meeting/ > > (I think we have about 10 places left) > > The full agenda is now posted and our evening event is confirmed, thanks > to the support of our sponsors IBM, OCF, e8 storage, Lenovo, DDN and NVIDA. > > Simon > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Fri Apr 26 07:44:58 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 26 Apr 2019 14:44:58 +0800 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 In-Reply-To: References: Message-ID: From my understanding, your interpretation is correct. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
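For readers double-checking the conversions in the report above, the raw figures are in bytes, so a quick bc sanity check (showing binary units where the report appears to use them and decimal units where it does not) looks like this:

  echo "scale=2; 3070145200128/2^40" | bc       # 2.79    -> the "2.8 TB" largest file
  echo "scale=2; 28620098/2^20" | bc            # 27.29   -> the "27 MB" average size
  echo "scale=2; 1123175375306752/10^12" | bc   # 1123.17 -> the 1123 TB total space
  echo "scale=2; 1550139596472320/10^12" | bc   # 1550.13 -> the 1550 TB available space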
From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 04/24/2019 12:17 AM Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 Sent by: gpfsug-discuss-bounces at spectrumscale.org I am trying to analyze a filehist report of a Spectrum Scale filesystem I recently collected. Given below is the data and I have put my interpretation in parentheses. Could someone from Sale development review and let me know if my interpretation is correct? Filesystem block size is 16 MB and system pool block size is 256 KB. GPFS Filehist report for Test Filesystem All: Files = 38,808,641 (38 Million Total Files) All: Files in inodes = 8153748 Available space = 1550139596472320 1550140 GB 1550 TB Total Size of files = 1110707126790022 Total Size of files in inodes = 26008177568 Total Space = 1123175375306752 1123175 GB 1123 TB Largest File = 3070145200128 - ( 2.8 TB) Average Size = 28620098 ? ( 27 MB ) Non-zero: Files = 38642491 Average NZ size = 28743155 Directories = 687233 (Total Number of Directories) Directories in inode = 650552 Total Dir Space = 5988433920 Avg Entries per dir = 57.5 (Avg # files per Directory) Files with indirect blocks = 181003 File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 16M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 7,669,346 19.76% 0.00% 0.00% ( ~7 Million files <= 512 KB ) 1 25,548,588 85.59% 1.19% 0.86% - ( ~25 Million files > 512 KB <= 1 MB ) 2 1,270,115 88.87% 1.31% 0.95% - (~1 Million files > 1 MB <= 1.5 MB ) .... .... .... 32 10387 97.37% 2.43% 1.76% Histogram of files with N 16M blocks (plus end fragment) Blocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 1 177550 97.82% 2.70% 1.95% ( ~177 K files <= 16 MB) .... .... .... 100 640 99.77% 17.31% 12.54% Number of files with more than 100 16M blocks 101+ 88121 100.00% 100.00% 72.46% ( ~88 K files > 1600 MB) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=uBqBwHtxxGncMVk3Suv2icRbZNIqzOgMlfJ6LnIqNhc&s=WdJyzA9yDIx3Cyj6Kg-LvXKTj8ED4J7wm_5wJ6iyccg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From xhejtman at ics.muni.cz Fri Apr 26 13:17:33 2019 From: xhejtman at ics.muni.cz (Lukas Hejtmanek) Date: Fri, 26 Apr 2019 14:17:33 +0200 Subject: [gpfsug-discuss] gpfs and device number Message-ID: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Hello, I noticed that from time to time, device id of a gpfs volume is not same across whole gpfs cluster. [root at kat1 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root at kat2 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2bh/43d Inode: 3 [root at kat3 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? 
Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2ah/42d Inode: 3 this is really bad for kernel NFS as it uses device id for file handles thus NFS failover leads to nfs stale handle error. Is there a way to force a device number? -- Luk?? Hejtm?nek Linux Administrator only because Full Time Multitasking Ninja is not an official job title From TOMP at il.ibm.com Sat Apr 27 20:37:48 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Sat, 27 Apr 2019 22:37:48 +0300 Subject: [gpfsug-discuss] gpfs and device number In-Reply-To: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> References: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Message-ID: Hi, Please use the fsid option in /etc/exports ( man exports and: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adm_nfslin.htm ) Also check https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adv_cnfs.htm in case you want HA with kernel NFS. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: Lukas Hejtmanek To: gpfsug-discuss at spectrumscale.org Date: 26/04/2019 15:37 Subject: [gpfsug-discuss] gpfs and device number Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, I noticed that from time to time, device id of a gpfs volume is not same across whole gpfs cluster. [root at kat1 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root at kat2 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2bh/43d Inode: 3 [root at kat3 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2ah/42d Inode: 3 this is really bad for kernel NFS as it uses device id for file handles thus NFS failover leads to nfs stale handle error. Is there a way to force a device number? -- Luk?? Hejtm?nek Linux Administrator only because Full Time Multitasking Ninja is not an official job title _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=F4TfIKrFl9BVdEAYxZLWlFF-zF-irdwcP9LnGpgiZrs&s=Ice-yo0p955RcTDGPEGwJ-wIwN9F6PvWOpUvR6RMd4M&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeep.patil at in.ibm.com Mon Apr 29 07:42:18 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Mon, 29 Apr 2019 06:42:18 +0000 Subject: [gpfsug-discuss] Latest Technical Blogs on IBM Spectrum Scale (Q1 2019) In-Reply-To: References: Message-ID: Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Spectrum Scale 5.0.3 https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/ IBM Spectrum Scale HDFS Transparency Ranger Support https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/ Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally, http://www.redbooks.ibm.com/abstracts/redp5527.html?Open Spectrum Scale user group in Singapore, 2019 https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/ 7 traits to use Spectrum Scale to run container workload https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/ Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring Framework https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/ Migrating data from native HDFS to IBM Spectrum Scale based shared storage https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/ Bulk File Creation useful for Test on Filesystems https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 01/14/2019 06:24 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing your business data to support regulatory requirements http://www.redbooks.ibm.com/abstracts/redp5525.html?Open IBM Spectrum Scale Memory Usage https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2 Spectrum Scale and Containers https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/ IBM Elastic Storage Server Performance Graphical Visualization with Grafana https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/ Hadoop Performance for disaggregated compute and storage configurations based on IBM Spectrum Scale Storage https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/ EMS HA in ESS LE (Little Endian) environment https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/ What?s new in ESS 5.3.2 https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/ Administer your Spectrum Scale cluster easily https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/ Disaster Recovery using Spectrum Scale?s Active File Management https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/ Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS) https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/ Whats new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1 https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 10/03/2018 08:48 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. How NFS exports became more dynamic with Spectrum Scale 5.0.2 https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/ HPC storage on AWS (IBM Spectrum Scale) https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/ Upgrade with Excluding the node(s) using Install-toolkit https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/ Offline upgrade using Install-toolkit https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/ What?s New in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/ Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails. 
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.2.0 https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/ Announcing HDP 3.0 support with IBM Spectrum Scale https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/ IBM Spectrum Scale Tuning Overview for Hadoop Workload https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/ Making the Most of Multicloud Storage https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/ Disaster Recovery for Transparent Cloud Tiering using SOBAR https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/ Your Optimal Choice of AI Storage for Today and Tomorrow https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/ Analyze IBM Spectrum Scale File Access Audit with ELK Stack https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/ Mellanox SX1710 40G switch MLAG configuration for IBM ESS https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? SMB and NFS Access issues https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/ Access Control in IBM Spectrum Scale Object https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/ IBM Spectrum Scale HDFS Transparency Docker support https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? Log Collection https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/ Redpapers IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases http://www.redbooks.ibm.com/abstracts/redp5507.html?Open Certifications Assessment of the immutability function of IBM Spectrum Scale Version 5.0 in accordance to US SEC17a-4f, EU GDPR Article 21 Section 1, German and Swiss laws and regulations in collaboration with KPMG. Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5 Full assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 07/03/2018 12:13 AM Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018) Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q2 2018). We now have over 100+ developer blogs. As discussed in User Groups, passing it along: IBM Spectrum Scale 5.0.1 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ IBM Spectrum Scale ILM Policies https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/ IBM Spectrum Scale 5.0.1 ? 
Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ Management GUI enhancements in IBM Spectrum Scale release 5.0.1 https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/ Managing IBM Spectrum Scale services through GUI https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/ Use AWS CLI with IBM Spectrum Scale? object storage https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/ Hadoop Storage Tiering with IBM Spectrum Scale https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/ How many Files on my Filesystem? https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/ Recording Spectrum Scale Object Stats for Potential Billing like Purpose using Elasticsearch https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/ New features in IBM Elastic Storage Server (ESS) Version 5.3 https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/ Using IBM Spectrum Scale for storage in IBM Cloud Private (Missed to send earlier) https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19 Redpapers Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html, Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent Cloud Tiering http://www.redbooks.ibm.com/abstracts/redp5411.html?Open SAP HANA and ESS: A Winning Combination (Update) http://www.redbooks.ibm.com/abstracts/redp5436.html?Open Others IBM Spectrum Scale Software Version Recommendation Preventive Service Planning (Updated) http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703, IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in HCLS https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN& For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/27/2018 05:23 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q1 2018). As discussed in User Groups, passing it along: GDPR Compliance and Unstructured Data Storage https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/ IBM Spectrum Scale for Linux on IBM Z ? Release 5.0 features and highlights https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/ Management GUI enhancements in IBM Spectrum Scale release 5.0.0 https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/ IBM Spectrum Scale 5.0.0 ? What?s new in NFS? 
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/ Benefits and implementation of Spectrum Scale sudo wrappers https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/ IBM Spectrum Scale: Big Data and Analytics Solution Brief https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/ Variant Sub-blocks in Spectrum Scale 5.0 https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/ Compression support in Spectrum Scale 5.0.0 https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale On AWS Cloud : This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps the users who require highly available access to a shared name space across multiple instances with good performance, without requiring an in-depth knowledge of IBM Spectrum Scale. Detailed Demo : https://www.youtube.com/watch?v=6j5Xj_d0bh4 Brief Demo : https://www.youtube.com/watch?v=-aMQKPW_RfY. For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Cc: Doris Conti/Poughkeepsie/IBM at IBMUS Date: 01/10/2018 12:13 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, Here are list of development blogs in the last quarter. Passing it to this email group as Doris had got a feedback in the UG meetings to notify the members with the latest updates periodically. Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ IBM Spectrum Scale MMFSCK ? Savvy Enhancements https://developer.ibm.com/storage/2018/01/05/ibm-spectrum-scale-mmfsck-savvy-enhancements/ ESS Disk Management https://developer.ibm.com/storage/2018/01/02/ess-disk-management/ IBM Spectrum Scale Object Protocol On Ubuntu https://developer.ibm.com/storage/2018/01/01/ibm-spectrum-scale-object-protocol-ubuntu/ IBM Spectrum Scale 5.0 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2017/12/20/ibm-spectrum-scale-5-0-whats-new-object/ A Complete Guide to ? Protocol Problem Determination Guide for IBM Spectrum Scale? ? Part 1 https://developer.ibm.com/storage/2017/12/19/complete-guide-protocol-problem-determination-guide-ibm-spectrum-scale-1/ IBM Spectrum Scale installation toolkit ? 
enhancements over releases https://developer.ibm.com/storage/2017/12/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases/ Network requirements in an Elastic Storage Server Setup https://developer.ibm.com/storage/2017/12/13/network-requirements-in-an-elastic-storage-server-setup/ Co-resident migration with Transparent cloud tierin https://developer.ibm.com/storage/2017/12/05/co-resident-migration-transparent-cloud-tierin/ IBM Spectrum Scale on Hortonworks HDP Hadoop clusters : A Complete Big Data Solution https://developer.ibm.com/storage/2017/12/05/ibm-spectrum-scale-hortonworks-hdp-hadoop-clusters-complete-big-data-solution/ Big data analytics with Spectrum Scale using remote cluster mount & multi-filesystem support https://developer.ibm.com/storage/2017/11/28/big-data-analytics-spectrum-scale-using-remote-cluster-mount-multi-filesystem-support/ IBM Spectrum Scale HDFS Transparency Short Circuit Write Support https://developer.ibm.com/storage/2017/11/28/ibm-spectrum-scale-hdfs-transparency-short-circuit-write-support/ IBM Spectrum Scale HDFS Transparency Federation Support https://developer.ibm.com/storage/2017/11/27/ibm-spectrum-scale-hdfs-transparency-federation-support/ How to configure and performance tuning different system workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-different-system-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning Spark workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-spark-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning database workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-database-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning Hadoop workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/24/configure-performance-tuning-hadoop-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ IBM Spectrum Scale Sharing Nothing Cluster Performance Tuning https://developer.ibm.com/storage/2017/11/24/ibm-spectrum-scale-sharing-nothing-cluster-performance-tuning/ How to Configure IBM Spectrum Scale? with NIS based Authentication. https://developer.ibm.com/storage/2017/11/21/configure-ibm-spectrum-scale-nis-based-authentication/ For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Cc: Doris Conti/Poughkeepsie/IBM at IBMUS Date: 11/16/2017 08:15 PM Subject: Latest Technical Blogs on Spectrum Scale Dear User Group members, Here are the Development Blogs in last 3 months on Spectrum Scale Technical Topics. Spectrum Scale Monitoring ? Know More ? https://developer.ibm.com/storage/2017/11/16/spectrum-scale-monitoring-know/ IBM Spectrum Scale 5.0 Release ? What?s coming ! https://developer.ibm.com/storage/2017/11/14/ibm-spectrum-scale-5-0-release-whats-coming/ Four Essentials things to know for managing data ACLs on IBM Spectrum Scale? 
from Windows https://developer.ibm.com/storage/2017/11/13/four-essentials-things-know-managing-data-acls-ibm-spectrum-scale-windows/ GSSUTILS: A new way of running SSR, Deploying or Upgrading ESS Server https://developer.ibm.com/storage/2017/11/13/gssutils/ IBM Spectrum Scale Object Authentication https://developer.ibm.com/storage/2017/11/02/spectrum-scale-object-authentication/ Video Surveillance ? Choosing the right storage https://developer.ibm.com/storage/2017/11/02/video-surveillance-choosing-right-storage/ IBM Spectrum scale object deep dive training with problem determination https://www.slideshare.net/SmitaRaut/ibm-spectrum-scale-object-deep-dive-training Spectrum Scale as preferred software defined storage for Ubuntu OpenStack https://developer.ibm.com/storage/2017/09/29/spectrum-scale-preferred-software-defined-storage-ubuntu-openstack/ IBM Elastic Storage Server 2U24 Storage ? an All-Flash offering, a performance workhorse https://developer.ibm.com/storage/2017/10/06/ess-5-2-flash-storage/ A Complete Guide to Configure LDAP-based authentication with IBM Spectrum Scale? for File Access https://developer.ibm.com/storage/2017/09/21/complete-guide-configure-ldap-based-authentication-ibm-spectrum-scale-file-access/ Deploying IBM Spectrum Scale on AWS Quick Start https://developer.ibm.com/storage/2017/09/18/deploy-ibm-spectrum-scale-on-aws-quick-start/ Monitoring Spectrum Scale Object metrics https://developer.ibm.com/storage/2017/09/14/monitoring-spectrum-scale-object-metrics/ Tier your data with ease to Spectrum Scale Private Cloud(s) using Moonwalk Universal https://developer.ibm.com/storage/2017/09/14/tier-data-ease-spectrum-scale-private-clouds-using-moonwalk-universal/ Why do I see owner as ?Nobody? for my export mounted using NFSV4 Protocol on IBM Spectrum Scale?? https://developer.ibm.com/storage/2017/09/08/see-owner-nobody-export-mounted-using-nfsv4-protocol-ibm-spectrum-scale/ IBM Spectrum Scale? Authentication using Active Directory and LDAP https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-ldap/ IBM Spectrum Scale? Authentication using Active Directory and RFC2307 https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-rfc2307/ High Availability Implementation with IBM Spectrum Virtualize and IBM Spectrum Scale https://developer.ibm.com/storage/2017/08/30/high-availability-implementation-ibm-spectrum-virtualize-ibm-spectrum-scale/ 10 Frequently asked Questions on configuring Authentication using AD + AUTO ID mapping on IBM Spectrum Scale?. https://developer.ibm.com/storage/2017/08/04/10-frequently-asked-questions-configuring-authentication-using-ad-auto-id-mapping-ibm-spectrum-scale/ IBM Spectrum Scale? Authentication using Active Directory https://developer.ibm.com/storage/2017/07/30/ibm-spectrum-scale-auth-using-active-directory/ Five cool things that you didn?t know Transparent Cloud Tiering on Spectrum Scale can do https://developer.ibm.com/storage/2017/07/29/five-cool-things-didnt-know-transparent-cloud-tiering-spectrum-scale-can/ IBM Spectrum Scale GUI videos https://developer.ibm.com/storage/2017/07/25/ibm-spectrum-scale-gui-videos/ IBM Spectrum Scale? Authentication ? 
Planning for NFS Access https://developer.ibm.com/storage/2017/07/24/ibm-spectrum-scale-planning-nfs-access/ For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media -------------- next part -------------- An HTML attachment was scrubbed... URL:
From chair at spectrumscale.org Tue Apr 30 10:24:45 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 30 Apr 2019 10:24:45 +0100 Subject: [gpfsug-discuss] Break-out session for new user and prospects [London Usergroup] Message-ID: <776770B2-5F84-4462-B900-58EBB982DC1C@spectrumscale.org>
Hi all, We know that a lot of the talks at the user groups are for experienced users. Following feedback from the USA user group, we thought we'd advertise that this year we're planning to run a break-out for new users on day 1. Break-out session for new users and prospects (Wed May 8th, 13:00 - 16:45): This year we will offer a break-out session for new Spectrum Scale users and prospects to get started with Spectrum Scale. In this session we will cover Spectrum Scale use cases, the architecture of a Spectrum Scale environment, and discuss how the many Spectrum Scale features support the different use cases. Please inform customers and colleagues who are interested to learn about Spectrum Scale to grab one of the last seats. Registration link: https://www.spectrumscaleug.org/event/uk-user-group-meeting/ There's just a couple of places left for the user group, so please do share and register if you plan to attend. Simon
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From chair at spectrumscale.org Wed Apr 3 15:43:30 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Wed, 03 Apr 2019 15:43:30 +0100 Subject: [gpfsug-discuss] Reminder Worldwide/UK User group Message-ID: <805C18AE-A2F9-477B-A989-B37D52924849@spectrumscale.org>
I've just published the draft agenda for the worldwide/UK user group on 8th and 9th May in London. https://www.spectrumscaleug.org/event/uk-user-group-meeting/ As AI is clearly a hot topic, we have a number of slots dedicated to Spectrum Scale with AI this year. Registration is available from the link above. We're still filling in some slots on the agenda and if you are a customer and would like to do a site update/talk please let me know. We're thinking about also having a lightning talks slot where people can do 3-5 mins on their use of scale and favourite/worst feature. And if I don't get any volunteers, we'll be picking people from the audience. I'm also pleased to announce that Mellanox Technologies and NVIDIA have joined our other sponsors OCF, e8 Storage, Lenovo, and DDN Storage. Simon
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From prasad.surampudi at theatsgroup.com Wed Apr 3 17:12:33 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 3 Apr 2019 16:12:33 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From S.J.Thompson at bham.ac.uk Wed Apr 3 17:17:44 2019 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Wed, 3 Apr 2019 16:17:44 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable subblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group
From jfosburg at mdanderson.org Wed Apr 3 17:20:48 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 16:20:48 +0000 Subject: [gpfsug-discuss] [EXT] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: <88ad5b6a15c4444596d69503c695a0d1@mdanderson.org> We've added ESSes to existing non-ESS clusters a couple of times. In this case, we had to create a pool for the ESSes so we could send new writes to them and allow us to drain the old non-ESS blocks so we could remove them. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Prasad Surampudi Sent: Wednesday, April 3, 2019 11:12:33 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Adding ESS to existing Scale Cluster WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name?
Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Sanchez at deshaw.com Wed Apr 3 17:41:32 2019 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 3 Apr 2019 16:41:32 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Robert.Oesterlin at nuance.com Wed Apr 3 18:25:54 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 3 Apr 2019 17:25:54 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Apr 3 19:11:45 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 3 Apr 2019 20:11:45 +0200 Subject: [gpfsug-discuss] New ESS install - Network adapter down level In-Reply-To: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> References: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> Message-ID: Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen.buchanan at us.ibm.com Wed Apr 3 19:54:00 2019 From: stephen.buchanan at us.ibm.com (Stephen R Buchanan) Date: Wed, 3 Apr 2019 18:54:00 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Apr 3 20:01:11 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 3 Apr 2019 19:01:11 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Thanks all. I just missed this. Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... 
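Picking up on the pool approach Simon, Jonathan and Paul describe above, here is a rough sketch of the sort of steps involved. All device, NSD, pool and node names below are illustrative only (they are not taken from this thread), and on a real ESS/DSS-G the vdisk/NSD stanzas would normally be generated by the ESS tooling (gssgenvdisks or mmvdisk) rather than written by hand:

    # ess_nsds.stanza -- GNR vdisk-based NSDs from the new building block, in their own pool
    %nsd: nsd=ess_data_01 servers=essio1,essio2 usage=dataOnly pool=essdata failureGroup=30
    %nsd: nsd=ess_data_02 servers=essio2,essio1 usage=dataOnly pool=essdata failureGroup=30

    # Add the new NSDs to the existing filesystem (gpfs0 is a placeholder device name)
    mmadddisk gpfs0 -F ess_nsds.stanza

    # Direct new writes to the new pool with a default placement rule
    echo "RULE 'place_new_data' SET POOL 'essdata'" > placement.pol
    mmchpolicy gpfs0 placement.pol

    # When ready, drain and remove the old V7000 NSDs; mmdeldisk migrates their blocks off first
    mmdeldisk gpfs0 "v7k_data_01;v7k_data_02"

Existing files can be moved over time with an mmapplypolicy MIGRATE rule. The caveat Simon raises still applies: a change of block size or a move to v5 variable subblocks needs a new filesystem and a data copy, not just a pool migration.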
URL: From prasad.surampudi at theatsgroup.com Wed Apr 3 20:34:59 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 3 Apr 2019 19:34:59 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: Message-ID: Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. 
We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. 
> > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 3 21:22:33 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 20:22:33 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: , Message-ID: <5fd0776a85e94948b71770f8574e54ae@mdanderson.org> We had Lab Services do our installs and integrations. Learning curve for them, and we uncovered some deficiencies in the TDA, but it did work. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Prasad Surampudi Sent: Wednesday, April 3, 2019 2:34:59 PM To: gpfsug-discuss-request at spectrumscale.org; gpfsug-discuss at spectrumscale.org Subject: [EXT] Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? 
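On the multi-cluster point raised just above: if the storage did end up in its own Scale cluster, the remote mount itself is fairly mechanical. A minimal sketch, with made-up cluster, node and filesystem names (not from this thread), and with the public keys from /var/mmfs/ssl/id_rsa.pub exchanged between the two clusters beforehand, looks roughly like this:

    # On the storage (owning) cluster
    mmauth genkey new
    mmauth update . -l AUTHONLY          # may require the daemon to be down, check the docs for your level
    mmauth add compute.example.com -k /tmp/compute_id_rsa.pub
    mmauth grant compute.example.com -f gpfs0 -a rw

    # On the compute (accessing) cluster
    mmauth genkey new
    mmremotecluster add storage.example.com -n essio1,essio2 -k /tmp/storage_id_rsa.pub
    mmremotefs add rgpfs0 -f gpfs0 -C storage.example.com -T /gpfs/rgpfs0
    mmmount rgpfs0 -a

The single namespace for the SMB shares is preserved either way, since the protocol nodes see the same filesystem whether it is mounted locally or via a remote cluster mount.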
Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. 
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Apr 3 21:34:37 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 3 Apr 2019 22:34:37 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: Message-ID: It doesn?t seem to be documented anywhere, but we can add ESS to nok-ESS clusters. It?s mostly just following the QDG, skipping the gssgencluster step. Just beware that it will take down your current cluster when doing the first ?gssgenclusterrgs?. This is to change quite a few config settings ? it recently caugth me by surprise :-/ Involve IBM lab services, and we should be able to help :-) -jf ons. 3. apr. 2019 kl. 21:35 skrev Prasad Surampudi < prasad.surampudi at theatsgroup.com>: > Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. > We also have protocol nodes for SMB access to SAS applications/users. Now, > we are planning to gradually move our cluster from V7000/Flash to ESS and > retire V7Ks. So, when we grow our filesystem, we are thinking of adding an > ESS as an additional block of storage instead of adding another V7000. > Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in > the same filesystem, but can't create a new filesystem as we want to have > single name space for our SMB Shares. Also, we'd like keep all our existing > compute, protocol, and NSD servers all in the same scale cluster along with > ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option > of adding ESS nodes to existing cluster like mmaddnode or similar > commands. So, just wondering how we could add ESS IO nodes to existing > cluster like any other node..is running mmaddnode command on ESS possible? > Also, looks like it's against the IBMs recommendation of separating the > Storage, Compute and Protocol nodes into their own scale clusters and use > cross-cluster filesystem mounts..any comments/suggestions? > > Prasad Surampudi > > The ATS Group > > > > ------------------------------ > *From:* gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> on behalf of > gpfsug-discuss-request at spectrumscale.org < > gpfsug-discuss-request at spectrumscale.org> > *Sent:* Wednesday, April 3, 2019 2:54 PM > *To:* gpfsug-discuss at spectrumscale.org > *Subject:* gpfsug-discuss Digest, Vol 87, Issue 4 > > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) > 2. 
New ESS install - Network adapter down level (Oesterlin, Robert) > 3. Re: New ESS install - Network adapter down level > (Jan-Frode Myklebust) > 4. Re: New ESS install - Network adapter down level > (Stephen R Buchanan) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 3 Apr 2019 16:41:32 +0000 > From: "Sanchez, Paul" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Message-ID: > Content-Type: text/plain; charset="us-ascii" > > > note though you can't have GNR based vdisks (ESS/DSS-G) in the same > storage pool. > > At one time there was definitely a warning from IBM in the docs about not > mixing big-endian and little-endian GNR in the same cluster/filesystem. > But at least since Nov 2017, IBM has published videos showing clusters > containing both. (In my opinion, they had to support this because they > changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for Scale > itself, I can confirm that filesystems can contain NSDs which are provided > by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN > storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) > > -Paul > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org < > gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Simon Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB > storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't have > GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks then > you are going to have to have a new filesystem and copy data. So it depends > what your endgame is really. We just did such a process and one of my > colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of > prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] > Sent: 03 April 2019 17:12 > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum Scale > cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. 
Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > ------------------------------ > > Message: 2 > Date: Wed, 3 Apr 2019 17:25:54 +0000 > From: "Oesterlin, Robert" > To: gpfsug main discussion list > Subject: [gpfsug-discuss] New ESS install - Network adapter down level > Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> > Content-Type: text/plain; charset="utf-8" > > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected > 12.23.8010, net adapter count: 4 > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > 507-269-0413 > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/42850e8f/attachment-0001.html > > > > ------------------------------ > > Message: 3 > Date: Wed, 3 Apr 2019 20:11:45 +0200 > From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down > level > Message-ID: > FrPJyB-36ZJX7w at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Have you tried: > > updatenode nodename -P gss_ofed > > But, is this the known issue listed in the qdg? > > > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > > > -jf > > ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < > Robert.Oesterlin at nuance.com>: > > > Any insight on what command I need to fix this? It?s the only error I > have > > when running gssinstallcheck. > > > > > > > > [ERROR] Network adapter > > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > > > > > > > Bob Oesterlin > > > > Sr Principal Storage Engineer, Nuance > > > > 507-269-0413 > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/fad4ff57/attachment-0001.html > > > > ------------------------------ > > Message: 4 > Date: Wed, 3 Apr 2019 18:54:00 +0000 > From: "Stephen R Buchanan" > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down > level > Message-ID: > < > OFBD2A098D.0085093E-ON002583D1.0066D1E2-002583D1.0067D25D at notes.na.collabserv.com > > > > Content-Type: text/plain; charset="us-ascii" > > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190403/09e229d1/attachment.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 87, Issue 4 > ********************************************* > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
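For the narrower question of getting the ESS building block into the existing cluster at all: as Jan-Frode says above, the ESS nodes can join an existing non-ESS cluster by following the Quick Deployment Guide against that cluster and skipping the gssgencluster step, with the caveat he gives about the first gssgenclusterrgs run changing cluster-wide settings. The Scale-side part of that is ordinary node addition; a hedged sketch with made-up node names (take the exact steps from the QDG for the release actually deployed, and involve IBM Lab Services as suggested):

    # From an existing admin node, add the EMS and I/O servers to the current cluster
    mmaddnode -N ems1,essio1,essio2
    mmchlicense server --accept -N ems1,essio1,essio2

    # Keep the ESS servers in their own node class so ESS-specific tuning stays scoped to them
    mmcrnodeclass ess_ppc64_nodes -N essio1,essio2
    # Example value only -- take the real settings from the ESS deployment, not from this sketch
    mmchconfig pagepool=64G -N ess_ppc64_nodes

    # After the recovery groups and vdisks are created via the ESS tooling, verify the building block
    gssinstallcheck -N essio1,essio2

The recovery-group and vdisk creation itself (gssgenclusterrgs/gssgenvdisks, or mmvdisk on newer releases) is the step to schedule carefully, for the reason given above.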
URL: From jfosburg at mdanderson.org Wed Apr 3 21:38:45 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 20:38:45 +0000 Subject: [gpfsug-discuss] [EXT] Re: gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: , Message-ID: <416a73c67b594e89b734e1f2229c159c@mdanderson.org> Adding ESSes did not bring our clusters down. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Jan-Frode Myklebust Sent: Wednesday, April 3, 2019 3:34:37 PM To: gpfsug main discussion list Cc: gpfsug-discuss-request at spectrumscale.org Subject: [EXT] Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. It doesn?t seem to be documented anywhere, but we can add ESS to nok-ESS clusters. It?s mostly just following the QDG, skipping the gssgencluster step. Just beware that it will take down your current cluster when doing the first ?gssgenclusterrgs?. This is to change quite a few config settings ? it recently caugth me by surprise :-/ Involve IBM lab services, and we should be able to help :-) -jf ons. 3. apr. 2019 kl. 21:35 skrev Prasad Surampudi >: Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org > on behalf of gpfsug-discuss-request at spectrumscale.org > Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. 
New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: > Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. 
Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" > To: gpfsug main discussion list > Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: > Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" > To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: > Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. 
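For readers following the question above about whether mmaddnode can be used to join ESS I/O nodes to an existing Scale cluster: at the command level it is the same mmaddnode/mmchlicense flow as for any other server node, with the ESS-specific tuning applied afterwards. The sketch below is only an illustration under stated assumptions: the node names and the node-class name are placeholders, and the real configuration values must come from the ESS deployment procedure (and, as suggested above, IBM Lab Services), since gssgenclusterrgs can change cluster-wide settings.

    # All names below are placeholders: essio1-hs/essio2-hs are the ESS I/O
    # nodes, ems1-hs the EMS node, ess_gl6 an arbitrary node-class name.

    # 1. Join the ESS nodes to the existing cluster and license them as servers.
    mmaddnode -N essio1-hs,essio2-hs,ems1-hs
    mmchlicense server --accept -N essio1-hs,essio2-hs,ems1-hs

    # 2. Keep the building block in its own node class so its tuning does not
    #    spill onto the existing NSD, protocol and compute nodes.
    mmcrnodeclass ess_gl6 -N essio1-hs,essio2-hs

    # 3. Apply the ESS/GNR tuning to that node class only. The value here is
    #    illustrative; take the real parameter list from a tested, isolated
    #    build of the same building block, as described earlier in the thread.
    mmchconfig pagepool=64G -N ess_gl6

    # 4. Verify membership and daemon state before creating recovery groups
    #    and vdisks on the new nodes.
    mmlscluster
    mmgetstate -N ess_gl6

Whether ESS and OEM-licensed (GSS/DSS-G) building blocks may share one cluster is a separate licensing question, discussed further down this thread.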
-------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 4 08:48:18 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Thu, 04 Apr 2019 08:48:18 +0100 Subject: [gpfsug-discuss] Slack workspace Message-ID: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> We?ve been pondering for a while (quite a long while actually!) adding a slack workspace for the user group. That?s not to say I want to divert traffic from the mailing list, but maybe it will be useful for some people. Please don?t feel compelled to join the slack workspace, but if you want to join, then there?s a link on: https://www.spectrumscaleug.org/join/ to get an invite. I know there are a lot of IBM people on the mailing list, and they often reply off-list to member posts (which I appreciate!), so please still use the mailing list for questions, but maybe there are some discussions that will work better on slack ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Lehmann at csiro.au Thu Apr 4 08:56:07 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Thu, 4 Apr 2019 07:56:07 +0000 Subject: [gpfsug-discuss] Slack workspace In-Reply-To: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> References: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> Message-ID: It?s worth a shot. We have one for Australian HPC sysadmins that seems quite popular (with its own GPFS channel.) There is also a SigHPC slack for a more international flavour that came a bit later. People tend to use it for p2p comms when at conferences as well. From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (Spectrum Scale User Group Chair) Sent: Thursday, April 4, 2019 5:48 PM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Slack workspace We?ve been pondering for a while (quite a long while actually!) adding a slack workspace for the user group. That?s not to say I want to divert traffic from the mailing list, but maybe it will be useful for some people. Please don?t feel compelled to join the slack workspace, but if you want to join, then there?s a link on: https://www.spectrumscaleug.org/join/ to get an invite. I know there are a lot of IBM people on the mailing list, and they often reply off-list to member posts (which I appreciate!), so please still use the mailing list for questions, but maybe there are some discussions that will work better on slack ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 4 14:48:35 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 4 Apr 2019 13:48:35 +0000 Subject: [gpfsug-discuss] Agenda - Spectrum Scale UG meeting, April 16-17th, NCAR, Boulder Message-ID: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> Registration is only open for a few more days! Register here: https://www.eventbrite.com/e/spectrum-scale-gpfs-user-group-us-spring-2019-meeting-tickets-57035376346 (directions, locations, and suggested hotels) Breakfast, Lunch included (free of charge) and Evening social event at NCAR! Here is the ?final? 
agenda: Tuesday, April 16th 8:30 9:00 Registration and Networking 9:00 9:20 Welcome Kristy Kallback-Rose / Bob Oesterlin (Chair) / Ted Hoover (IBM) 9:20 9:45 Spectrum Scale: The past, the present, the future Wayne Sawdon (IBM) 9:45 10:10 Accelerating AI workloads with IBM Spectrum Scale Ted Hoover (IBM) 10:10 10:30 Nvidia: Doing large scale AI faster ? scale innovation with multi-node-multi-GPU computing and a scaling data pipeline Jacci Cenci (Nvidia) 10:30 11:00 Coffee and Networking n/a 11:00 11:25 AI ecosystem and solutions with IBM Spectrum Scale Piyush Chaudhary (IBM) 11:25 11:45 Customer Talk / Partner Talk TBD 11:45 12:00 Meet the devs Ulf Troppens (IBM) 12:00 13:00 Lunch and Networking 13:00 13:30 Spectrum Scale Update Puneet Chauhdary (IBM) 13:30 13:45 ESS Update Puneet Chauhdary (IBM) 13:45 14:00 Support Update Bob Simon (IBM) 14:00 14:30 Memory Consumption in Spectrum Scale Tomer Perry (IBM) 14:30 15:00 Coffee and Networking n/a 15:00 15:20 New HPC Usage Model @ J?lich: Multi PB User Data Migration Martin Lischewski (Forschungszentrum J?lich) 15:20 15:40 Open discussion: large scale data migration All 15:40 16:00 Container & Cloud Update Ted Hoover (IBM) 16:00 16:20 Towards Proactive Service with Call Home Ulf Troppens (IBM) 16:20 16:30 Break 16:30 17:00 Advanced metadata management with Spectrum Discover Deepavali Bhagwat (IBM) 17:00 17:20 High Performance Tier Tomer Perry (IBM) 17:20 18:00 Meet the Devs - Ask us Anything All 18:00 20:00 Get Together n/a 13:00 - 17:15 Breakout Session: Getting Started with Spectrum Scale Wednesday, April 17th 8:30 9:00 Coffee und Networking n/a 8:30 9:00 Spectrum Scale Licensing Carl Zetie (IBM) 9:00 10:00 "Spectrum Scale Use Cases (Beginner) Spectrum Scale Protocols (Overview) (Beginner)" Spectrum Scale backup and SOBAR Chris Maestas (IBM) Getting started with AFM (Advanced) Venkat Puvva (IBM) 10:00 11:00 How to design a Spectrum Scale environment? (Beginner) Tomer Perry (IBM) Spectrum Scale on Google Cloud Jeff Ceason (IBM) Spectrum Scale Trial VM Spectrum Scale Vagrant" "Chris Maestas (IBM Ulf Troppens (IBM)" 11:00 12:00 "Spectrum Scale GUI (Beginner) Spectrum Scale REST API (Beginner)" "Chris Maestas (IBM) Spectrum Scale Network flow Tomer Perry (IBM) Spectrum Scale Watch Folder (Advanced) Spectrum Scale File System Audit Logging "Deepavali Bhagwat (IBM) 12:00 13:00 Lunch and Networking n/a 13:00 13:20 Sponsor Talk: Excelero TBD 13:20 13:40 AWE site update Paul Tomlinson (AWE) 13:40 14:00 Sponsor Talk: Lenovo Ray Padden (Lenovo) 14:00 14:30 Coffee and Networking n/a 14:30 15:00 TCT Update Rob Basham 15:00 15:30 AFM Update Venkat Puvva (IBM) 15:30 15:50 New Storage Options for Spectrum Scale Carl Zetie (IBM) 15:50 16:00 Wrap-up Kristy Kallback-Rose / Bob Oesterlin Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Sat Apr 6 15:11:53 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sat, 6 Apr 2019 14:11:53 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: Message-ID: There is a non-technical issue you may need to consider. IBM has set licensing rules about mixing in the same Spectrum Scale cluster both ESS from IBM and 3rd party storage that is licensed under ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). I am sure Carl Zetie or other IBMers who watch this list can explain the exact restrictions. 
Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum NAS and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 3 Apr 2019, at 19:47, Sanchez, Paul wrote: >> note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) > > -Paul > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] > Sent: 03 April 2019 17:12 > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. 
Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=qihHkHSqt2rVgrBVDaeGaUrYw-BMlNQ6AQ1EU7EtYr0&s=EANfMzGKOlziRRZj0X9jkK-7HsqY_MkWwZgA5OXOiCo&e= > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=qihHkHSqt2rVgrBVDaeGaUrYw-BMlNQ6AQ1EU7EtYr0&s=EANfMzGKOlziRRZj0X9jkK-7HsqY_MkWwZgA5OXOiCo&e= > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.zacek77 at gmail.com Sat Apr 6 22:50:53 2019 From: m.zacek77 at gmail.com (Michal Zacek) Date: Sat, 6 Apr 2019 23:50:53 +0200 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL Message-ID: Hello, we decided to convert NFS4 acl to POSIX (we need share same data between SMB, NFS and GPFS clients), so I created script to convert NFS4 to posix ACL. It is very simple, first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that conversion to posix acl has taken more then 2TB of metadata space.There is about one hundred million files at GPFS filesystem. Is this expected behavior? Thanks, Michal Example of NFS4 acl: #NFSv4 ACL #owner:root #group:root special:owner@:rwx-:allow (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED group:ag_cud_96_lab:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED group:ag_cud_96_lab_ro:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED converted to posix acl: # owner: root # group: root user::rwx group::rwx mask::rwx other::--- default:user::rwx default:group::rwx default:mask::rwx default:other::--- group:ag_cud_96_lab:rwx default:group:ag_cud_96_lab:rwx group:ag_cud_96_lab_ro:r-x default:group:ag_cud_96_lab_ro:r-x -------------- next part -------------- An HTML attachment was scrubbed... 
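The conversion script itself is only summarised above ("chmod -R 770" followed by "setfacl -R ....."), so the exact setfacl arguments are not shown. A hedged reconstruction that would produce the POSIX ACL listed in the example might look like the sketch below; the path is a placeholder, the group names are taken from the example, and default (inheritable) entries are applied to directories only, mirroring the FileInherit/DirInherit flags of the original NFSv4 ACL. It says nothing about why the conversion consumed roughly 2 TB of metadata; it only illustrates the operation being discussed.

    DIR=/gpfs/fs1/lab96            # placeholder path

    # Collapse the NFSv4 ACL down to plain mode bits first.
    chmod -R 770 "$DIR"

    # Grant the two project groups through POSIX access ACL entries...
    setfacl -R -m g:ag_cud_96_lab:rwx -m g:ag_cud_96_lab_ro:rx "$DIR"

    # ...and add matching default entries on directories so new files and
    # subdirectories inherit them.
    find "$DIR" -type d -exec setfacl \
        -m d:g:ag_cud_96_lab:rwx -m d:g:ag_cud_96_lab_ro:rx {} +

    # Spot-check the result on the top-level directory.
    getfacl "$DIR"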
URL: From richard.rupp at us.ibm.com Sun Apr 7 16:26:14 2019 From: richard.rupp at us.ibm.com (RICHARD RUPP) Date: Sun, 7 Apr 2019 11:26:14 -0400 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: This has been publically documented in the Spectrum Scale FAQ Q13.17, Q13.18 and Q13.19. Regards, Richard Rupp, Sales Specialist, Phone: 1-347-510-6746 From: "Daniel Kidger" To: "gpfsug main discussion list" Date: 04/06/2019 10:12 AM Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org There is a non-technical issue you may need to consider. IBM has set licensing rules about mixing in the same Spectrum Scale cluster both ESS from IBM and 3rd party storage that is licensed under ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). I am sure Carl Zetie or other IBMers who watch this list can explain the exact restrictions. Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum NAS and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 3 Apr 2019, at 19:47, Sanchez, Paul wrote: note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org < gpfsug-discuss-bounces at spectrumscale.org> On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [ gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=EXL-jEd1jmdzvOIhT87C7SIqmAS9uhVQ6J3kObct4OY&m=3KUx-vFPoAlAOV8zt_7RCV5o1kvr5LobB3JxXuR5-Rg&s=qsN98nblbvXfi2y1V40IAjyT_8DY3bwqk9pon-auNw4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kkr at lbl.gov Mon Apr 8 19:05:22 2019 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Mon, 8 Apr 2019 11:05:22 -0700 Subject: [gpfsug-discuss] Registration DEADLINE April 9 - Spectrum Scale UG meeting, April 16-17th, NCAR, Boulder In-Reply-To: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> References: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> Message-ID: <1963C600-822D-4755-8C6F-AF03B5E67162@lbl.gov> Do you like free food? OK, maybe your school days are long gone, but who doesn?t like free food? We need to give the catering folks a head count, so we will close registration tomorrow evening, April 9. So register now for the Boulder GPFS/Spectrum Scale User Group Event (link and agenda below). This is your chance to give IBM feedback and discuss GPFS with your fellow storage admins and IBMers. We?d love to hear your participation in the discussions. Best, Kristy > On Apr 4, 2019, at 6:48 AM, Oesterlin, Robert wrote: > > Registration is only open for a few more days! > > Register here: https://www.eventbrite.com/e/spectrum-scale-gpfs-user-group-us-spring-2019-meeting-tickets-57035376346 (directions, locations, and suggested hotels) > > Breakfast, Lunch included (free of charge) and Evening social event at NCAR! > > Here is the ?final? agenda: > > Tuesday, April 16th > 8:30 9:00 Registration and Networking > 9:00 9:20 Welcome Kristy Kallback-Rose / Bob Oesterlin (Chair) / Ted Hoover (IBM) > 9:20 9:45 Spectrum Scale: The past, the present, the future Wayne Sawdon (IBM) > 9:45 10:10 Accelerating AI workloads with IBM Spectrum Scale Ted Hoover (IBM) > 10:10 10:30 Nvidia: Doing large scale AI faster ? 
scale innovation with multi-node-multi-GPU computing and a scaling data pipeline Jacci Cenci (Nvidia) > 10:30 11:00 Coffee and Networking n/a > 11:00 11:25 AI ecosystem and solutions with IBM Spectrum Scale Piyush Chaudhary (IBM) > 11:25 11:45 Customer Talk / Partner Talk TBD > 11:45 12:00 Meet the devs Ulf Troppens (IBM) > 12:00 13:00 Lunch and Networking > 13:00 13:30 Spectrum Scale Update Puneet Chauhdary (IBM) > 13:30 13:45 ESS Update Puneet Chauhdary (IBM) > 13:45 14:00 Support Update Bob Simon (IBM) > 14:00 14:30 Memory Consumption in Spectrum Scale Tomer Perry (IBM) > 14:30 15:00 Coffee and Networking n/a > 15:00 15:20 New HPC Usage Model @ J?lich: Multi PB User Data Migration Martin Lischewski (Forschungszentrum J?lich) > 15:20 15:40 Open discussion: large scale data migration All > 15:40 16:00 Container & Cloud Update Ted Hoover (IBM) > 16:00 16:20 Towards Proactive Service with Call Home Ulf Troppens (IBM) > 16:20 16:30 Break > 16:30 17:00 Advanced metadata management with Spectrum Discover Deepavali Bhagwat (IBM) > 17:00 17:20 High Performance Tier Tomer Perry (IBM) > 17:20 18:00 Meet the Devs - Ask us Anything All > 18:00 20:00 Get Together n/a > > 13:00 - 17:15 Breakout Session: Getting Started with Spectrum Scale > > Wednesday, April 17th > 8:30 9:00 Coffee und Networking n/a > 8:30 9:00 Spectrum Scale Licensing Carl Zetie (IBM) > 9:00 10:00 "Spectrum Scale Use Cases (Beginner) > Spectrum Scale Protocols (Overview) (Beginner)" > Spectrum Scale backup and SOBAR Chris Maestas (IBM) > Getting started with AFM (Advanced) Venkat Puvva (IBM) > 10:00 11:00 How to design a Spectrum Scale environment? (Beginner) Tomer Perry (IBM) > Spectrum Scale on Google Cloud Jeff Ceason (IBM) > Spectrum Scale Trial VM > Spectrum Scale Vagrant" "Chris Maestas (IBM Ulf Troppens (IBM)" > 11:00 12:00 "Spectrum Scale GUI (Beginner) > Spectrum Scale REST API (Beginner)" "Chris Maestas (IBM) > Spectrum Scale Network flow Tomer Perry (IBM) > Spectrum Scale Watch Folder (Advanced) > Spectrum Scale File System Audit Logging "Deepavali Bhagwat (IBM) > 12:00 13:00 Lunch and Networking n/a > 13:00 13:20 Sponsor Talk: Excelero TBD > 13:20 13:40 AWE site update Paul Tomlinson (AWE) > 13:40 14:00 Sponsor Talk: Lenovo Ray Padden (Lenovo) > 14:00 14:30 Coffee and Networking n/a > 14:30 15:00 TCT Update Rob Basham > 15:00 15:30 AFM Update Venkat Puvva (IBM) > 15:30 15:50 New Storage Options for Spectrum Scale Carl Zetie (IBM) > 15:50 16:00 Wrap-up Kristy Kallback-Rose / Bob Oesterlin > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > 507-269-0413 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Apr 10 15:35:57 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 10 Apr 2019 14:35:57 +0000 Subject: [gpfsug-discuss] Follow-up: ESS File systems Message-ID: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> I?m trying to finalize my file system configuration for production. I?ll be moving 3-3.5B files from my legacy storage to ESS (about 1.8PB). The legacy file systems are block size 256k, 8k subblocks. Target ESS is a GL4, 8TB drives (2.2PB using 8+2p) For file systems configured on the ESS, the vdisk block size must equal the file system block size. Using 8+2p, the smallest block size is 512K. 
Looking at the overall file size histogram, a block size of 1MB might be a good compromise in efficiency and sub block size (32k subblock). With 4K inodes, somewhere around 60-70% of the current files end up in inodes. Of the files in the range 4k-32K, those are the ones that would potentially ?waste? some space because they are smaller than the sub block but too big for an inode. That?s roughly 10-15% of the files. This ends up being a compromise because of our inability to use the V5 file system format (clients still at CentOS 6/Scale 4.2.3). For metadata, the file systems are currently using about 15TB of space (replicated, across roughly 1.7PB usage). This represents a mix of 256b and 4k inodes (70% 256b). Assuming a 8x increase the upper limit of needs would be 128TB. Since some of that is already in 4K inodes, I feel an allocation of 90-100 TB (4-5% of data space) is closer to reality. I don?t know if having a separate metadata pool makes sense if I?m using the V4 format, in which the block size of metadata and data is the same. Summary, I think the best options are: Option (1): 2 file systems of 1PB each. 1PB data pool, 50TB system pool, 1MB block size, 2x replicated metadata Option (2): 2 file systems of 1PB each. 1PB data/metadata pool, 1MB block size, 2x replicated metadata (preferred, then I don?t need to manage my metadata space) Any thoughts would be appreciated. Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 10 18:57:32 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 10 Apr 2019 13:57:32 -0400 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. -------------- next part -------------- An HTML attachment was scrubbed... URL: From TOMP at il.ibm.com Wed Apr 10 21:11:17 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Wed, 10 Apr 2019 23:11:17 +0300 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: Its also important to look into the actual space "wasted" by the "subblock mismatch". For example, a snip from a filehist output I've found somewhere: File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 2M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 1297314 2.65% 0.00% 0.00% 1 34014892 72.11% 0.74% 0.59% 2 2217365 76.64% 0.84% 0.67% 3 1967998 80.66% 0.96% 0.77% 4 798170 82.29% 1.03% 0.83% 5 1518258 85.39% 1.20% 0.96% 6 581539 86.58% 1.27% 1.02% 7 659969 87.93% 1.37% 1.10% 8 1178798 90.33% 1.58% 1.27% 9 189220 90.72% 1.62% 1.30% 10 130197 90.98% 1.64% 1.32% So, 72% of the files are smaller then 1 subblock ( 2M in the above case BTW). 
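(The same worst-case arithmetic, continued just below, can also be approximated directly from a list of file sizes when no filehist output is at hand. The snippet is a rough sketch only: the input file and the two subblock sizes are placeholders, chosen to match the 256K/8K legacy layout and the 1M/32K candidate layout mentioned earlier, and files small enough to live in the inode are not excluded, so it overstates the real cost.)

    # One file size in bytes per line, e.g.:
    #   find /gpfs/fs1 -type f -printf '%s\n' > filesizes.txt   (GNU find)
    SUB_OLD=8192      # 256K block size / 32 subblocks (legacy file system)
    SUB_NEW=32768     # 1M block size / 32 subblocks (candidate layout)

    awk -v so="$SUB_OLD" -v sn="$SUB_NEW" '
      function waste(sz, s) { return (sz % s) ? s - (sz % s) : 0 }
      { extra += waste($1, sn) - waste($1, so); total += $1 }
      END { printf "extra worst-case waste: %.1f GiB (%.2f%% of %.1f GiB)\n",
                   extra / 2^30, 100 * extra / total, total / 2^30 }
    ' filesizes.txt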
If, for example, we'll double it - we will "waste" ~76% of the files, and if we'll push it to 16M it will be ~90% of the files... But, we really care about capacity, right? So, going into the 16M extreme, we'll "waste" 1.58% of the capacity ( worst case of course). So, if it will give you ( highly depends on the workload of course) 4X the performance ( just for the sake of discussion) - will it be OK to pay the 1.5% "premium" ? Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Marc A Kaplan" To: gpfsug main discussion list Date: 10/04/2019 20:57 Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems Sent by: gpfsug-discuss-bounces at spectrumscale.org If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=pKTwc3LbUTao8mMRXJzrpTnBdOxO9b7mRlJZiUHOof4&s=YHGve_DLxkWdwq7yiDHjBvXoHmwLkUh7zBiK7LUpmsw&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From TOMP at il.ibm.com Wed Apr 10 21:19:15 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Wed, 10 Apr 2019 23:19:15 +0300 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: Just to clarify - its 2M block size, so 64k subblock size. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Tomer Perry" To: gpfsug main discussion list Date: 10/04/2019 23:11 Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems Sent by: gpfsug-discuss-bounces at spectrumscale.org Its also important to look into the actual space "wasted" by the "subblock mismatch". For example, a snip from a filehist output I've found somewhere: File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 2M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 1297314 2.65% 0.00% 0.00% 1 34014892 72.11% 0.74% 0.59% 2 2217365 76.64% 0.84% 0.67% 3 1967998 80.66% 0.96% 0.77% 4 798170 82.29% 1.03% 0.83% 5 1518258 85.39% 1.20% 0.96% 6 581539 86.58% 1.27% 1.02% 7 659969 87.93% 1.37% 1.10% 8 1178798 90.33% 1.58% 1.27% 9 189220 90.72% 1.62% 1.30% 10 130197 90.98% 1.64% 1.32% So, 72% of the files are smaller then 1 subblock ( 2M in the above case BTW). If, for example, we'll double it - we will "waste" ~76% of the files, and if we'll push it to 16M it will be ~90% of the files... But, we really care about capacity, right? So, going into the 16M extreme, we'll "waste" 1.58% of the capacity ( worst case of course). 
So, if it will give you ( highly depends on the workload of course) 4X the performance ( just for the sake of discussion) - will it be OK to pay the 1.5% "premium" ? Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: "Marc A Kaplan" To: gpfsug main discussion list Date: 10/04/2019 20:57 Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems Sent by: gpfsug-discuss-bounces at spectrumscale.org If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=qbhRxpvXiJPC72GAztszQ27LP3W7o1nmJYNV1rP2k2U&s=T5j2wkoj3NuxnK-RAMPlSc9vYHIViTOe8hGF68u5VsU&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri Apr 12 10:38:32 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 12 Apr 2019 09:38:32 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0946DFC72F618f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: From jose.filipe.higino at gmail.com Fri Apr 12 11:52:21 2019 From: jose.filipe.higino at gmail.com (=?UTF-8?Q?Jos=C3=A9_Filipe_Higino?=) Date: Fri, 12 Apr 2019 22:52:21 +1200 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: Does not this depend on the License type... Being licensed by data... gives you the ability to spin as much client nodes as possible... including to the ESS cluster right? On Fri, 12 Apr 2019 at 21:38, Daniel Kidger wrote: > > Yes I am aware of the FAQ, and it particular Q13.17 which says: > > *No, systems from OEM vendors are considered distinct products even when > they embed IBM Spectrum Scale. They cannot be part of the same cluster as > IBM licenses.* > > But if this statement is taken literally, then once a customer has bought > say a Lenovo GSS/DSS-G, they are then "locked-in" to buying more storage > other OEM/ESA partners (Lenovo, Bull, DDN, etc.), as above statement > suggests that they cannot add IBM storage such as ESS to their GPFS cluster. 
> > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "RICHARD RUPP" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Sun, Apr 7, 2019 4:49 PM > > > *This has been publically documented in the Spectrum Scale FAQ Q13.17, > Q13.18 and Q13.19.* > > Regards, > > *Richard Rupp*, Sales Specialist, *Phone:* *1-347-510-6746* > > > [image: Inactive hide details for "Daniel Kidger" ---04/06/2019 10:12:12 > AM---There is a non-technical issue you may need to consider.]"Daniel > Kidger" ---04/06/2019 10:12:12 AM---There is a non-technical issue you may > need to consider. IBM has set licensing rules about mixing in > > From: "Daniel Kidger" > To: "gpfsug main discussion list" > Date: 04/06/2019 10:12 AM > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > There is a non-technical issue you may need to consider. > IBM has set licensing rules about mixing in the same Spectrum Scale > cluster both ESS from IBM and 3rd party storage that is licensed under > ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). > > I am sure Carl Zetie or other IBMers who watch this list can explain the > exact restrictions. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > *+* <+44-7818%20522%20266>*44-(0)7818 522 266* <+44-7818%20522%20266> > *daniel.kidger at uk.ibm.com* > > > > On 3 Apr 2019, at 19:47, Sanchez, Paul <*Paul.Sanchez at deshaw.com* > > wrote: > > - > - > - > - note though you can't have GNR based vdisks (ESS/DSS-G) in > the same storage pool. > > At one time there was definitely a warning from IBM in the docs > about not mixing big-endian and little-endian GNR in the same > cluster/filesystem. But at least since Nov 2017, IBM has published videos > showing clusters containing both. (In my opinion, they had to support this > because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for > Scale itself, I can confirm that filesystems can contain NSDs which are > provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with > SAN storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) 
> > -Paul > > -----Original Message----- > From: *gpfsug-discuss-bounces at spectrumscale.org* > < > *gpfsug-discuss-bounces at spectrumscale.org* > > On Behalf Of Simon > Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list <*gpfsug-discuss at spectrumscale.org* > > > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other > SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't > have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks > then you are going to have to have a new filesystem and copy data. So it > depends what your endgame is really. We just did such a process and one of > my colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: *gpfsug-discuss-bounces at spectrumscale.org* > [ > *gpfsug-discuss-bounces at spectrumscale.org* > ] on behalf of > *prasad.surampudi at theatsgroup.com* > [ > *prasad.surampudi at theatsgroup.com* > ] > Sent: 03 April 2019 17:12 > To: *gpfsug-discuss at spectrumscale.org* > > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum > Scale cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.1__=0ABB0946DFC72F618f9e8a93df938690918c0AB at .gif Type: image/gif Size: 105 bytes Desc: not available URL: From daniel.kidger at uk.ibm.com Fri Apr 12 12:35:38 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 12 Apr 2019 11:35:38 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16a111c26babd5baef61.gif Type: image/gif Size: 105 bytes Desc: not available URL: From jose.filipe.higino at gmail.com Fri Apr 12 14:11:59 2019 From: jose.filipe.higino at gmail.com (=?UTF-8?Q?Jos=C3=A9_Filipe_Higino?=) Date: Sat, 13 Apr 2019 01:11:59 +1200 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: got it now. Sorry, I miss understood that. I was already aware. =) On Fri, 12 Apr 2019 at 23:35, Daniel Kidger wrote: > Jose, > I was not considering client nodes at all. > Under the current license models, all licenses are capacity based (in two > flavours: per-TiB or per-disk), and so adding new clients is never a > licensing issue. > My point was that if you own an OEM supplied cluster from say Lenovo, you > can add to that legally from many vendors , just not from IBM themselves. > (or maybe the FAQ rules need further clarification?) > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "Jos? Filipe Higino" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Fri, Apr 12, 2019 11:52 AM > > Does not this depend on the License type... > > Being licensed by data... gives you the ability to spin as much client > nodes as possible... including to the ESS cluster right? > > On Fri, 12 Apr 2019 at 21:38, Daniel Kidger > wrote: > > > Yes I am aware of the FAQ, and it particular Q13.17 which says: > > *No, systems from OEM vendors are considered distinct products even when > they embed IBM Spectrum Scale. They cannot be part of the same cluster as > IBM licenses.* > > But if this statement is taken literally, then once a customer has bought > say a Lenovo GSS/DSS-G, they are then "locked-in" to buying more storage > other OEM/ESA partners (Lenovo, Bull, DDN, etc.), as above statement > suggests that they cannot add IBM storage such as ESS to their GPFS cluster. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "RICHARD RUPP" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Sun, Apr 7, 2019 4:49 PM > > > *This has been publically documented in the Spectrum Scale FAQ Q13.17, > Q13.18 and Q13.19.* > > Regards, > > *Richard Rupp*, Sales Specialist, *Phone:* *1-347-510-6746* > > > [image: Inactive hide details for "Daniel Kidger" ---04/06/2019 10:12:12 > AM---There is a non-technical issue you may need to consider.]"Daniel > Kidger" ---04/06/2019 10:12:12 AM---There is a non-technical issue you may > need to consider. 
IBM has set licensing rules about mixing in > > From: "Daniel Kidger" > To: "gpfsug main discussion list" > Date: 04/06/2019 10:12 AM > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > There is a non-technical issue you may need to consider. > IBM has set licensing rules about mixing in the same Spectrum Scale > cluster both ESS from IBM and 3rd party storage that is licensed under > ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). > > I am sure Carl Zetie or other IBMers who watch this list can explain the > exact restrictions. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > *+* <+44-7818%20522%20266>*44-(0)7818 522 266* <+44-7818%20522%20266> > *daniel.kidger at uk.ibm.com* > > > > On 3 Apr 2019, at 19:47, Sanchez, Paul <*Paul.Sanchez at deshaw.com* > > wrote: > > - > - > - > - note though you can't have GNR based vdisks (ESS/DSS-G) in > the same storage pool. > > At one time there was definitely a warning from IBM in the docs > about not mixing big-endian and little-endian GNR in the same > cluster/filesystem. But at least since Nov 2017, IBM has published videos > showing clusters containing both. (In my opinion, they had to support this > because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for > Scale itself, I can confirm that filesystems can contain NSDs which are > provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with > SAN storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) > > -Paul > > -----Original Message----- > From: *gpfsug-discuss-bounces at spectrumscale.org* > < > *gpfsug-discuss-bounces at spectrumscale.org* > > On Behalf Of Simon > Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list <*gpfsug-discuss at spectrumscale.org* > > > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other > SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't > have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks > then you are going to have to have a new filesystem and copy data. So it > depends what your endgame is really. We just did such a process and one of > my colleagues is going to talk about it at the London user group in May. 
> > Simon > ________________________________________ > From: *gpfsug-discuss-bounces at spectrumscale.org* > [ > *gpfsug-discuss-bounces at spectrumscale.org* > ] on behalf of > *prasad.surampudi at theatsgroup.com* > [ > *prasad.surampudi at theatsgroup.com* > ] > Sent: 03 April 2019 17:12 > To: *gpfsug-discuss at spectrumscale.org* > > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum > Scale cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16a111c26babd5baef61.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Fri Apr 12 19:59:45 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 12 Apr 2019 18:59:45 +0000 Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Message-ID: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Anyone care to tell me why this is failing or how I can do further debug. Cluster is otherwise healthy. 
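A reasonable first pass on such a warning is to ask the system health monitor what it knows about the event and then re-run the refresh task by hand. The following is only a sketch, to be run on the GUI/EMS node; the --debug invocation is the same one shown in the follow-up below, where it surfaces the "Remote access is not configured" error.

    # Describe the event and check the GUI component and recent events.
    mmhealth event show gui_refresh_task_failed
    mmhealth node show GUI
    mmhealth node eventlog | tail -n 20

    # Re-run the failing refresh task with debugging enabled.
    /usr/lpp/mmfs/gui/cli/runtask filesets --debug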
Bob Oesterlin Sr Principal Storage Engineer, Nuance Time Cluster Name Reporting Node Event Name Entity Type Entity Name Severity Message 12.04.2019 13:03:26.429 nrg.gssio1-hs ems1-hs gui_refresh_task_failed NODE ems1-hs WARNING The following GUI refresh task(s) failed: FILESETS -------------- next part -------------- An HTML attachment was scrubbed... URL: From PPOD at de.ibm.com Fri Apr 12 20:05:54 2019 From: PPOD at de.ibm.com (Przemyslaw Podfigurny1) Date: Fri, 12 Apr 2019 19:05:54 +0000 Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS In-Reply-To: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> References: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962140.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962141.png Type: image/png Size: 6645 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962142.png Type: image/png Size: 1167 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Fri Apr 12 20:18:20 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 12 Apr 2019 19:18:20 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: FW: gui_refresh_task_failed : FILESETS In-Reply-To: References: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: Ah - failing because it?s checking the remote file system for information - how do I disable that? root at ems1 ~]# /usr/lpp/mmfs/gui/cli/runtask filesets --debug debug: locale=en_US debug: Running 'mmlsfileset 'fs1' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=fs1 group_by gpfs_fset_name last 13 bucket_size 300' debug: Running 'mmlsfileset 'fs1test' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=fs1test group_by gpfs_fset_name last 13 bucket_size 300' debug: Running 'mmlsfileset 'nrg5_tools' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=tools group_by gpfs_fset_name last 13 bucket_size 300' on remote cluster nrg5-gpfs.nrg5-gpfs01 err: com.ibm.fscc.zimon.unified.ZiMONException: Remote access is not configured debug: Will not raise the following event using 'mmsysmonc' since it already exists in the database: reportingNode = 'ems1-hs', eventName = 'gui_refresh_task_failed', entityId = '3', arguments = 'FILESETS', identifier = 'null' err: com.ibm.fscc.zimon.unified.ZiMONException: Remote access is not configured err: com.ibm.fscc.cli.CommandException: EFSSG1150C Running specified task was unsuccessful. 
at com.ibm.fscc.cli.CommandException.createCommandException(CommandException.java:117) at com.ibm.fscc.newcli.commands.task.CmdRunTask.doExecute(CmdRunTask.java:84) at com.ibm.fscc.newcli.internal.AbstractCliCommand.execute(AbstractCliCommand.java:156) at com.ibm.fscc.cli.CliProtocol.processNewStyleCommand(CliProtocol.java:460) at com.ibm.fscc.cli.CliProtocol.processRequest(CliProtocol.java:446) at com.ibm.fscc.cli.CliServer$CliClientServer.run(CliServer.java:97) EFSSG1150C Running specified task was unsuccessful. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: on behalf of Przemyslaw Podfigurny1 Reply-To: gpfsug main discussion list Date: Friday, April 12, 2019 at 2:06 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: [EXTERNAL] Re: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Execute the refresh task with debug option enabled on your GUI node ems1-hs to see what is the cause: /usr/lpp/mmfs/gui/cli/runtask filesets --debug Mit freundlichen Gr??en / Kind regards [cid:15550956962140] [IBM Spectrum Scale] ? ? Przemyslaw Podfigurny Software Engineer, Spectrum Scale GUI Department M069 / Spectrum Scale Software Development +49 7034 274 5403 (Office) +49 1624 159 497 (Mobile) ppod at de.ibm.com [cid:15550956962142] IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 ----- Original message ----- From: "Oesterlin, Robert" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Date: Fri, Apr 12, 2019 9:00 PM Anyone care to tell me why this is failing or how I can do further debug. Cluster is otherwise healthy. Bob Oesterlin Sr Principal Storage Engineer, Nuance Time Cluster Name Reporting Node Event Name Entity Type Entity Name Severity Message 12.04.2019 13:03:26.429 nrg.gssio1-hs ems1-hs gui_refresh_task_failed NODE ems1-hs WARNING The following GUI refresh task(s) failed: FILESETS _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1168 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 6646 bytes Desc: image002.png URL: From sandeep.patil at in.ibm.com Mon Apr 15 09:54:05 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Mon, 15 Apr 2019 14:24:05 +0530 Subject: [gpfsug-discuss] IBM Spectrum Scale Security Survey Message-ID: bcc: gpfsug-discuss at spectrumscale.org Dear Spectrum Scale User, Below is a survey link where we are seeking feedback to improve and enhance IBM Spectrum Scale. This is an anonymous survey and your participation in this survey is completely voluntary. IBM Spectrum Scale Cyber Security Survey https://www.surveymonkey.com/r/9ZNCZ75 (Average time of 4 mins with 10 simple questions). Your response is invaluable to us. Thank you and looking forward for your participation. Regards IBM Spectrum Scale Team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From PPOD at de.ibm.com Mon Apr 15 10:18:00 2019 From: PPOD at de.ibm.com (Przemyslaw Podfigurny1) Date: Mon, 15 Apr 2019 09:18:00 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: FW: gui_refresh_task_failed : FILESETS In-Reply-To: References: , <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530160.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530161.png Type: image/png Size: 6645 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530162.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: From prasad.surampudi at theatsgroup.com Tue Apr 16 13:38:34 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Tue, 16 Apr 2019 12:38:34 +0000 Subject: [gpfsug-discuss] Spectrum Scale Replication across failure groups In-Reply-To: References: , Message-ID: We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Tue Apr 16 14:15:30 2019 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 16 Apr 2019 09:15:30 -0400 Subject: [gpfsug-discuss] Spectrum Scale Replication across failure groups In-Reply-To: References: Message-ID: I believe that -1 is "special", in that all -1?s are different form each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn?t specify -1, it was likely assigned automatically. 
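To make that concrete, assigning real failure groups to the existing NSDs and then re-replicating would be roughly the following sequence. The NSD and file system names are illustrative only; as noted later in this thread, well-separated values such as 10 and 20 are preferable, and this sketch assumes the file system was created with maximum replication (-M/-R) of at least 2:

  # See the failure group currently recorded for each disk
  mmlsdisk fs1 -L

  # Move an existing v7kdata NSD from -1 to failure group 10
  # (descriptor fields: DiskName:::DiskUsage:FailureGroup)
  mmchdisk fs1 change -d "v7k_nsd_001:::dataOnly:10"

  # New NSDs go into failure group 20 via their %nsd stanzas, then:
  mmchfs fs1 -m 2 -r 2       # default replication of 2 for metadata and data
  mmrestripefs fs1 -R        # re-replicate existing files to match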
Read the first paragraph in the failureGroup entry in: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_mmcrnsd.htm I do realize that the subsequent paragraphs do confuse the issue somewhat, but the first paragraph describes what?s happening. Liberty, -- Stephen > On Apr 16, 2019, at 8:38 AM, Prasad Surampudi > wrote: > > We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? > > > mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com > Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Tue Apr 16 14:48:47 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Tue, 16 Apr 2019 09:48:47 -0400 Subject: [gpfsug-discuss] Spectrum Scale Replication across failuregroups In-Reply-To: References: Message-ID: I think it would be wise to first set the failure group on the existing NSDs to a valid value and not use -1. I would also suggest you not use consecutive numbers like 1 and 2 but something with some distance between them, for example 10 and 20, or 100 and 200. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Stephen Ulmer To: gpfsug main discussion list Cc: "gpfsug-discuss-request at spectrumscale.org" Date: 04/16/2019 09:18 AM Subject: Re: [gpfsug-discuss] Spectrum Scale Replication across failure groups Sent by: gpfsug-discuss-bounces at spectrumscale.org I believe that -1 is "special", in that all -1?s are different form each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn?t specify -1, it was likely assigned automatically. Read the first paragraph in the failureGroup entry in: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_mmcrnsd.htm I do realize that the subsequent paragraphs do confuse the issue somewhat, but the first paragraph describes what?s happening. Liberty, -- Stephen On Apr 16, 2019, at 8:38 AM, Prasad Surampudi < prasad.surampudi at theatsgroup.com> wrote: We have a filesystem with 'system' and 'v7kdata' pools. 
All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=qj8cjidW9IKqym8U4WV2Buxy_hsl7bpmELnPNc8MYPg&s=hNTiNvPnIYhBCgPOm2NLtq9vP1MIVCipuIA8snw7Eg4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at markomanolis.com Thu Apr 18 16:16:52 2019 From: george at markomanolis.com (George Markomanolis) Date: Thu, 18 Apr 2019 11:16:52 -0400 Subject: [gpfsug-discuss] IO500 - Call for Submission for ISC-19 Message-ID: Dear all, Please consider the submission of results to the new list. *Deadline*: 10 June 2019 AoE The IO500 is now accepting and encouraging submissions for the upcoming 4th IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once again, we are also accepting submissions to the 10 node I/O challenge to encourage submission of small scale results. The new ranked lists will be announced at our ISC19 BoF [2]. We hope to see you, and your results, there. The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit and we look forward to seeing many of you at ISC 2019! Please note that submissions of all size are welcome; the site has customizable sorting so it is possible to submit on a small system and still get a very good per-client score for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below. Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017, published its first list at SC17, and has grown exponentially since then. The need for such an initiative has long been known within High-Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking. The multi-fold goals of the benchmark suite are as follows: 1. Maximizing simplicity in running the benchmark suite 2. Encouraging complexity in tuning for performance 3. Allowing submitters to highlight their ?hero run? performance numbers 4. Forcing submitters to simultaneously report performance for challenging IO patterns. 
Specifically, the benchmark suite includes a hero-run of both IOR and mdtest configured however possible to maximize performance and establish an upper-bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower-bound. Finally, it includes a namespace search as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well-measured. Submitters are encouraged to share their tuning insights for publication. The goals of the community are also multi-fold: 1. Gather historical data for the sake of analysis and to aid predictions of storage futures 2. Collect tuning information to share valuable performance optimizations across the community 3. Encourage vendors and designers to optimize for workloads beyond ?hero runs? 4. Establish bounded expectations for users, procurers, and administrators Edit 10 Node I/O Challenge At ISC, we will announce our second IO-500 award for the 10 Node Challenge. This challenge is conducted using the regular IO-500 benchmark, however, with the rule that exactly *10 computes nodes* must be used to run the benchmark (one exception is find, which may use 1 node). You may use any shared storage with, e.g., any number of servers. When submitting for the IO-500 list, you can opt-in for ?Participate in the 10 compute node challenge only?, then we won't include the results into the ranked list. Other 10 compute node submission will be included in the full list and in the ranked list. We will announce the result in a separate derived list and in the full list but not on the ranked IO-500 list at io500.org. Edit Birds-of-a-feather Once again, we encourage you to submit [1], to join our community, and to attend our BoF ?The IO-500 and the Virtual Institute of I/O? at ISC 2019 [2] where we will announce the fourth IO500 list and second 10 node challenge list. The current list includes results from BeeGPFS, DataWarp, IME, Lustre, Spectrum Scale, and WekaIO. We hope that the next list has even more. We look forward to answering any questions or concerns you might have. - [1] http://io500.org/submission - [2] The BoF schedule will be announced soon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marc.caubet at psi.ch Thu Apr 18 16:32:58 2019 From: marc.caubet at psi.ch (Caubet Serrabou Marc (PSI)) Date: Thu, 18 Apr 2019 15:32:58 +0000 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Message-ID: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 
nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Thu Apr 18 16:54:18 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Thu, 18 Apr 2019 11:54:18 -0400 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> Message-ID: We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. On which node did you collect the waiters and what command did you run to capture the data? What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. 
Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=dHk9lhiQqWEszuFxOcyajfLhFM0xLk7rMkdNNNQOuyQ&s=HTJYxe-mxXg7paKH_AWo3OU8-A_YHvpotkB9f0h2amg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.caubet at psi.ch Thu Apr 18 18:41:45 2019 From: marc.caubet at psi.ch (Caubet Serrabou Marc (PSI)) Date: Thu, 18 Apr 2019 17:41:45 +0000 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>, Message-ID: <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> Hi, thanks a lot. 
About the requested information: * Waiters were captured with the command 'mmdiag --waiters', and it was performed on one of the IO (NSD) nodes. * Connection between storage and client clusters is with Infiniband EDR. For the GPFS client cluster we have 3 chassis, each one has 24 blades with unmanaged EDR switch (24 for the blades, 12 external), and currently 10 EDR external ports are connected for external connectivity. On the other hand, the GPFS storage cluster has 2 IO nodes (as commented in the previous e-mail, DSS G240). Each IO node has connected 4 x EDR ports. Regarding the Infiniband connectivty, my network contains 2 top EDR managed switches configured with up/down routing, connecting the unmanaged switches from the chassis and the 2 managed Infiniband switches for the storage (for redundancy). Whenever needed I can go through PMR if this would easy the debug, no problem for me. I was wondering about the meaning "waiting for helper threads" and what could be the reason for that Thanks a lot for your help and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com] Sent: Thursday, April 18, 2019 5:54 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. * On which node did you collect the waiters and what command did you run to capture the data? * What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
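For reference, the kind of information being asked for above can be gathered on the NSD servers themselves while the benchmark is running, along these lines (a sketch; consistently long completion times in the iohist output usually point at the storage back end rather than at GPFS itself):

  # On each NSD/IO server (or from one node via mmdsh -N all):
  mmdiag --waiters     # long-running threads and what they are waiting on
  mmdiag --iohist      # per-I/O completion times for the most recent I/Os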
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 
192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Thu Apr 18 21:55:25 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Thu, 18 Apr 2019 16:55:25 -0400 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>, <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> Message-ID: Thanks for the information. Since the waiters information is from one of the IO servers then the threads waiting for IO should be waiting for actual IO requests to the storage. Seeing IO operations taking seconds long generally indicates your storage is not working optimally. We would expect IOs to complete in sub-second time, as in some number of milliseconds. You are using a record size of 16M yet you stated the file system block size is 1M. Is that really what you wanted to test? Also, you have included the -fsync option to gpfsperf which will impact the results. Have you considered using the nsdperf program instead of the gpfsperf program? You can find nsdperf in the samples/net directory. One last thing I noticed was in the configuration of your management node. It showed the following. [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k To my understanding the management node has no direct access to the storage, that is any IO requests to the file system from the management node go through the IO nodes. That being true GPFS will not make use of NSD worker threads on the management node. As you can see your configuration is creating 3K NSD worker threads and none will be used so you might want to consider changing that value to 1. It will not change your performance numbers but it should free up a bit of memory on the management node. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Cc: "gpfsug-discuss-bounces at spectrumscale.org" Date: 04/18/2019 01:45 PM Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, thanks a lot. About the requested information: * Waiters were captured with the command 'mmdiag --waiters', and it was performed on one of the IO (NSD) nodes. * Connection between storage and client clusters is with Infiniband EDR. For the GPFS client cluster we have 3 chassis, each one has 24 blades with unmanaged EDR switch (24 for the blades, 12 external), and currently 10 EDR external ports are connected for external connectivity. On the other hand, the GPFS storage cluster has 2 IO nodes (as commented in the previous e-mail, DSS G240). Each IO node has connected 4 x EDR ports. Regarding the Infiniband connectivty, my network contains 2 top EDR managed switches configured with up/down routing, connecting the unmanaged switches from the chassis and the 2 managed Infiniband switches for the storage (for redundancy). Whenever needed I can go through PMR if this would easy the debug, no problem for me. I was wondering about the meaning "waiting for helper threads" and what could be the reason for that Thanks a lot for your help and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com] Sent: Thursday, April 18, 2019 5:54 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. On which node did you collect the waiters and what command did you run to capture the data? What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch 
adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=YUp1yAfDFGnpxatHqsvM9LzHFt--RrMBCKoQF_Fa_zQ&s=4NBW1TmPGKAkvbymtK2QWCnLnBp-S0AVmEJxT2H1z0k&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkachwala at ddn.com Tue Apr 23 13:25:41 2019 From: tkachwala at ddn.com (Taizun Kachwala) Date: Tue, 23 Apr 2019 12:25:41 +0000 Subject: [gpfsug-discuss] Hi from Taizun (DDN Storage @Pune, India) Message-ID: Hi, My name is Taizun and I lead the effort of developing & supporting DDN Solution using IBM GPFS/Spectrum Scale as an Embedded application stack making it a converged infrastructure using DDN Storage Fusion Architecture (SFA) appliances (GS18K, GS14K, GS400NV/200NV and GS 7990) and also as an independent product solution that can be deployed on bare metal servers as NSD server or client role. Our solution is mainly targeted towards HPC customers in AI, Analytics, BigData, High-Performance File-Server, etc. We support 4.x as well as 5.x SS product-line on CentOS & RHEL respectively. Thanks & Regards, Taizun Kachwala Lead SDET, DDN India +91 98222 07304 +91 95118 89204 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prasad.surampudi at theatsgroup.com Tue Apr 23 17:14:24 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Tue, 23 Apr 2019 16:14:24 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 In-Reply-To: References: Message-ID: I am trying to analyze a filehist report of a Spectrum Scale filesystem I recently collected. Given below is the data and I have put my interpretation in parentheses. Could someone from Sale development review and let me know if my interpretation is correct? Filesystem block size is 16 MB and system pool block size is 256 KB. GPFS Filehist report for Test Filesystem All: Files = 38,808,641 (38 Million Total Files) All: Files in inodes = 8153748 Available space = 1550139596472320 1550140 GB 1550 TB Total Size of files = 1110707126790022 Total Size of files in inodes = 26008177568 Total Space = 1123175375306752 1123175 GB 1123 TB Largest File = 3070145200128 - ( 2.8 TB) Average Size = 28620098 ? ( 27 MB ) Non-zero: Files = 38642491 Average NZ size = 28743155 Directories = 687233 (Total Number of Directories) Directories in inode = 650552 Total Dir Space = 5988433920 Avg Entries per dir = 57.5 (Avg # files per Directory) Files with indirect blocks = 181003 File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. 
Histogram of files <= one 16M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 7,669,346 19.76% 0.00% 0.00% ( ~7 Million files <= 512 KB ) 1 25,548,588 85.59% 1.19% 0.86% - ( ~25 Million files > 512 KB <= 1 MB ) 2 1,270,115 88.87% 1.31% 0.95% - (~1 Million files > 1 MB <= 1.5 MB ) .... .... .... 32 10387 97.37% 2.43% 1.76% Histogram of files with N 16M blocks (plus end fragment) Blocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 1 177550 97.82% 2.70% 1.95% ( ~177 K files <= 16 MB) .... .... .... 100 640 99.77% 17.31% 12.54% Number of files with more than 100 16M blocks 101+ 88121 100.00% 100.00% 72.46% ( ~88 K files > 1600 MB) -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 25 16:55:24 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale UG Chair)) Date: Thu, 25 Apr 2019 16:55:24 +0100 Subject: [gpfsug-discuss] (no subject) Message-ID: An HTML attachment was scrubbed... URL: From luke.raimbach at googlemail.com Thu Apr 25 19:29:04 2019 From: luke.raimbach at googlemail.com (Luke Raimbach) Date: Thu, 25 Apr 2019 19:29:04 +0100 Subject: [gpfsug-discuss] (no subject) In-Reply-To: References: Message-ID: Pop me down for a spot old bean. Make sure IBM put on good sandwiches! On Thu, 25 Apr 2019, 16:55 Simon Thompson (Spectrum Scale UG Chair), < chair at spectrumscale.org> wrote: > It's just a few weeks until the UK/Worldwide Spectrum Scale user group in > London on 8th/9th May 2019. > > As we need to confirm numbers for catering, we'll be closing registration > on 1st May. > > If you plan to attend, please register via: > > https://www.spectrumscaleug.org/event/uk-user-group-meeting/ > > (I think we have about 10 places left) > > The full agenda is now posted and our evening event is confirmed, thanks > to the support of our sponsors IBM, OCF, e8 storage, Lenovo, DDN and NVIDA. > > Simon > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Fri Apr 26 07:44:58 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 26 Apr 2019 14:44:58 +0800 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 In-Reply-To: References: Message-ID: From my understanding, your interpretation is correct. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 04/24/2019 12:17 AM Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 Sent by: gpfsug-discuss-bounces at spectrumscale.org I am trying to analyze a filehist report of a Spectrum Scale filesystem I recently collected. Given below is the data and I have put my interpretation in parentheses. Could someone from Sale development review and let me know if my interpretation is correct? Filesystem block size is 16 MB and system pool block size is 256 KB. GPFS Filehist report for Test Filesystem All: Files = 38,808,641 (38 Million Total Files) All: Files in inodes = 8153748 Available space = 1550139596472320 1550140 GB 1550 TB Total Size of files = 1110707126790022 Total Size of files in inodes = 26008177568 Total Space = 1123175375306752 1123175 GB 1123 TB Largest File = 3070145200128 - ( 2.8 TB) Average Size = 28620098 ? ( 27 MB ) Non-zero: Files = 38642491 Average NZ size = 28743155 Directories = 687233 (Total Number of Directories) Directories in inode = 650552 Total Dir Space = 5988433920 Avg Entries per dir = 57.5 (Avg # files per Directory) Files with indirect blocks = 181003 File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 16M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 7,669,346 19.76% 0.00% 0.00% ( ~7 Million files <= 512 KB ) 1 25,548,588 85.59% 1.19% 0.86% - ( ~25 Million files > 512 KB <= 1 MB ) 2 1,270,115 88.87% 1.31% 0.95% - (~1 Million files > 1 MB <= 1.5 MB ) .... .... .... 32 10387 97.37% 2.43% 1.76% Histogram of files with N 16M blocks (plus end fragment) Blocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 1 177550 97.82% 2.70% 1.95% ( ~177 K files <= 16 MB) .... .... .... 100 640 99.77% 17.31% 12.54% Number of files with more than 100 16M blocks 101+ 88121 100.00% 100.00% 72.46% ( ~88 K files > 1600 MB) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=uBqBwHtxxGncMVk3Suv2icRbZNIqzOgMlfJ6LnIqNhc&s=WdJyzA9yDIx3Cyj6Kg-LvXKTj8ED4J7wm_5wJ6iyccg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From xhejtman at ics.muni.cz Fri Apr 26 13:17:33 2019 From: xhejtman at ics.muni.cz (Lukas Hejtmanek) Date: Fri, 26 Apr 2019 14:17:33 +0200 Subject: [gpfsug-discuss] gpfs and device number Message-ID: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Hello, I noticed that from time to time, device id of a gpfs volume is not same across whole gpfs cluster. [root at kat1 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root at kat2 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2bh/43d Inode: 3 [root at kat3 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? 
Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2ah/42d Inode: 3 this is really bad for kernel NFS as it uses device id for file handles thus NFS failover leads to nfs stale handle error. Is there a way to force a device number? -- Luk?? Hejtm?nek Linux Administrator only because Full Time Multitasking Ninja is not an official job title From TOMP at il.ibm.com Sat Apr 27 20:37:48 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Sat, 27 Apr 2019 22:37:48 +0300 Subject: [gpfsug-discuss] gpfs and device number In-Reply-To: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> References: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Message-ID: Hi, Please use the fsid option in /etc/exports ( man exports and: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adm_nfslin.htm ) Also check https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adv_cnfs.htm in case you want HA with kernel NFS. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: Lukas Hejtmanek To: gpfsug-discuss at spectrumscale.org Date: 26/04/2019 15:37 Subject: [gpfsug-discuss] gpfs and device number Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, I noticed that from time to time, device id of a gpfs volume is not same across whole gpfs cluster. [root at kat1 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root at kat2 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2bh/43d Inode: 3 [root at kat3 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2ah/42d Inode: 3 this is really bad for kernel NFS as it uses device id for file handles thus NFS failover leads to nfs stale handle error. Is there a way to force a device number? -- Luk?? Hejtm?nek Linux Administrator only because Full Time Multitasking Ninja is not an official job title _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=F4TfIKrFl9BVdEAYxZLWlFF-zF-irdwcP9LnGpgiZrs&s=Ice-yo0p955RcTDGPEGwJ-wIwN9F6PvWOpUvR6RMd4M&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeep.patil at in.ibm.com Mon Apr 29 07:42:18 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Mon, 29 Apr 2019 06:42:18 +0000 Subject: [gpfsug-discuss] Latest Technical Blogs on IBM Spectrum Scale (Q1 2019) In-Reply-To: References: Message-ID: Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
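Returning briefly to the fsid question above, a minimal kernel-NFS export of a GPFS path with a fixed file system identifier might look like the following. The export path is the one from that thread, while the client range and fsid value are illustrative; the same fsid must be configured on every server that can export the path, which is what the CNFS documentation linked above takes care of:

  # /etc/exports on each NFS server exporting the GPFS directory
  /gpfs/vol1  192.168.0.0/24(rw,no_root_squash,fsid=745)

  # reload the export table
  exportfs -ra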
Spectrum Scale 5.0.3 https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/ IBM Spectrum Scale HDFS Transparency Ranger Support https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/ Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally, http://www.redbooks.ibm.com/abstracts/redp5527.html?Open Spectrum Scale user group in Singapore, 2019 https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/ 7 traits to use Spectrum Scale to run container workload https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/ Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring Framework https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/ Migrating data from native HDFS to IBM Spectrum Scale based shared storage https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/ Bulk File Creation useful for Test on Filesystems https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 01/14/2019 06:24 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing your business data to support regulatory requirements http://www.redbooks.ibm.com/abstracts/redp5525.html?Open IBM Spectrum Scale Memory Usage https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2 Spectrum Scale and Containers https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/ IBM Elastic Storage Server Performance Graphical Visualization with Grafana https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/ Hadoop Performance for disaggregated compute and storage configurations based on IBM Spectrum Scale Storage https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/ EMS HA in ESS LE (Little Endian) environment https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/ What?s new in ESS 5.3.2 https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/ Administer your Spectrum Scale cluster easily https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/ Disaster Recovery using Spectrum Scale?s Active File Management https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/ Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS) https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/ Whats new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1 https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 10/03/2018 08:48 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. How NFS exports became more dynamic with Spectrum Scale 5.0.2 https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/ HPC storage on AWS (IBM Spectrum Scale) https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/ Upgrade with Excluding the node(s) using Install-toolkit https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/ Offline upgrade using Install-toolkit https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/ What?s New in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/ Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails. 
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.2.0 https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/ Announcing HDP 3.0 support with IBM Spectrum Scale https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/ IBM Spectrum Scale Tuning Overview for Hadoop Workload https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/ Making the Most of Multicloud Storage https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/ Disaster Recovery for Transparent Cloud Tiering using SOBAR https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/ Your Optimal Choice of AI Storage for Today and Tomorrow https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/ Analyze IBM Spectrum Scale File Access Audit with ELK Stack https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/ Mellanox SX1710 40G switch MLAG configuration for IBM ESS https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? SMB and NFS Access issues https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/ Access Control in IBM Spectrum Scale Object https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/ IBM Spectrum Scale HDFS Transparency Docker support https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? Log Collection https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/ Redpapers IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases http://www.redbooks.ibm.com/abstracts/redp5507.html?Open Certifications Assessment of the immutability function of IBM Spectrum Scale Version 5.0 in accordance to US SEC17a-4f, EU GDPR Article 21 Section 1, German and Swiss laws and regulations in collaboration with KPMG. Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5 Full assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 07/03/2018 12:13 AM Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018) Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q2 2018). We now have over 100+ developer blogs. As discussed in User Groups, passing it along: IBM Spectrum Scale 5.0.1 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ IBM Spectrum Scale ILM Policies https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/ IBM Spectrum Scale 5.0.1 ? 
Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ Management GUI enhancements in IBM Spectrum Scale release 5.0.1 https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/ Managing IBM Spectrum Scale services through GUI https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/ Use AWS CLI with IBM Spectrum Scale? object storage https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/ Hadoop Storage Tiering with IBM Spectrum Scale https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/ How many Files on my Filesystem? https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/ Recording Spectrum Scale Object Stats for Potential Billing like Purpose using Elasticsearch https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/ New features in IBM Elastic Storage Server (ESS) Version 5.3 https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/ Using IBM Spectrum Scale for storage in IBM Cloud Private (Missed to send earlier) https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19 Redpapers Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html, Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent Cloud Tiering http://www.redbooks.ibm.com/abstracts/redp5411.html?Open SAP HANA and ESS: A Winning Combination (Update) http://www.redbooks.ibm.com/abstracts/redp5436.html?Open Others IBM Spectrum Scale Software Version Recommendation Preventive Service Planning (Updated) http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703, IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in HCLS https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN& For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/27/2018 05:23 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q1 2018). As discussed in User Groups, passing it along: GDPR Compliance and Unstructured Data Storage https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/ IBM Spectrum Scale for Linux on IBM Z ? Release 5.0 features and highlights https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/ Management GUI enhancements in IBM Spectrum Scale release 5.0.0 https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/ IBM Spectrum Scale 5.0.0 ? What?s new in NFS? 
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/ Benefits and implementation of Spectrum Scale sudo wrappers https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/ IBM Spectrum Scale: Big Data and Analytics Solution Brief https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/ Variant Sub-blocks in Spectrum Scale 5.0 https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/ Compression support in Spectrum Scale 5.0.0 https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale On AWS Cloud : This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps the users who require highly available access to a shared name space across multiple instances with good performance, without requiring an in-depth knowledge of IBM Spectrum Scale. Detailed Demo : https://www.youtube.com/watch?v=6j5Xj_d0bh4 Brief Demo : https://www.youtube.com/watch?v=-aMQKPW_RfY. For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Cc: Doris Conti/Poughkeepsie/IBM at IBMUS Date: 01/10/2018 12:13 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, Here are list of development blogs in the last quarter. Passing it to this email group as Doris had got a feedback in the UG meetings to notify the members with the latest updates periodically. Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ IBM Spectrum Scale MMFSCK ? Savvy Enhancements https://developer.ibm.com/storage/2018/01/05/ibm-spectrum-scale-mmfsck-savvy-enhancements/ ESS Disk Management https://developer.ibm.com/storage/2018/01/02/ess-disk-management/ IBM Spectrum Scale Object Protocol On Ubuntu https://developer.ibm.com/storage/2018/01/01/ibm-spectrum-scale-object-protocol-ubuntu/ IBM Spectrum Scale 5.0 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2017/12/20/ibm-spectrum-scale-5-0-whats-new-object/ A Complete Guide to ? Protocol Problem Determination Guide for IBM Spectrum Scale? ? Part 1 https://developer.ibm.com/storage/2017/12/19/complete-guide-protocol-problem-determination-guide-ibm-spectrum-scale-1/ IBM Spectrum Scale installation toolkit ? 
enhancements over releases https://developer.ibm.com/storage/2017/12/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases/ Network requirements in an Elastic Storage Server Setup https://developer.ibm.com/storage/2017/12/13/network-requirements-in-an-elastic-storage-server-setup/ Co-resident migration with Transparent cloud tierin https://developer.ibm.com/storage/2017/12/05/co-resident-migration-transparent-cloud-tierin/ IBM Spectrum Scale on Hortonworks HDP Hadoop clusters : A Complete Big Data Solution https://developer.ibm.com/storage/2017/12/05/ibm-spectrum-scale-hortonworks-hdp-hadoop-clusters-complete-big-data-solution/ Big data analytics with Spectrum Scale using remote cluster mount & multi-filesystem support https://developer.ibm.com/storage/2017/11/28/big-data-analytics-spectrum-scale-using-remote-cluster-mount-multi-filesystem-support/ IBM Spectrum Scale HDFS Transparency Short Circuit Write Support https://developer.ibm.com/storage/2017/11/28/ibm-spectrum-scale-hdfs-transparency-short-circuit-write-support/ IBM Spectrum Scale HDFS Transparency Federation Support https://developer.ibm.com/storage/2017/11/27/ibm-spectrum-scale-hdfs-transparency-federation-support/ How to configure and performance tuning different system workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-different-system-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning Spark workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-spark-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning database workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-database-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning Hadoop workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/24/configure-performance-tuning-hadoop-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ IBM Spectrum Scale Sharing Nothing Cluster Performance Tuning https://developer.ibm.com/storage/2017/11/24/ibm-spectrum-scale-sharing-nothing-cluster-performance-tuning/ How to Configure IBM Spectrum Scale? with NIS based Authentication. https://developer.ibm.com/storage/2017/11/21/configure-ibm-spectrum-scale-nis-based-authentication/ For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Cc: Doris Conti/Poughkeepsie/IBM at IBMUS Date: 11/16/2017 08:15 PM Subject: Latest Technical Blogs on Spectrum Scale Dear User Group members, Here are the Development Blogs in last 3 months on Spectrum Scale Technical Topics. Spectrum Scale Monitoring ? Know More ? https://developer.ibm.com/storage/2017/11/16/spectrum-scale-monitoring-know/ IBM Spectrum Scale 5.0 Release ? What?s coming ! https://developer.ibm.com/storage/2017/11/14/ibm-spectrum-scale-5-0-release-whats-coming/ Four Essentials things to know for managing data ACLs on IBM Spectrum Scale? 
from Windows https://developer.ibm.com/storage/2017/11/13/four-essentials-things-know-managing-data-acls-ibm-spectrum-scale-windows/ GSSUTILS: A new way of running SSR, Deploying or Upgrading ESS Server https://developer.ibm.com/storage/2017/11/13/gssutils/ IBM Spectrum Scale Object Authentication https://developer.ibm.com/storage/2017/11/02/spectrum-scale-object-authentication/ Video Surveillance ? Choosing the right storage https://developer.ibm.com/storage/2017/11/02/video-surveillance-choosing-right-storage/ IBM Spectrum scale object deep dive training with problem determination https://www.slideshare.net/SmitaRaut/ibm-spectrum-scale-object-deep-dive-training Spectrum Scale as preferred software defined storage for Ubuntu OpenStack https://developer.ibm.com/storage/2017/09/29/spectrum-scale-preferred-software-defined-storage-ubuntu-openstack/ IBM Elastic Storage Server 2U24 Storage ? an All-Flash offering, a performance workhorse https://developer.ibm.com/storage/2017/10/06/ess-5-2-flash-storage/ A Complete Guide to Configure LDAP-based authentication with IBM Spectrum Scale? for File Access https://developer.ibm.com/storage/2017/09/21/complete-guide-configure-ldap-based-authentication-ibm-spectrum-scale-file-access/ Deploying IBM Spectrum Scale on AWS Quick Start https://developer.ibm.com/storage/2017/09/18/deploy-ibm-spectrum-scale-on-aws-quick-start/ Monitoring Spectrum Scale Object metrics https://developer.ibm.com/storage/2017/09/14/monitoring-spectrum-scale-object-metrics/ Tier your data with ease to Spectrum Scale Private Cloud(s) using Moonwalk Universal https://developer.ibm.com/storage/2017/09/14/tier-data-ease-spectrum-scale-private-clouds-using-moonwalk-universal/ Why do I see owner as ?Nobody? for my export mounted using NFSV4 Protocol on IBM Spectrum Scale?? https://developer.ibm.com/storage/2017/09/08/see-owner-nobody-export-mounted-using-nfsv4-protocol-ibm-spectrum-scale/ IBM Spectrum Scale? Authentication using Active Directory and LDAP https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-ldap/ IBM Spectrum Scale? Authentication using Active Directory and RFC2307 https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-rfc2307/ High Availability Implementation with IBM Spectrum Virtualize and IBM Spectrum Scale https://developer.ibm.com/storage/2017/08/30/high-availability-implementation-ibm-spectrum-virtualize-ibm-spectrum-scale/ 10 Frequently asked Questions on configuring Authentication using AD + AUTO ID mapping on IBM Spectrum Scale?. https://developer.ibm.com/storage/2017/08/04/10-frequently-asked-questions-configuring-authentication-using-ad-auto-id-mapping-ibm-spectrum-scale/ IBM Spectrum Scale? Authentication using Active Directory https://developer.ibm.com/storage/2017/07/30/ibm-spectrum-scale-auth-using-active-directory/ Five cool things that you didn?t know Transparent Cloud Tiering on Spectrum Scale can do https://developer.ibm.com/storage/2017/07/29/five-cool-things-didnt-know-transparent-cloud-tiering-spectrum-scale-can/ IBM Spectrum Scale GUI videos https://developer.ibm.com/storage/2017/07/25/ibm-spectrum-scale-gui-videos/ IBM Spectrum Scale? Authentication ? 
Planning for NFS Access https://developer.ibm.com/storage/2017/07/24/ibm-spectrum-scale-planning-nfs-access/ For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Tue Apr 30 10:24:45 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 30 Apr 2019 10:24:45 +0100 Subject: [gpfsug-discuss] Break-out session for new user and prospects [London Usergroup] Message-ID: <776770B2-5F84-4462-B900-58EBB982DC1C@spectrumscale.org> Hi all, We know that a lot of the talks at the user groups are for experienced users, following feedback from the USA user group, we thought we?d advertise that this year we?re planning to run a break-out for new users on day 1. Break-out session for new user and prospects (Wed May 8th, 13:00 - 16:45) This year we will offer a break-out session for new Spectrum Scale user and prospects to get started with Spectrum Scale. In this session we will cover Spectrum Scale Use Cases, the architecture of a Spectrum Scale environment, and discuss how the manifold Spectrum Scale features support the different use case. Please inform customers and colleagues who are interested to learn about Spectrum Scale to grab one of the last seats. Registration link: https://www.spectrumscaleug.org/event/uk-user-group-meeting/ There?s just a couple of places left for the usergroup, so please do share and register if you plan to attend. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From lgayne at us.ibm.com Mon Apr 1 15:04:49 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Mon, 1 Apr 2019 09:04:49 -0500 Subject: [gpfsug-discuss] A net new cluster In-Reply-To: References: <7F92D137-07D4-4136-9182-9C5E165704FE@nygenome.org> Message-ID: Yes, native GPFS access can be used by AFM, but only for shorter distances (10s of miles, e.g.). For intercontinental or cross-US distances, the latency would be too high for that protocol so NFS would be recommended. Lyle From: "Marc A Kaplan" To: gpfsug main discussion list Date: 03/29/2019 03:05 PM Subject: Re: [gpfsug-discuss] A net new cluster Sent by: gpfsug-discuss-bounces at spectrumscale.org I don't know the particulars of the case in question, nor much about ESS rules... But for a vanilla Spectrum Scale cluster -. 1) There is nothing wrong or ill-advised about upgrading software and then creating a new version 5.x file system... keeping any older file systems in place. 2) I thought AFM was improved years ago to support GPFS native access -- need not go through NFS stack...? Whereas your wrote: ... nor is it advisable to try to create a new pool or filesystem in same cluster and then migrate (partially because migrating between filesystems within a cluster with afm would require going through nfs stack afaik) ... _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=vngQUjSBYhOMpp8HMi2XWB2feIO7aKGG6UivD0ADm6s&s=PjdyuwVaVKavcSGf9ltOn_k6wRMlka7CYhHzUdSKo5M&e= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From prasad.surampudi at theatsgroup.com Wed Apr 3 17:12:33 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 3 Apr 2019 16:12:33 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed Apr 3 17:17:44 2019 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Wed, 3 Apr 2019 16:17:44 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group From jfosburg at mdanderson.org Wed Apr 3 17:20:48 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 16:20:48 +0000 Subject: [gpfsug-discuss] [EXT] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: <88ad5b6a15c4444596d69503c695a0d1@mdanderson.org> We've added ESSes to existing non-ESS clusters a couple of times. In this case, we had to create a pool for the ESSes so we could send new writes to them and allow us to drain the old non-ESS blocks so we could remove them. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Prasad Surampudi Sent: Wednesday, April 3, 2019 11:12:33 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Adding ESS to existing Scale Cluster WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? 
Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Paul.Sanchez at deshaw.com Wed Apr 3 17:41:32 2019 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Wed, 3 Apr 2019 16:41:32 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Robert.Oesterlin at nuance.com Wed Apr 3 18:25:54 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 3 Apr 2019 17:25:54 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Wed Apr 3 19:11:45 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Wed, 3 Apr 2019 20:11:45 +0200 Subject: [gpfsug-discuss] New ESS install - Network adapter down level In-Reply-To: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> References: <66070BCC-1D30-48E5-B0E7-0680865F0E4D@nuance.com> Message-ID: Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephen.buchanan at us.ibm.com Wed Apr 3 19:54:00 2019 From: stephen.buchanan at us.ibm.com (Stephen R Buchanan) Date: Wed, 3 Apr 2019 18:54:00 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level In-Reply-To: Message-ID: An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Apr 3 20:01:11 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 3 Apr 2019 19:01:11 +0000 Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Thanks all. I just missed this. Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... 
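
Following up the "New ESS install - Network adapter down level" exchange above, a rough sketch of the remediation loop it implies, run from the EMS node. The node name essio1 is an illustrative assumption; updatenode with the gss_ofed postscript comes from Jan-Frode's reply and gssinstallcheck from Bob's original post. Whether gss_ofed also reflashes the adapter firmware depends on the ESS level, so treat this as a starting point and check it against the quick deployment guide linked in the thread:

    updatenode essio1 -P gss_ofed     # rerun the OFED/firmware postscript on the I/O node
    gssinstallcheck -N essio1         # re-verify; the MT4115 firmware error should clear

If the error persists after the postscript runs, it may be the known issue the quick deployment guide describes.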
URL: From prasad.surampudi at theatsgroup.com Wed Apr 3 20:34:59 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Wed, 3 Apr 2019 19:34:59 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: Message-ID: Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. 
We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. 
> > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed Apr 3 21:22:33 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 3 Apr 2019 20:22:33 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 In-Reply-To: References: , Message-ID: <5fd0776a85e94948b71770f8574e54ae@mdanderson.org> We had Lab Services do our installs and integrations. Learning curve for them, and we uncovered some deficiencies in the TDA, but it did work. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 [1553012336789_download] ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Prasad Surampudi Sent: Wednesday, April 3, 2019 2:34:59 PM To: gpfsug-discuss-request at spectrumscale.org; gpfsug-discuss at spectrumscale.org Subject: [EXT] Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4 WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire V7Ks. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS Disk Enclosures in a separate GPFS pool in the same filesystem, but can't create a new filesystem as we want to have single name space for our SMB Shares. Also, we'd like keep all our existing compute, protocol, and NSD servers all in the same scale cluster along with ESS IO nodes and EMS. When I looked at ESS commands, I dont see an option of adding ESS nodes to existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS IO nodes to existing cluster like any other node..is running mmaddnode command on ESS possible? Also, looks like it's against the IBMs recommendation of separating the Storage, Compute and Protocol nodes into their own scale clusters and use cross-cluster filesystem mounts..any comments/suggestions? 
Prasad Surampudi The ATS Group ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: Wednesday, April 3, 2019 2:54 PM To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 87, Issue 4 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Adding ESS to existing Scale Cluster (Sanchez, Paul) 2. New ESS install - Network adapter down level (Oesterlin, Robert) 3. Re: New ESS install - Network adapter down level (Jan-Frode Myklebust) 4. Re: New ESS install - Network adapter down level (Stephen R Buchanan) ---------------------------------------------------------------------- Message: 1 Date: Wed, 3 Apr 2019 16:41:32 +0000 From: "Sanchez, Paul" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster Message-ID: Content-Type: text/plain; charset="us-ascii" > note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) -Paul -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson Sent: Wednesday, April 3, 2019 12:18 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] Sent: 03 April 2019 17:12 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? Prasad Surampudi Sr. Systems Engineer The ATS Group _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ------------------------------ Message: 2 Date: Wed, 3 Apr 2019 17:25:54 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: <66070BCC-1D30-48E5-B0E7-0680865F0E4D at nuance.com> Content-Type: text/plain; charset="utf-8" Any insight on what command I need to fix this? It?s the only error I have when running gssinstallcheck. [ERROR] Network adapter MT4115 firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Wed, 3 Apr 2019 20:11:45 +0200 From: Jan-Frode Myklebust To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="utf-8" Have you tried: updatenode nodename -P gss_ofed But, is this the known issue listed in the qdg? https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf -jf ons. 3. apr. 2019 kl. 19:26 skrev Oesterlin, Robert < Robert.Oesterlin at nuance.com>: > Any insight on what command I need to fix this? It?s the only error I have > when running gssinstallcheck. > > > > [ERROR] Network adapter > https://www.ibm.com/support/knowledgecenter/SSYSP8_5.3.2/ess_qdg.pdf > firmware: found 12.23.1020 expected 12.23.8010, net adapter count: 4 > > > > > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Wed, 3 Apr 2019 18:54:00 +0000 From: "Stephen R Buchanan" To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] New ESS install - Network adapter down level Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 87, Issue 4 ********************************************* The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. 
From janfrode at tanso.net Wed Apr 3 21:34:37 2019
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Wed, 3 Apr 2019 22:34:37 +0200
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 4
In-Reply-To: References: Message-ID:

It doesn't seem to be documented anywhere, but we can add ESS to non-ESS clusters. It's mostly just following the QDG, skipping the gssgencluster step. Just beware that it will take down your current cluster when doing the first "gssgenclusterrgs". This is to change quite a few config settings - it recently caught me by surprise :-/

Involve IBM lab services, and we should be able to help :-)

-jf

ons. 3. apr. 2019 kl. 21:35 skrev Prasad Surampudi <prasad.surampudi at theatsgroup.com>:

> Actually, we have a SAS Grid - Scale cluster with V7000 and Flash storage. We also have protocol nodes for SMB access to SAS applications/users. Now, we are planning to gradually move our cluster from V7000/Flash to ESS and retire the V7000s. So, when we grow our filesystem, we are thinking of adding an ESS as an additional block of storage instead of adding another V7000. Definitely we'll keep the ESS disk enclosures in a separate GPFS pool in the same filesystem, but we can't create a new filesystem as we want to have a single namespace for our SMB shares. Also, we'd like to keep all our existing compute, protocol, and NSD servers in the same Scale cluster along with the ESS I/O nodes and EMS. When I looked at the ESS commands, I don't see an option for adding ESS nodes to an existing cluster like mmaddnode or similar commands. So, just wondering how we could add ESS I/O nodes to an existing cluster like any other node - is running the mmaddnode command on ESS possible? Also, it looks like it's against IBM's recommendation of separating the storage, compute, and protocol nodes into their own Scale clusters and using cross-cluster filesystem mounts - any comments/suggestions?
>
> Prasad Surampudi
> The ATS Group
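For anyone wanting to sanity-check what the deployment scripts change, here is a minimal sketch, assuming bash on a cluster admin node with the Scale commands in the PATH. The node names are made up, and the authoritative procedure for joining ESS I/O nodes is the Quick Deployment Guide, not this sketch.

# Sketch only: snapshot the cluster configuration before and after joining
# ESS I/O nodes, so that any settings changed by the deployment scripts
# (e.g. gssgenclusterrgs) are easy to spot afterwards.
# Node names below are hypothetical.

mmlsconfig  > /root/scale-config.before
mmlscluster > /root/scale-cluster.before

# Join the ESS I/O nodes to the existing cluster from an admin node.
mmaddnode -N essio1-hs,essio2-hs

# ...run the ESS-side deployment steps per the QDG, then compare:
mmlsconfig  > /root/scale-config.after
diff -u /root/scale-config.before /root/scale-config.after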
From jfosburg at mdanderson.org Wed Apr 3 21:38:45 2019
From: jfosburg at mdanderson.org (Fosburgh,Jonathan)
Date: Wed, 3 Apr 2019 20:38:45 +0000
Subject: [gpfsug-discuss] [EXT] Re: gpfsug-discuss Digest, Vol 87, Issue 4
In-Reply-To: References: Message-ID: <416a73c67b594e89b734e1f2229c159c at mdanderson.org>

Adding ESSes did not bring our clusters down.

--
Jonathan Fosburgh
Principal Application Systems Analyst
IT Operations Storage Team
The University of Texas MD Anderson Cancer Center
(713) 745-9346
-------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 4 08:48:18 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Thu, 04 Apr 2019 08:48:18 +0100 Subject: [gpfsug-discuss] Slack workspace Message-ID: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> We?ve been pondering for a while (quite a long while actually!) adding a slack workspace for the user group. That?s not to say I want to divert traffic from the mailing list, but maybe it will be useful for some people. Please don?t feel compelled to join the slack workspace, but if you want to join, then there?s a link on: https://www.spectrumscaleug.org/join/ to get an invite. I know there are a lot of IBM people on the mailing list, and they often reply off-list to member posts (which I appreciate!), so please still use the mailing list for questions, but maybe there are some discussions that will work better on slack ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Lehmann at csiro.au Thu Apr 4 08:56:07 2019 From: Greg.Lehmann at csiro.au (Lehmann, Greg (IM&T, Pullenvale)) Date: Thu, 4 Apr 2019 07:56:07 +0000 Subject: [gpfsug-discuss] Slack workspace In-Reply-To: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> References: <391FC115-8A42-4122-A976-13939B24A78A@spectrumscale.org> Message-ID: It?s worth a shot. We have one for Australian HPC sysadmins that seems quite popular (with its own GPFS channel.) There is also a SigHPC slack for a more international flavour that came a bit later. People tend to use it for p2p comms when at conferences as well. From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (Spectrum Scale User Group Chair) Sent: Thursday, April 4, 2019 5:48 PM To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Slack workspace We?ve been pondering for a while (quite a long while actually!) adding a slack workspace for the user group. That?s not to say I want to divert traffic from the mailing list, but maybe it will be useful for some people. Please don?t feel compelled to join the slack workspace, but if you want to join, then there?s a link on: https://www.spectrumscaleug.org/join/ to get an invite. I know there are a lot of IBM people on the mailing list, and they often reply off-list to member posts (which I appreciate!), so please still use the mailing list for questions, but maybe there are some discussions that will work better on slack ? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Apr 4 14:48:35 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 4 Apr 2019 13:48:35 +0000 Subject: [gpfsug-discuss] Agenda - Spectrum Scale UG meeting, April 16-17th, NCAR, Boulder Message-ID: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0@nuance.com> Registration is only open for a few more days! Register here: https://www.eventbrite.com/e/spectrum-scale-gpfs-user-group-us-spring-2019-meeting-tickets-57035376346 (directions, locations, and suggested hotels) Breakfast, Lunch included (free of charge) and Evening social event at NCAR! Here is the ?final? 
agenda: Tuesday, April 16th 8:30 9:00 Registration and Networking 9:00 9:20 Welcome Kristy Kallback-Rose / Bob Oesterlin (Chair) / Ted Hoover (IBM) 9:20 9:45 Spectrum Scale: The past, the present, the future Wayne Sawdon (IBM) 9:45 10:10 Accelerating AI workloads with IBM Spectrum Scale Ted Hoover (IBM) 10:10 10:30 Nvidia: Doing large scale AI faster ? scale innovation with multi-node-multi-GPU computing and a scaling data pipeline Jacci Cenci (Nvidia) 10:30 11:00 Coffee and Networking n/a 11:00 11:25 AI ecosystem and solutions with IBM Spectrum Scale Piyush Chaudhary (IBM) 11:25 11:45 Customer Talk / Partner Talk TBD 11:45 12:00 Meet the devs Ulf Troppens (IBM) 12:00 13:00 Lunch and Networking 13:00 13:30 Spectrum Scale Update Puneet Chauhdary (IBM) 13:30 13:45 ESS Update Puneet Chauhdary (IBM) 13:45 14:00 Support Update Bob Simon (IBM) 14:00 14:30 Memory Consumption in Spectrum Scale Tomer Perry (IBM) 14:30 15:00 Coffee and Networking n/a 15:00 15:20 New HPC Usage Model @ J?lich: Multi PB User Data Migration Martin Lischewski (Forschungszentrum J?lich) 15:20 15:40 Open discussion: large scale data migration All 15:40 16:00 Container & Cloud Update Ted Hoover (IBM) 16:00 16:20 Towards Proactive Service with Call Home Ulf Troppens (IBM) 16:20 16:30 Break 16:30 17:00 Advanced metadata management with Spectrum Discover Deepavali Bhagwat (IBM) 17:00 17:20 High Performance Tier Tomer Perry (IBM) 17:20 18:00 Meet the Devs - Ask us Anything All 18:00 20:00 Get Together n/a 13:00 - 17:15 Breakout Session: Getting Started with Spectrum Scale Wednesday, April 17th 8:30 9:00 Coffee und Networking n/a 8:30 9:00 Spectrum Scale Licensing Carl Zetie (IBM) 9:00 10:00 "Spectrum Scale Use Cases (Beginner) Spectrum Scale Protocols (Overview) (Beginner)" Spectrum Scale backup and SOBAR Chris Maestas (IBM) Getting started with AFM (Advanced) Venkat Puvva (IBM) 10:00 11:00 How to design a Spectrum Scale environment? (Beginner) Tomer Perry (IBM) Spectrum Scale on Google Cloud Jeff Ceason (IBM) Spectrum Scale Trial VM Spectrum Scale Vagrant" "Chris Maestas (IBM Ulf Troppens (IBM)" 11:00 12:00 "Spectrum Scale GUI (Beginner) Spectrum Scale REST API (Beginner)" "Chris Maestas (IBM) Spectrum Scale Network flow Tomer Perry (IBM) Spectrum Scale Watch Folder (Advanced) Spectrum Scale File System Audit Logging "Deepavali Bhagwat (IBM) 12:00 13:00 Lunch and Networking n/a 13:00 13:20 Sponsor Talk: Excelero TBD 13:20 13:40 AWE site update Paul Tomlinson (AWE) 13:40 14:00 Sponsor Talk: Lenovo Ray Padden (Lenovo) 14:00 14:30 Coffee and Networking n/a 14:30 15:00 TCT Update Rob Basham 15:00 15:30 AFM Update Venkat Puvva (IBM) 15:30 15:50 New Storage Options for Spectrum Scale Carl Zetie (IBM) 15:50 16:00 Wrap-up Kristy Kallback-Rose / Bob Oesterlin Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Sat Apr 6 15:11:53 2019 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sat, 6 Apr 2019 14:11:53 +0000 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: Message-ID: There is a non-technical issue you may need to consider. IBM has set licensing rules about mixing in the same Spectrum Scale cluster both ESS from IBM and 3rd party storage that is licensed under ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). I am sure Carl Zetie or other IBMers who watch this list can explain the exact restrictions. 
Daniel _________________________________________________________ Daniel Kidger IBM Technical Sales Specialist Spectrum Scale, Spectrum NAS and IBM Cloud Object Store +44-(0)7818 522 266 daniel.kidger at uk.ibm.com On 3 Apr 2019, at 19:47, Sanchez, Paul wrote: >> note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > At one time there was definitely a warning from IBM in the docs about not mixing big-endian and little-endian GNR in the same cluster/filesystem. But at least since Nov 2017, IBM has published videos showing clusters containing both. (In my opinion, they had to support this because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for Scale itself, I can confirm that filesystems can contain NSDs which are provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with SAN storage based NSD servers. We typically do rolling upgrades of GNR building blocks by adding blocks to an existing cluster, emptying and removing the existing blocks, upgrading those in isolation, then repeating with the next cluster. As a result, we have had every combination in play at some point in time. Care just needs to be taken with nodeclass naming and mmchconfig parameters. (We derive the correct params for each new building block from its final config after upgrading/testing it in isolation.) > > -Paul > > -----Original Message----- > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks then you are going to have to have a new filesystem and copy data. So it depends what your endgame is really. We just did such a process and one of my colleagues is going to talk about it at the London user group in May. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of prasad.surampudi at theatsgroup.com [prasad.surampudi at theatsgroup.com] > Sent: 03 April 2019 17:12 > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum Scale cluster. Can the ESS nodes be added to existing scale cluster without changing existing cluster name? Or do we need to create a new scale cluster with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. 
Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=qihHkHSqt2rVgrBVDaeGaUrYw-BMlNQ6AQ1EU7EtYr0&s=EANfMzGKOlziRRZj0X9jkK-7HsqY_MkWwZgA5OXOiCo&e= > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=HlQDuUjgJx4p54QzcXd0_zTwf4Cr2t3NINalNhLTA2E&m=qihHkHSqt2rVgrBVDaeGaUrYw-BMlNQ6AQ1EU7EtYr0&s=EANfMzGKOlziRRZj0X9jkK-7HsqY_MkWwZgA5OXOiCo&e= > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.zacek77 at gmail.com Sat Apr 6 22:50:53 2019 From: m.zacek77 at gmail.com (Michal Zacek) Date: Sat, 6 Apr 2019 23:50:53 +0200 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL Message-ID: Hello, we decided to convert NFS4 acl to POSIX (we need share same data between SMB, NFS and GPFS clients), so I created script to convert NFS4 to posix ACL. It is very simple, first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that conversion to posix acl has taken more then 2TB of metadata space.There is about one hundred million files at GPFS filesystem. Is this expected behavior? Thanks, Michal Example of NFS4 acl: #NFSv4 ACL #owner:root #group:root special:owner@:rwx-:allow (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED group:ag_cud_96_lab:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED group:ag_cud_96_lab_ro:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED converted to posix acl: # owner: root # group: root user::rwx group::rwx mask::rwx other::--- default:user::rwx default:group::rwx default:mask::rwx default:other::--- group:ag_cud_96_lab:rwx default:group:ag_cud_96_lab:rwx group:ag_cud_96_lab_ro:r-x default:group:ag_cud_96_lab_ro:r-x -------------- next part -------------- An HTML attachment was scrubbed... 
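A minimal sketch of the conversion Michal describes, assuming bash and the standard Linux acl tools; the directory path is made up and the group names are taken from his example. Like his chmod plus setfacl approach, it applies one uniform ACL to the whole tree rather than translating each file's NFSv4 ACEs individually, and it assumes the file system's ACL semantics (the -k setting reported by mmlsfs) permit POSIX ACLs.

# Sketch of the NFSv4 -> POSIX ACL conversion described above.
# DIR is a hypothetical path; group names come from Michal's example.
DIR=/gpfs/fs1/lab96

# 1. Reset the mode bits first (as in the original script)
chmod -R 770 "$DIR"

# 2. Access ACLs on files and directories
setfacl -R -m group:ag_cud_96_lab:rwx,group:ag_cud_96_lab_ro:r-x "$DIR"

# 3. Default (inherited) ACLs on directories only -- default ACLs are not
#    valid on regular files, hence the find -type d
find "$DIR" -type d -exec setfacl \
    -m default:group:ag_cud_96_lab:rwx,default:group:ag_cud_96_lab_ro:r-x {} +

# 4. Spot-check the result
getfacl "$DIR" | head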
From richard.rupp at us.ibm.com Sun Apr 7 16:26:14 2019
From: richard.rupp at us.ibm.com (RICHARD RUPP)
Date: Sun, 7 Apr 2019 11:26:14 -0400
Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster
In-Reply-To: References: Message-ID:

This has been publicly documented in the Spectrum Scale FAQ Q13.17, Q13.18 and Q13.19.

Regards,
Richard Rupp, Sales Specialist, Phone: 1-347-510-6746

From kkr at lbl.gov Mon Apr 8 19:05:22 2019
From: kkr at lbl.gov (Kristy Kallback-Rose)
Date: Mon, 8 Apr 2019 11:05:22 -0700
Subject: [gpfsug-discuss] Registration DEADLINE April 9 - Spectrum Scale UG meeting, April 16-17th, NCAR, Boulder
In-Reply-To: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0 at nuance.com> References: <5EEB5BA9-562F-45DE-A534-F01DBFB41FE0 at nuance.com> Message-ID: <1963C600-822D-4755-8C6F-AF03B5E67162 at lbl.gov>

Do you like free food? OK, maybe your school days are long gone, but who doesn't like free food? We need to give the catering folks a head count, so we will close registration tomorrow evening, April 9. So register now for the Boulder GPFS/Spectrum Scale User Group Event (link and agenda below). This is your chance to give IBM feedback and discuss GPFS with your fellow storage admins and IBMers. We'd love to hear your participation in the discussions.

Best,
Kristy

From Robert.Oesterlin at nuance.com Wed Apr 10 15:35:57 2019
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 10 Apr 2019 14:35:57 +0000
Subject: [gpfsug-discuss] Follow-up: ESS File systems
Message-ID: <2B92931E-34F7-4737-A752-BB5A69EA49ED at nuance.com>

I'm trying to finalize my file system configuration for production. I'll be moving 3-3.5B files from my legacy storage to ESS (about 1.8PB). The legacy file systems are block size 256k, 8k subblocks. The target ESS is a GL4 with 8TB drives (2.2PB using 8+2p). For file systems configured on the ESS, the vdisk block size must equal the file system block size. Using 8+2p, the smallest block size is 512K.
Looking at the overall file size histogram, a block size of 1MB might be a good compromise in efficiency and sub block size (32k subblock). With 4K inodes, somewhere around 60-70% of the current files end up in inodes. Of the files in the range 4k-32K, those are the ones that would potentially ?waste? some space because they are smaller than the sub block but too big for an inode. That?s roughly 10-15% of the files. This ends up being a compromise because of our inability to use the V5 file system format (clients still at CentOS 6/Scale 4.2.3). For metadata, the file systems are currently using about 15TB of space (replicated, across roughly 1.7PB usage). This represents a mix of 256b and 4k inodes (70% 256b). Assuming a 8x increase the upper limit of needs would be 128TB. Since some of that is already in 4K inodes, I feel an allocation of 90-100 TB (4-5% of data space) is closer to reality. I don?t know if having a separate metadata pool makes sense if I?m using the V4 format, in which the block size of metadata and data is the same. Summary, I think the best options are: Option (1): 2 file systems of 1PB each. 1PB data pool, 50TB system pool, 1MB block size, 2x replicated metadata Option (2): 2 file systems of 1PB each. 1PB data/metadata pool, 1MB block size, 2x replicated metadata (preferred, then I don?t need to manage my metadata space) Any thoughts would be appreciated. Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Wed Apr 10 18:57:32 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 10 Apr 2019 13:57:32 -0400 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: If you're into pondering some more tweaks: -i InodeSize is tunable system pool : --metadata-block-size is tunable separately from -B blocksize On ESS you might want to use different block size and error correcting codes for (v)disks that hold system pool. Generally I think you'd want to set up system pool for best performance for relatively short reads and updates. -------------- next part -------------- An HTML attachment was scrubbed... URL: From TOMP at il.ibm.com Wed Apr 10 21:11:17 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Wed, 10 Apr 2019 23:11:17 +0300 Subject: [gpfsug-discuss] Follow-up: ESS File systems In-Reply-To: References: <2B92931E-34F7-4737-A752-BB5A69EA49ED@nuance.com> Message-ID: Its also important to look into the actual space "wasted" by the "subblock mismatch". For example, a snip from a filehist output I've found somewhere: File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 2M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 1297314 2.65% 0.00% 0.00% 1 34014892 72.11% 0.74% 0.59% 2 2217365 76.64% 0.84% 0.67% 3 1967998 80.66% 0.96% 0.77% 4 798170 82.29% 1.03% 0.83% 5 1518258 85.39% 1.20% 0.96% 6 581539 86.58% 1.27% 1.02% 7 659969 87.93% 1.37% 1.10% 8 1178798 90.33% 1.58% 1.27% 9 189220 90.72% 1.62% 1.30% 10 130197 90.98% 1.64% 1.32% So, 72% of the files are smaller then 1 subblock ( 2M in the above case BTW). 
If, for example, we'll double it - we will "waste" ~76% of the files, and if we'll push it to 16M it will be ~90% of the files... But we really care about capacity, right? So, going into the 16M extreme, we'll "waste" 1.58% of the capacity (worst case, of course). So, if it will give you (highly depends on the workload, of course) 4X the performance (just for the sake of discussion) - will it be OK to pay the 1.5% "premium"?

Regards,

Tomer Perry
Scalable I/O Development (Spectrum Scale)
email: tomp at il.ibm.com
1 Azrieli Center, Tel Aviv 67021, Israel
Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625

From TOMP at il.ibm.com Wed Apr 10 21:19:15 2019
From: TOMP at il.ibm.com (Tomer Perry)
Date: Wed, 10 Apr 2019 23:19:15 +0300
Subject: Re: [gpfsug-discuss] Follow-up: ESS File systems
In-Reply-To: References: Message-ID:

Just to clarify - it's a 2M block size, so a 64k subblock size.

Regards,

Tomer Perry
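To put rough numbers on that trade-off, here is a small back-of-the-envelope sketch in plain shell plus awk. The size/count bins are made-up examples, not the filehist data quoted above, and data-in-inode placement of very small files is ignored, so real Spectrum Scale numbers should look better than this worst case.

# Estimate capacity lost to subblock rounding for a candidate subblock size.
# Input lines are "file_size_bytes file_count" bins (hypothetical values).
SUBBLOCK=$((32*1024))   # e.g. a 1M block size with 32 subblocks per block

awk -v sb="$SUBBLOCK" '
  { size=$1; count=$2
    used      += size*count
    nsub       = int((size+sb-1)/sb); if (nsub<1) nsub=1   # round up to whole subblocks
    allocated += nsub*sb*count }
  END { printf "used %.1f GiB  allocated %.1f GiB  overhead %.2f%%\n",
        used/2^30, allocated/2^30, 100*(allocated-used)/allocated }
' <<'EOF'
2000     30000000
40000     4000000
500000    2000000
8000000   1000000
EOF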
From jose.filipe.higino at gmail.com Fri Apr 12 11:52:21 2019
From: jose.filipe.higino at gmail.com (José Filipe Higino)
Date: Fri, 12 Apr 2019 22:52:21 +1200
Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster
In-Reply-To: References: Message-ID:

Does this not depend on the license type? Being licensed by data gives you the ability to spin up as many client nodes as possible... including in the ESS cluster, right?

On Fri, 12 Apr 2019 at 21:38, Daniel Kidger wrote:

> Yes, I am aware of the FAQ, and in particular Q13.17, which says:
>
> *No, systems from OEM vendors are considered distinct products even when they embed IBM Spectrum Scale. They cannot be part of the same cluster as IBM licenses.*
>
> But if this statement is taken literally, then once a customer has bought, say, a Lenovo GSS/DSS-G, they are then "locked-in" to buying more storage from other OEM/ESA partners (Lenovo, Bull, DDN, etc.), as the above statement suggests that they cannot add IBM storage such as ESS to their GPFS cluster.
> Daniel
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16a111c26babd5baef61.gif Type: image/gif Size: 105 bytes Desc: not available URL: From jose.filipe.higino at gmail.com Fri Apr 12 14:11:59 2019 From: jose.filipe.higino at gmail.com (=?UTF-8?Q?Jos=C3=A9_Filipe_Higino?=) Date: Sat, 13 Apr 2019 01:11:59 +1200 Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster In-Reply-To: References: Message-ID: got it now. Sorry, I miss understood that. I was already aware. =) On Fri, 12 Apr 2019 at 23:35, Daniel Kidger wrote: > Jose, > I was not considering client nodes at all. > Under the current license models, all licenses are capacity based (in two > flavours: per-TiB or per-disk), and so adding new clients is never a > licensing issue. > My point was that if you own an OEM supplied cluster from say Lenovo, you > can add to that legally from many vendors , just not from IBM themselves. > (or maybe the FAQ rules need further clarification?) > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "Jos? Filipe Higino" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Fri, Apr 12, 2019 11:52 AM > > Does not this depend on the License type... > > Being licensed by data... gives you the ability to spin as much client > nodes as possible... including to the ESS cluster right? > > On Fri, 12 Apr 2019 at 21:38, Daniel Kidger > wrote: > > > Yes I am aware of the FAQ, and it particular Q13.17 which says: > > *No, systems from OEM vendors are considered distinct products even when > they embed IBM Spectrum Scale. They cannot be part of the same cluster as > IBM licenses.* > > But if this statement is taken literally, then once a customer has bought > say a Lenovo GSS/DSS-G, they are then "locked-in" to buying more storage > other OEM/ESA partners (Lenovo, Bull, DDN, etc.), as above statement > suggests that they cannot add IBM storage such as ESS to their GPFS cluster. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > +44-(0)7818 522 266 > daniel.kidger at uk.ibm.com > > > > > > > > > ----- Original message ----- > From: "RICHARD RUPP" > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug main discussion list > Cc: > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Date: Sun, Apr 7, 2019 4:49 PM > > > *This has been publically documented in the Spectrum Scale FAQ Q13.17, > Q13.18 and Q13.19.* > > Regards, > > *Richard Rupp*, Sales Specialist, *Phone:* *1-347-510-6746* > > > [image: Inactive hide details for "Daniel Kidger" ---04/06/2019 10:12:12 > AM---There is a non-technical issue you may need to consider.]"Daniel > Kidger" ---04/06/2019 10:12:12 AM---There is a non-technical issue you may > need to consider. 
IBM has set licensing rules about mixing in > > From: "Daniel Kidger" > To: "gpfsug main discussion list" > Date: 04/06/2019 10:12 AM > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > There is a non-technical issue you may need to consider. > IBM has set licensing rules about mixing in the same Spectrum Scale > cluster both ESS from IBM and 3rd party storage that is licensed under > ESA/OEM (Lenovo, DDN, Bull, Pixit et al.). > > I am sure Carl Zetie or other IBMers who watch this list can explain the > exact restrictions. > > Daniel > > _________________________________________________________ > *Daniel Kidger* > IBM Technical Sales Specialist > Spectrum Scale, Spectrum NAS and IBM Cloud Object Store > > *+* <+44-7818%20522%20266>*44-(0)7818 522 266* <+44-7818%20522%20266> > *daniel.kidger at uk.ibm.com* > > > > On 3 Apr 2019, at 19:47, Sanchez, Paul <*Paul.Sanchez at deshaw.com* > > wrote: > > - > - > - > - note though you can't have GNR based vdisks (ESS/DSS-G) in > the same storage pool. > > At one time there was definitely a warning from IBM in the docs > about not mixing big-endian and little-endian GNR in the same > cluster/filesystem. But at least since Nov 2017, IBM has published videos > showing clusters containing both. (In my opinion, they had to support this > because they changed the endian-ness of the ESS from BE to LE.) > > I don't know about all ancillary components (e.g. GUI) but as for > Scale itself, I can confirm that filesystems can contain NSDs which are > provided by ESS(BE), ESS(LE), GSS, and DSS in all combinations, along with > SAN storage based NSD servers. We typically do rolling upgrades of GNR > building blocks by adding blocks to an existing cluster, emptying and > removing the existing blocks, upgrading those in isolation, then repeating > with the next cluster. As a result, we have had every combination in play > at some point in time. Care just needs to be taken with nodeclass naming > and mmchconfig parameters. (We derive the correct params for each new > building block from its final config after upgrading/testing it in > isolation.) > > -Paul > > -----Original Message----- > From: *gpfsug-discuss-bounces at spectrumscale.org* > < > *gpfsug-discuss-bounces at spectrumscale.org* > > On Behalf Of Simon > Thompson > Sent: Wednesday, April 3, 2019 12:18 PM > To: gpfsug main discussion list <*gpfsug-discuss at spectrumscale.org* > > > Subject: Re: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We have DSS-G (Lenovo equivalent) in the same cluster as other > SAN/IB storage (IBM, DDN). But we don't have them in the same file-system. > > In theory as a different pool it should work, note though you can't > have GNR based vdisks (ESS/DSS-G) in the same storage pool. > > And if you want to move to new block size or v5 variable sunblocks > then you are going to have to have a new filesystem and copy data. So it > depends what your endgame is really. We just did such a process and one of > my colleagues is going to talk about it at the London user group in May. 
> > Simon > ________________________________________ > From: *gpfsug-discuss-bounces at spectrumscale.org* > [ > *gpfsug-discuss-bounces at spectrumscale.org* > ] on behalf of > *prasad.surampudi at theatsgroup.com* > [ > *prasad.surampudi at theatsgroup.com* > ] > Sent: 03 April 2019 17:12 > To: *gpfsug-discuss at spectrumscale.org* > > Subject: [gpfsug-discuss] Adding ESS to existing Scale Cluster > > We are planning to add an ESS GL6 system to our existing Spectrum > Scale cluster. Can the ESS nodes be added to existing scale cluster without > changing existing cluster name? Or do we need to create a new scale cluster > with ESS and import existing filesystems into the new ESS cluster? > > Prasad Surampudi > Sr. Systems Engineer > The ATS Group > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at *spectrumscale.org* > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss > * > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.16a111c26babd5baef61.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Fri Apr 12 19:59:45 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 12 Apr 2019 18:59:45 +0000 Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Message-ID: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Anyone care to tell me why this is failing or how I can do further debug. Cluster is otherwise healthy. 
Bob Oesterlin Sr Principal Storage Engineer, Nuance Time Cluster Name Reporting Node Event Name Entity Type Entity Name Severity Message 12.04.2019 13:03:26.429 nrg.gssio1-hs ems1-hs gui_refresh_task_failed NODE ems1-hs WARNING The following GUI refresh task(s) failed: FILESETS -------------- next part -------------- An HTML attachment was scrubbed... URL: From PPOD at de.ibm.com Fri Apr 12 20:05:54 2019 From: PPOD at de.ibm.com (Przemyslaw Podfigurny1) Date: Fri, 12 Apr 2019 19:05:54 +0000 Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS In-Reply-To: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> References: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962140.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962141.png Type: image/png Size: 6645 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15550956962142.png Type: image/png Size: 1167 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Fri Apr 12 20:18:20 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 12 Apr 2019 19:18:20 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: FW: gui_refresh_task_failed : FILESETS In-Reply-To: References: <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: Ah - failing because it?s checking the remote file system for information - how do I disable that? root at ems1 ~]# /usr/lpp/mmfs/gui/cli/runtask filesets --debug debug: locale=en_US debug: Running 'mmlsfileset 'fs1' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=fs1 group_by gpfs_fset_name last 13 bucket_size 300' debug: Running 'mmlsfileset 'fs1test' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=fs1test group_by gpfs_fset_name last 13 bucket_size 300' debug: Running 'mmlsfileset 'nrg5_tools' -Y ' on node localhost debug: Running zimon query: 'get -ja metrics max(gpfs_fset_maxInodes),max(gpfs_fset_freeInodes),max(gpfs_fset_allocInodes),max(gpfs_rq_blk_current),max(gpfs_rq_file_current) from gpfs_fs_name=tools group_by gpfs_fset_name last 13 bucket_size 300' on remote cluster nrg5-gpfs.nrg5-gpfs01 err: com.ibm.fscc.zimon.unified.ZiMONException: Remote access is not configured debug: Will not raise the following event using 'mmsysmonc' since it already exists in the database: reportingNode = 'ems1-hs', eventName = 'gui_refresh_task_failed', entityId = '3', arguments = 'FILESETS', identifier = 'null' err: com.ibm.fscc.zimon.unified.ZiMONException: Remote access is not configured err: com.ibm.fscc.cli.CommandException: EFSSG1150C Running specified task was unsuccessful. 
at com.ibm.fscc.cli.CommandException.createCommandException(CommandException.java:117) at com.ibm.fscc.newcli.commands.task.CmdRunTask.doExecute(CmdRunTask.java:84) at com.ibm.fscc.newcli.internal.AbstractCliCommand.execute(AbstractCliCommand.java:156) at com.ibm.fscc.cli.CliProtocol.processNewStyleCommand(CliProtocol.java:460) at com.ibm.fscc.cli.CliProtocol.processRequest(CliProtocol.java:446) at com.ibm.fscc.cli.CliServer$CliClientServer.run(CliServer.java:97) EFSSG1150C Running specified task was unsuccessful. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: on behalf of Przemyslaw Podfigurny1 Reply-To: gpfsug main discussion list Date: Friday, April 12, 2019 at 2:06 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: [EXTERNAL] Re: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Execute the refresh task with debug option enabled on your GUI node ems1-hs to see what is the cause: /usr/lpp/mmfs/gui/cli/runtask filesets --debug Mit freundlichen Gr??en / Kind regards [cid:15550956962140] [IBM Spectrum Scale] ? ? Przemyslaw Podfigurny Software Engineer, Spectrum Scale GUI Department M069 / Spectrum Scale Software Development +49 7034 274 5403 (Office) +49 1624 159 497 (Mobile) ppod at de.ibm.com [cid:15550956962142] IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 ----- Original message ----- From: "Oesterlin, Robert" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] FW: gui_refresh_task_failed : FILESETS Date: Fri, Apr 12, 2019 9:00 PM Anyone care to tell me why this is failing or how I can do further debug. Cluster is otherwise healthy. Bob Oesterlin Sr Principal Storage Engineer, Nuance Time Cluster Name Reporting Node Event Name Entity Type Entity Name Severity Message 12.04.2019 13:03:26.429 nrg.gssio1-hs ems1-hs gui_refresh_task_failed NODE ems1-hs WARNING The following GUI refresh task(s) failed: FILESETS _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1168 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 6646 bytes Desc: image002.png URL: From sandeep.patil at in.ibm.com Mon Apr 15 09:54:05 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Mon, 15 Apr 2019 14:24:05 +0530 Subject: [gpfsug-discuss] IBM Spectrum Scale Security Survey Message-ID: bcc: gpfsug-discuss at spectrumscale.org Dear Spectrum Scale User, Below is a survey link where we are seeking feedback to improve and enhance IBM Spectrum Scale. This is an anonymous survey and your participation in this survey is completely voluntary. IBM Spectrum Scale Cyber Security Survey https://www.surveymonkey.com/r/9ZNCZ75 (Average time of 4 mins with 10 simple questions). Your response is invaluable to us. Thank you and looking forward for your participation. Regards IBM Spectrum Scale Team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From PPOD at de.ibm.com Mon Apr 15 10:18:00 2019 From: PPOD at de.ibm.com (Przemyslaw Podfigurny1) Date: Mon, 15 Apr 2019 09:18:00 +0000 Subject: [gpfsug-discuss] [EXTERNAL] Re: FW: gui_refresh_task_failed : FILESETS In-Reply-To: References: , <4DCF59C6-909D-4F2F-8282-A577511B2535@nuance.com> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530160.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530161.png Type: image/png Size: 6645 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15553195530162.png Type: image/png Size: 1167 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image001.png at 01D4F13A.94743D30.png Type: image/png Size: 1168 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.image002.png at 01D4F13A.94743D30.png Type: image/png Size: 6646 bytes Desc: not available URL: From prasad.surampudi at theatsgroup.com Tue Apr 16 13:38:34 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Tue, 16 Apr 2019 12:38:34 +0000 Subject: [gpfsug-discuss] Spectrum Scale Replication across failure groups In-Reply-To: References: , Message-ID: We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Tue Apr 16 14:15:30 2019 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 16 Apr 2019 09:15:30 -0400 Subject: [gpfsug-discuss] Spectrum Scale Replication across failure groups In-Reply-To: References: Message-ID: I believe that -1 is "special", in that all -1?s are different form each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn?t specify -1, it was likely assigned automatically. 
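A minimal sketch of how one might check and then act on that, assuming a hypothetical file system fs1 whose maximum data replication (-R at creation time) is already 2 or more:

  # the listing includes a "failure group" column, so the -1 assignments are easy to spot
  mmlsdisk fs1 -L
  # once the -1 disks have been given real failure groups (mmchdisk ... change),
  # set the default number of data replicas to 2 ...
  mmchfs fs1 -r 2
  # ... and rewrite existing files so their blocks get copies in two failure groups
  mmrestripefs fs1 -R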
Read the first paragraph in the failureGroup entry in: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_mmcrnsd.htm I do realize that the subsequent paragraphs do confuse the issue somewhat, but the first paragraph describes what?s happening. Liberty, -- Stephen > On Apr 16, 2019, at 8:38 AM, Prasad Surampudi > wrote: > > We have a filesystem with 'system' and 'v7kdata' pools. All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? > > > mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com > Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Tue Apr 16 14:48:47 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Tue, 16 Apr 2019 09:48:47 -0400 Subject: [gpfsug-discuss] Spectrum Scale Replication across failuregroups In-Reply-To: References: Message-ID: I think it would be wise to first set the failure group on the existing NSDs to a valid value and not use -1. I would also suggest you not use consecutive numbers like 1 and 2 but something with some distance between them, for example 10 and 20, or 100 and 200. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Stephen Ulmer To: gpfsug main discussion list Cc: "gpfsug-discuss-request at spectrumscale.org" Date: 04/16/2019 09:18 AM Subject: Re: [gpfsug-discuss] Spectrum Scale Replication across failure groups Sent by: gpfsug-discuss-bounces at spectrumscale.org I believe that -1 is "special", in that all -1?s are different form each other. So you will wind up with data on several -1 NSDs, instead of a -1 and a 2. In fact you probably didn?t specify -1, it was likely assigned automatically. Read the first paragraph in the failureGroup entry in: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_mmcrnsd.htm I do realize that the subsequent paragraphs do confuse the issue somewhat, but the first paragraph describes what?s happening. Liberty, -- Stephen On Apr 16, 2019, at 8:38 AM, Prasad Surampudi < prasad.surampudi at theatsgroup.com> wrote: We have a filesystem with 'system' and 'v7kdata' pools. 
All the NSDs in v7kdata are with failure group '-1'. Filesystem metadata is already replicated. Now we are planning to replicate the filesystem data. So, If I add new NSDs with failure group '2' in the v7kdata pool, would I be able to replicate GPFS data between NSDs with '-1' failure group and NSDs with failure group '2' ? i.e have one copy of file on NSD with '-1' and another copy on NSD with failure group '2' ? or Do I have to change the NSDs with failure group '-1' to '1' ? mobile 302.419.5833|fax 484.320.4306|psurampudi at theATSgroup.com Galileo Performance Explorer Blog Offers Deep Insights for Server/Storage Systems _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=qj8cjidW9IKqym8U4WV2Buxy_hsl7bpmELnPNc8MYPg&s=hNTiNvPnIYhBCgPOm2NLtq9vP1MIVCipuIA8snw7Eg4&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at markomanolis.com Thu Apr 18 16:16:52 2019 From: george at markomanolis.com (George Markomanolis) Date: Thu, 18 Apr 2019 11:16:52 -0400 Subject: [gpfsug-discuss] IO500 - Call for Submission for ISC-19 Message-ID: Dear all, Please consider the submission of results to the new list. *Deadline*: 10 June 2019 AoE The IO500 is now accepting and encouraging submissions for the upcoming 4th IO500 list to be revealed at ISC-HPC 2019 in Frankfurt, Germany. Once again, we are also accepting submissions to the 10 node I/O challenge to encourage submission of small scale results. The new ranked lists will be announced at our ISC19 BoF [2]. We hope to see you, and your results, there. The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit and we look forward to seeing many of you at ISC 2019! Please note that submissions of all size are welcome; the site has customizable sorting so it is possible to submit on a small system and still get a very good per-client score for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below. Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017, published its first list at SC17, and has grown exponentially since then. The need for such an initiative has long been known within High-Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking. The multi-fold goals of the benchmark suite are as follows: 1. Maximizing simplicity in running the benchmark suite 2. Encouraging complexity in tuning for performance 3. Allowing submitters to highlight their ?hero run? performance numbers 4. Forcing submitters to simultaneously report performance for challenging IO patterns. 
Specifically, the benchmark suite includes a hero-run of both IOR and mdtest configured however possible to maximize performance and establish an upper-bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower-bound. Finally, it includes a namespace search as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well-measured. Submitters are encouraged to share their tuning insights for publication. The goals of the community are also multi-fold: 1. Gather historical data for the sake of analysis and to aid predictions of storage futures 2. Collect tuning information to share valuable performance optimizations across the community 3. Encourage vendors and designers to optimize for workloads beyond ?hero runs? 4. Establish bounded expectations for users, procurers, and administrators Edit 10 Node I/O Challenge At ISC, we will announce our second IO-500 award for the 10 Node Challenge. This challenge is conducted using the regular IO-500 benchmark, however, with the rule that exactly *10 computes nodes* must be used to run the benchmark (one exception is find, which may use 1 node). You may use any shared storage with, e.g., any number of servers. When submitting for the IO-500 list, you can opt-in for ?Participate in the 10 compute node challenge only?, then we won't include the results into the ranked list. Other 10 compute node submission will be included in the full list and in the ranked list. We will announce the result in a separate derived list and in the full list but not on the ranked IO-500 list at io500.org. Edit Birds-of-a-feather Once again, we encourage you to submit [1], to join our community, and to attend our BoF ?The IO-500 and the Virtual Institute of I/O? at ISC 2019 [2] where we will announce the fourth IO500 list and second 10 node challenge list. The current list includes results from BeeGPFS, DataWarp, IME, Lustre, Spectrum Scale, and WekaIO. We hope that the next list has even more. We look forward to answering any questions or concerns you might have. - [1] http://io500.org/submission - [2] The BoF schedule will be announced soon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marc.caubet at psi.ch Thu Apr 18 16:32:58 2019 From: marc.caubet at psi.ch (Caubet Serrabou Marc (PSI)) Date: Thu, 18 Apr 2019 15:32:58 +0000 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Message-ID: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 
nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Thu Apr 18 16:54:18 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Thu, 18 Apr 2019 11:54:18 -0400 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch> Message-ID: We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. On which node did you collect the waiters and what command did you run to capture the data? What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. 
Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=dHk9lhiQqWEszuFxOcyajfLhFM0xLk7rMkdNNNQOuyQ&s=HTJYxe-mxXg7paKH_AWo3OU8-A_YHvpotkB9f0h2amg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From marc.caubet at psi.ch Thu Apr 18 18:41:45 2019 From: marc.caubet at psi.ch (Caubet Serrabou Marc (PSI)) Date: Thu, 18 Apr 2019 17:41:45 +0000 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>, Message-ID: <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> Hi, thanks a lot. 
About the requested information:

* Waiters were captured with the command 'mmdiag --waiters', and this was done on one of the IO (NSD) nodes.
* The connection between the storage and client clusters is InfiniBand EDR. The GPFS client cluster has 3 chassis, each with 24 blades and an unmanaged EDR switch (24 ports for the blades, 12 external), and currently 10 EDR external ports are connected for external connectivity. The GPFS storage cluster has 2 IO nodes (as mentioned in the previous e-mail, DSS G240), and each IO node has 4 x EDR ports connected. Regarding the InfiniBand connectivity, my network contains 2 top EDR managed switches configured with up/down routing, connecting the unmanaged switches from the chassis and the 2 managed InfiniBand switches for the storage (for redundancy).

Whenever needed I can go through a PMR if this would ease the debugging, no problem for me. I was mainly wondering about the meaning of "waiting for helper threads" and what could be the reason for it.

Thanks a lot for your help and best regards,
Marc
_________________________________________
Paul Scherrer Institut
High Performance Computing
Marc Caubet Serrabou
Building/Room: WHGA/019A
Forschungsstrasse, 111
5232 Villigen PSI
Switzerland

Telephone: +41 56 310 46 67
E-Mail: marc.caubet at psi.ch
________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com]
Sent: Thursday, April 18, 2019 5:54 PM
To: gpfsug main discussion list
Cc: gpfsug-discuss-bounces at spectrumscale.org
Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads'

We can try to provide some guidance on what you are seeing, but generally, for true analysis of performance issues customers should contact IBM Lab Based Services (LBS). We need some additional information to understand what is happening.

* On which node did you collect the waiters, and what command did you run to capture the data?
* What is the network connection between the remote cluster and the storage cluster?

Regards, The Spectrum Scale (GPFS) team
------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.
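For gathering a bit more evidence before opening a PMR, a sketch of the snapshots that are typically taken on an NSD/IO server while the gpfsperf run is active (standard mmdiag options; interpreting the output is of course case-specific):

  # long-running threads/RPCs, sampled a few times during the run
  mmdiag --waiters
  # recent I/O history, which shows whether individual disk I/Os are the ones
  # taking seconds or whether the time is spent queueing in front of them
  mmdiag --iohist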
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 
192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Thu Apr 18 21:55:25 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Thu, 18 Apr 2019 16:55:25 -0400 Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' In-Reply-To: <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> References: <0081EB235765E14395278B9AE1DF34180A86B2A7@MBX214.d.ethz.ch>, <0081EB235765E14395278B9AE1DF34180A86B2D4@MBX214.d.ethz.ch> Message-ID: Thanks for the information. Since the waiters information is from one of the IO servers then the threads waiting for IO should be waiting for actual IO requests to the storage. Seeing IO operations taking seconds long generally indicates your storage is not working optimally. We would expect IOs to complete in sub-second time, as in some number of milliseconds. You are using a record size of 16M yet you stated the file system block size is 1M. Is that really what you wanted to test? Also, you have included the -fsync option to gpfsperf which will impact the results. Have you considered using the nsdperf program instead of the gpfsperf program? You can find nsdperf in the samples/net directory. One last thing I noticed was in the configuration of your management node. It showed the following. [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k To my understanding the management node has no direct access to the storage, that is any IO requests to the file system from the management node go through the IO nodes. That being true GPFS will not make use of NSD worker threads on the management node. As you can see your configuration is creating 3K NSD worker threads and none will be used so you might want to consider changing that value to 1. It will not change your performance numbers but it should free up a bit of memory on the management node. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
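A possible re-run along the lines suggested above, with the same gpfsperf options used earlier in this thread, only the record size changed to match the 1 MiB file system block size and -fsync dropped (whether that matches the workload you actually want to model is a separate question):

  # per client, as before, but with a 1 MiB record size and without -fsync
  /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -n 24g -r 1m -th 8

  # nsdperf lives in /usr/lpp/mmfs/samples/net and has to be built there first;
  # it exercises the client-to-NSD-server network path without touching the disks,
  # so it helps separate network limits from storage limits (see the README
  # shipped alongside it for the exact invocation)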
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Cc: "gpfsug-discuss-bounces at spectrumscale.org" Date: 04/18/2019 01:45 PM Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, thanks a lot. About the requested information: * Waiters were captured with the command 'mmdiag --waiters', and it was performed on one of the IO (NSD) nodes. * Connection between storage and client clusters is with Infiniband EDR. For the GPFS client cluster we have 3 chassis, each one has 24 blades with unmanaged EDR switch (24 for the blades, 12 external), and currently 10 EDR external ports are connected for external connectivity. On the other hand, the GPFS storage cluster has 2 IO nodes (as commented in the previous e-mail, DSS G240). Each IO node has connected 4 x EDR ports. Regarding the Infiniband connectivty, my network contains 2 top EDR managed switches configured with up/down routing, connecting the unmanaged switches from the chassis and the 2 managed Infiniband switches for the storage (for redundancy). Whenever needed I can go through PMR if this would easy the debug, no problem for me. I was wondering about the meaning "waiting for helper threads" and what could be the reason for that Thanks a lot for your help and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of IBM Spectrum Scale [scale at us.ibm.com] Sent: Thursday, April 18, 2019 5:54 PM To: gpfsug main discussion list Cc: gpfsug-discuss-bounces at spectrumscale.org Subject: Re: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' We can try to provide some guidance on what you are seeing but generally to do true analysis of performance issues customers should contact IBM lab based services (LBS). We need some additional information to understand what is happening. On which node did you collect the waiters and what command did you run to capture the data? What is the network connection between the remote cluster and the storage cluster? Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479 . If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
From: "Caubet Serrabou Marc (PSI)" To: gpfsug main discussion list Date: 04/18/2019 11:41 AM Subject: [gpfsug-discuss] Performance problems + (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I would like to have some hints about the following problem: Waiting 26.6431 sec since 17:18:32, ignored, thread 38298 NSPDDiscoveryRunQueueThread: on ThCond 0x7FC98EB6A2B8 (MultiThreadWorkInstanceCond), reason 'waiting for helper threads' Waiting 2.7969 sec since 17:18:55, monitored, thread 39736 NSDThread: for I/O completion Waiting 2.8024 sec since 17:18:55, monitored, thread 39580 NSDThread: for I/O completion Waiting 3.0435 sec since 17:18:55, monitored, thread 39448 NSDThread: for I/O completion I am testing a new GPFS cluster (GPFS cluster client with computing nodes remotely mounting the Storage GPFS Cluster) and I am running 65 gpfsperf commands (1 command per client in parallell) as follows: /usr/lpp/mmfs/samples/perf/gpfsperf create seq /gpfs/home/caubet_m/gpfsperf/$(hostname).txt -fsync -n 24g -r 16m -th 8 I am unable to reach more than 6.5GBps (Lenovo DSS G240 GPFS 5.0.2-1, on a testing a 'home' filesystem with 1MB blocksize and subblocks of 8KB). After several seconds I see many waiters for I/O completion (up to 5 seconds) and also the 'waiting for helper threads' message shown above. Can somebody explain me the meaning for this message? How could I improve that? Current config in the storage cluster is: [root at merlindssio02 ~]# mmlsconfig Configuration data for cluster merlin.psi.ch: --------------------------------------------- clusterName merlin.psi.ch clusterId 1511090979434548295 autoload no dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes nsdRAIDFirmwareDirectory /opt/lenovo/dss/firmware cipherList AUTHONLY maxblocksize 16m [merlindssmgt01] ignorePrefetchLUNCount yes [common] pagepool 4096M [merlindssio01,merlindssio02] pagepool 270089M [merlindssmgt01,dssg] pagepool 57684M maxBufferDescs 2m numaMemoryInterleave yes [common] prefetchPct 50 [merlindssmgt01,dssg] prefetchPct 20 nsdRAIDTracks 128k nsdMaxWorkerThreads 3k nsdMinWorkerThreads 3k nsdRAIDSmallThreadRatio 2 nsdRAIDThreadsPerQueue 16 nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 nsdRAIDFlusherFWLogHighWatermarkMB 1000 nsdRAIDBlockDeviceMaxSectorsKB 0 nsdRAIDBlockDeviceNrRequests 0 nsdRAIDBlockDeviceQueueDepth 0 nsdRAIDBlockDeviceScheduler off nsdRAIDMaxPdiskQueueDepth 128 nsdMultiQueue 512 verbsRdma enable verbsPorts mlx5_0/1 mlx5_1/1 verbsRdmaSend yes scatterBufferSize 256K maxFilesToCache 128k maxMBpS 40000 workerThreads 1024 nspdQueues 64 [common] subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch adminMode central File systems in cluster merlin.psi.ch: -------------------------------------- /dev/home /dev/t16M128K /dev/t16M16K /dev/t1M8K /dev/t4M16K /dev/t4M32K /dev/test And for the computing cluster: [root at merlin-c-001 ~]# mmlsconfig Configuration data for cluster merlin-hpc.psi.ch: ------------------------------------------------- clusterName merlin-hpc.psi.ch clusterId 14097036579263601931 autoload yes dmapiFileHandleSize 32 minReleaseLevel 5.0.2.0 ccrEnabled yes cipherList AUTHONLY maxblocksize 16M numaMemoryInterleave yes maxFilesToCache 128k maxMBpS 20000 workerThreads 1024 verbsRdma enable verbsPorts mlx5_0/1 verbsRdmaSend yes scatterBufferSize 256K ignorePrefetchLUNCount yes nsdClientCksumTypeLocal ck64 nsdClientCksumTypeRemote ck64 pagepool 32G subnets 192.168.196.0/merlin-hpc.psi.ch;merlin.psi.ch 
adminMode central File systems in cluster merlin-hpc.psi.ch: ------------------------------------------ (none) Thanks a lot and best regards, Marc _________________________________________ Paul Scherrer Institut High Performance Computing Marc Caubet Serrabou Building/Room: WHGA/019A Forschungsstrasse, 111 5232 Villigen PSI Switzerland Telephone: +41 56 310 46 67 E-Mail: marc.caubet at psi.ch_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=YUp1yAfDFGnpxatHqsvM9LzHFt--RrMBCKoQF_Fa_zQ&s=4NBW1TmPGKAkvbymtK2QWCnLnBp-S0AVmEJxT2H1z0k&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkachwala at ddn.com Tue Apr 23 13:25:41 2019 From: tkachwala at ddn.com (Taizun Kachwala) Date: Tue, 23 Apr 2019 12:25:41 +0000 Subject: [gpfsug-discuss] Hi from Taizun (DDN Storage @Pune, India) Message-ID: Hi, My name is Taizun and I lead the effort of developing & supporting DDN Solution using IBM GPFS/Spectrum Scale as an Embedded application stack making it a converged infrastructure using DDN Storage Fusion Architecture (SFA) appliances (GS18K, GS14K, GS400NV/200NV and GS 7990) and also as an independent product solution that can be deployed on bare metal servers as NSD server or client role. Our solution is mainly targeted towards HPC customers in AI, Analytics, BigData, High-Performance File-Server, etc. We support 4.x as well as 5.x SS product-line on CentOS & RHEL respectively. Thanks & Regards, Taizun Kachwala Lead SDET, DDN India +91 98222 07304 +91 95118 89204 -------------- next part -------------- An HTML attachment was scrubbed... URL: From prasad.surampudi at theatsgroup.com Tue Apr 23 17:14:24 2019 From: prasad.surampudi at theatsgroup.com (Prasad Surampudi) Date: Tue, 23 Apr 2019 16:14:24 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 In-Reply-To: References: Message-ID: I am trying to analyze a filehist report of a Spectrum Scale filesystem I recently collected. Given below is the data and I have put my interpretation in parentheses. Could someone from Sale development review and let me know if my interpretation is correct? Filesystem block size is 16 MB and system pool block size is 256 KB. GPFS Filehist report for Test Filesystem All: Files = 38,808,641 (38 Million Total Files) All: Files in inodes = 8153748 Available space = 1550139596472320 1550140 GB 1550 TB Total Size of files = 1110707126790022 Total Size of files in inodes = 26008177568 Total Space = 1123175375306752 1123175 GB 1123 TB Largest File = 3070145200128 - ( 2.8 TB) Average Size = 28620098 ? ( 27 MB ) Non-zero: Files = 38642491 Average NZ size = 28743155 Directories = 687233 (Total Number of Directories) Directories in inode = 650552 Total Dir Space = 5988433920 Avg Entries per dir = 57.5 (Avg # files per Directory) Files with indirect blocks = 181003 File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. 
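One piece of arithmetic that may help when reading the histograms below: the first histogram is indexed by subblocks and tops out at 32 subblocks per 16 MB block, i.e. a 512 KB subblock, which is the step size behind the size ranges given in parentheses (plain shell arithmetic, not a Scale command):

  # 16 MiB block / 32 subblocks = 512 KiB per subblock
  echo $(( 16 * 1024 / 32 ))    # prints 512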
Histogram of files <= one 16M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 7,669,346 19.76% 0.00% 0.00% ( ~7 Million files <= 512 KB ) 1 25,548,588 85.59% 1.19% 0.86% - ( ~25 Million files > 512 KB <= 1 MB ) 2 1,270,115 88.87% 1.31% 0.95% - (~1 Million files > 1 MB <= 1.5 MB ) .... .... .... 32 10387 97.37% 2.43% 1.76% Histogram of files with N 16M blocks (plus end fragment) Blocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 1 177550 97.82% 2.70% 1.95% ( ~177 K files <= 16 MB) .... .... .... 100 640 99.77% 17.31% 12.54% Number of files with more than 100 16M blocks 101+ 88121 100.00% 100.00% 72.46% ( ~88 K files > 1600 MB) -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Thu Apr 25 16:55:24 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale UG Chair)) Date: Thu, 25 Apr 2019 16:55:24 +0100 Subject: [gpfsug-discuss] (no subject) Message-ID: An HTML attachment was scrubbed... URL: From luke.raimbach at googlemail.com Thu Apr 25 19:29:04 2019 From: luke.raimbach at googlemail.com (Luke Raimbach) Date: Thu, 25 Apr 2019 19:29:04 +0100 Subject: [gpfsug-discuss] (no subject) In-Reply-To: References: Message-ID: Pop me down for a spot old bean. Make sure IBM put on good sandwiches! On Thu, 25 Apr 2019, 16:55 Simon Thompson (Spectrum Scale UG Chair), < chair at spectrumscale.org> wrote: > It's just a few weeks until the UK/Worldwide Spectrum Scale user group in > London on 8th/9th May 2019. > > As we need to confirm numbers for catering, we'll be closing registration > on 1st May. > > If you plan to attend, please register via: > > https://www.spectrumscaleug.org/event/uk-user-group-meeting/ > > (I think we have about 10 places left) > > The full agenda is now posted and our evening event is confirmed, thanks > to the support of our sponsors IBM, OCF, e8 storage, Lenovo, DDN and NVIDA. > > Simon > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Fri Apr 26 07:44:58 2019 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 26 Apr 2019 14:44:58 +0800 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 In-Reply-To: References: Message-ID: From my understanding, your interpretation is correct. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. 
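As a quick cross-check of the parenthesized size ranges in the quoted report below: the subblock histogram running from 0 to 32 suggests this file system uses 32 subblocks per 16M block, i.e. a 512 KiB subblock, which is where the 512 KB steps in the interpretation come from. A minimal shell sketch (not an official tool; the counter values are simply copied from the report, and the 32-subblocks-per-block figure is an assumption inferred from the histogram) to redo the unit conversions:

    #!/bin/sh
    # Sketch only: derive the subblock size implied by the filehist histogram
    # and convert two of the byte counters into human-readable units.
    BLOCK=$((16 * 1024 * 1024))   # 16M file system block size
    SUBBLOCKS=32                  # assumed, implied by the 0-32 subblock histogram
    echo "subblock size: $((BLOCK / SUBBLOCKS)) bytes"   # 524288 = 512 KiB

    awk 'BEGIN {
        printf "largest file : %.1f TiB\n", 3070145200128 / 2^40   # ~2.8 TB in the report
        printf "average size : %.1f MiB\n", 28620098 / 2^20        # ~27 MB in the report
    }'

The same arithmetic can be applied to any of the other counters; nothing in the sketch reads the file system itself.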
From: Prasad Surampudi To: "gpfsug-discuss at spectrumscale.org" Date: 04/24/2019 12:17 AM Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 87, Issue 21 Sent by: gpfsug-discuss-bounces at spectrumscale.org I am trying to analyze a filehist report of a Spectrum Scale filesystem I recently collected. Given below is the data and I have put my interpretation in parentheses. Could someone from Sale development review and let me know if my interpretation is correct? Filesystem block size is 16 MB and system pool block size is 256 KB. GPFS Filehist report for Test Filesystem All: Files = 38,808,641 (38 Million Total Files) All: Files in inodes = 8153748 Available space = 1550139596472320 1550140 GB 1550 TB Total Size of files = 1110707126790022 Total Size of files in inodes = 26008177568 Total Space = 1123175375306752 1123175 GB 1123 TB Largest File = 3070145200128 - ( 2.8 TB) Average Size = 28620098 ? ( 27 MB ) Non-zero: Files = 38642491 Average NZ size = 28743155 Directories = 687233 (Total Number of Directories) Directories in inode = 650552 Total Dir Space = 5988433920 Avg Entries per dir = 57.5 (Avg # files per Directory) Files with indirect blocks = 181003 File%ile represents the cummulative percentage of files. Space%ile represents the cummulative percentage of total space used. AvlSpc%ile represents the cummulative percentage used of total available space. Histogram of files <= one 16M block in size Subblocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 0 7,669,346 19.76% 0.00% 0.00% ( ~7 Million files <= 512 KB ) 1 25,548,588 85.59% 1.19% 0.86% - ( ~25 Million files > 512 KB <= 1 MB ) 2 1,270,115 88.87% 1.31% 0.95% - (~1 Million files > 1 MB <= 1.5 MB ) .... .... .... 32 10387 97.37% 2.43% 1.76% Histogram of files with N 16M blocks (plus end fragment) Blocks Count File%ile Space%ile AvlSpc%ile --------- -------- ---------- ---------- ---------- 1 177550 97.82% 2.70% 1.95% ( ~177 K files <= 16 MB) .... .... .... 100 640 99.77% 17.31% 12.54% Number of files with more than 100 16M blocks 101+ 88121 100.00% 100.00% 72.46% ( ~88 K files > 1600 MB) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=IbxtjdkPAM2Sbon4Lbbi4w&m=uBqBwHtxxGncMVk3Suv2icRbZNIqzOgMlfJ6LnIqNhc&s=WdJyzA9yDIx3Cyj6Kg-LvXKTj8ED4J7wm_5wJ6iyccg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From xhejtman at ics.muni.cz Fri Apr 26 13:17:33 2019 From: xhejtman at ics.muni.cz (Lukas Hejtmanek) Date: Fri, 26 Apr 2019 14:17:33 +0200 Subject: [gpfsug-discuss] gpfs and device number Message-ID: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Hello, I noticed that from time to time, device id of a gpfs volume is not same across whole gpfs cluster. [root at kat1 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root at kat2 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2bh/43d Inode: 3 [root at kat3 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? 
Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2ah/42d Inode: 3 this is really bad for kernel NFS as it uses device id for file handles thus NFS failover leads to nfs stale handle error. Is there a way to force a device number? -- Luk?? Hejtm?nek Linux Administrator only because Full Time Multitasking Ninja is not an official job title From TOMP at il.ibm.com Sat Apr 27 20:37:48 2019 From: TOMP at il.ibm.com (Tomer Perry) Date: Sat, 27 Apr 2019 22:37:48 +0300 Subject: [gpfsug-discuss] gpfs and device number In-Reply-To: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> References: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Message-ID: Hi, Please use the fsid option in /etc/exports ( man exports and: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adm_nfslin.htm ) Also check https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adv_cnfs.htm in case you want HA with kernel NFS. Regards, Tomer Perry Scalable I/O Development (Spectrum Scale) email: tomp at il.ibm.com 1 Azrieli Center, Tel Aviv 67021, Israel Global Tel: +1 720 3422758 Israel Tel: +972 3 9188625 Mobile: +972 52 2554625 From: Lukas Hejtmanek To: gpfsug-discuss at spectrumscale.org Date: 26/04/2019 15:37 Subject: [gpfsug-discuss] gpfs and device number Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, I noticed that from time to time, device id of a gpfs volume is not same across whole gpfs cluster. [root at kat1 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 28h/40d Inode: 3 [root at kat2 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2bh/43d Inode: 3 [root at kat3 ~]# stat /gpfs/vol1/ File: ?/gpfs/vol1/? Size: 262144 Blocks: 512 IO Block: 262144 directory Device: 2ah/42d Inode: 3 this is really bad for kernel NFS as it uses device id for file handles thus NFS failover leads to nfs stale handle error. Is there a way to force a device number? -- Luk?? Hejtm?nek Linux Administrator only because Full Time Multitasking Ninja is not an official job title _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=mLPyKeOa1gNDrORvEXBgMw&m=F4TfIKrFl9BVdEAYxZLWlFF-zF-irdwcP9LnGpgiZrs&s=Ice-yo0p955RcTDGPEGwJ-wIwN9F6PvWOpUvR6RMd4M&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeep.patil at in.ibm.com Mon Apr 29 07:42:18 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Mon, 29 Apr 2019 06:42:18 +0000 Subject: [gpfsug-discuss] Latest Technical Blogs on IBM Spectrum Scale (Q1 2019) In-Reply-To: References: Message-ID: Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q1 2019). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Spectrum Scale 5.0.3 https://developer.ibm.com/storage/2019/04/24/spectrum-scale-5-0-3/ IBM Spectrum Scale HDFS Transparency Ranger Support https://developer.ibm.com/storage/2019/04/01/ibm-spectrum-scale-hdfs-transparency-ranger-support/ Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and Sharing Files Globally, http://www.redbooks.ibm.com/abstracts/redp5527.html?Open Spectrum Scale user group in Singapore, 2019 https://developer.ibm.com/storage/2019/03/14/spectrum-scale-user-group-in-singapore-2019/ 7 traits to use Spectrum Scale to run container workload https://developer.ibm.com/storage/2019/02/26/7-traits-to-use-spectrum-scale-to-run-container-workload/ Health Monitoring of IBM Spectrum Scale Cluster via External Monitoring Framework https://developer.ibm.com/storage/2019/01/22/health-monitoring-of-ibm-spectrum-scale-cluster-via-external-monitoring-framework/ Migrating data from native HDFS to IBM Spectrum Scale based shared storage https://developer.ibm.com/storage/2019/01/18/migrating-data-from-native-hdfs-to-ibm-spectrum-scale-based-shared-storage/ Bulk File Creation useful for Test on Filesystems https://developer.ibm.com/storage/2019/01/16/bulk-file-creation-useful-for-test-on-filesystems/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 01/14/2019 06:24 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q4 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q4 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. 
Redpaper: IBM Spectrum Scale and IBM StoredIQ: Identifying and securing your business data to support regulatory requirements http://www.redbooks.ibm.com/abstracts/redp5525.html?Open IBM Spectrum Scale Memory Usage https://www.slideshare.net/tomerperry/ibm-spectrum-scale-memory-usage?qid=50a1dfda-3102-484f-b9d0-14b69fc4800b&v=&b=&from_search=2 Spectrum Scale and Containers https://developer.ibm.com/storage/2018/12/20/spectrum-scale-and-containers/ IBM Elastic Storage Server Performance Graphical Visualization with Grafana https://developer.ibm.com/storage/2018/12/18/ibm-elastic-storage-server-performance-graphical-visualization-with-grafana/ Hadoop Performance for disaggregated compute and storage configurations based on IBM Spectrum Scale Storage https://developer.ibm.com/storage/2018/12/13/hadoop-performance-for-disaggregated-compute-and-storage-configurations-based-on-ibm-spectrum-scale-storage/ EMS HA in ESS LE (Little Endian) environment https://developer.ibm.com/storage/2018/12/07/ems-ha-in-ess-le-little-endian-environment/ What?s new in ESS 5.3.2 https://developer.ibm.com/storage/2018/12/04/whats-new-in-ess-5-3-2/ Administer your Spectrum Scale cluster easily https://developer.ibm.com/storage/2018/11/13/administer-your-spectrum-scale-cluster-easily/ Disaster Recovery using Spectrum Scale?s Active File Management https://developer.ibm.com/storage/2018/11/13/disaster-recovery-using-spectrum-scales-active-file-management/ Recovery Group Failover Procedure of IBM Elastic Storage Server (ESS) https://developer.ibm.com/storage/2018/10/08/recovery-group-failover-procedure-ibm-elastic-storage-server-ess/ Whats new in IBM Elastic Storage Server (ESS) Version 5.3.1 and 5.3.1.1 https://developer.ibm.com/storage/2018/10/04/whats-new-ibm-elastic-storage-server-ess-version-5-3-1-5-3-1-1/ For more : Search /browse here: https://developer.ibm.com/storage/blog User Group Presentations: https://www.spectrumscale.org/presentations/ Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Blogs%2C%20White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 10/03/2018 08:48 PM Subject: Latest Technical Blogs on IBM Spectrum Scale (Q3 2018) Dear User Group Members, In continuation, here are list of development blogs in the this quarter (Q3 2018). We now have over 100+ developer blogs on Spectrum Scale/ESS. As discussed in User Groups, passing it along to the emailing list. How NFS exports became more dynamic with Spectrum Scale 5.0.2 https://developer.ibm.com/storage/2018/10/02/nfs-exports-became-dynamic-spectrum-scale-5-0-2/ HPC storage on AWS (IBM Spectrum Scale) https://developer.ibm.com/storage/2018/10/02/hpc-storage-aws-ibm-spectrum-scale/ Upgrade with Excluding the node(s) using Install-toolkit https://developer.ibm.com/storage/2018/09/30/upgrade-excluding-nodes-using-install-toolkit/ Offline upgrade using Install-toolkit https://developer.ibm.com/storage/2018/09/30/offline-upgrade-using-install-toolkit/ IBM Spectrum Scale for Linux on IBM Z ? What?s new in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/21/ibm-spectrum-scale-for-linux-on-ibm-z-whats-new-in-ibm-spectrum-scale-5-0-2/ What?s New in IBM Spectrum Scale 5.0.2 ? https://developer.ibm.com/storage/2018/09/15/whats-new-ibm-spectrum-scale-5-0-2/ Starting IBM Spectrum Scale 5.0.2 release, the installation toolkit supports upgrade rerun if fresh upgrade fails. 
https://developer.ibm.com/storage/2018/09/15/starting-ibm-spectrum-scale-5-0-2-release-installation-toolkit-supports-upgrade-rerun-fresh-upgrade-fails/ IBM Spectrum Scale installation toolkit ? enhancements over releases ? 5.0.2.0 https://developer.ibm.com/storage/2018/09/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases-5-0-2-0/ Announcing HDP 3.0 support with IBM Spectrum Scale https://developer.ibm.com/storage/2018/08/31/announcing-hdp-3-0-support-ibm-spectrum-scale/ IBM Spectrum Scale Tuning Overview for Hadoop Workload https://developer.ibm.com/storage/2018/08/20/ibm-spectrum-scale-tuning-overview-hadoop-workload/ Making the Most of Multicloud Storage https://developer.ibm.com/storage/2018/08/13/making-multicloud-storage/ Disaster Recovery for Transparent Cloud Tiering using SOBAR https://developer.ibm.com/storage/2018/08/13/disaster-recovery-transparent-cloud-tiering-using-sobar/ Your Optimal Choice of AI Storage for Today and Tomorrow https://developer.ibm.com/storage/2018/08/10/spectrum-scale-ai-workloads/ Analyze IBM Spectrum Scale File Access Audit with ELK Stack https://developer.ibm.com/storage/2018/07/30/analyze-ibm-spectrum-scale-file-access-audit-elk-stack/ Mellanox SX1710 40G switch MLAG configuration for IBM ESS https://developer.ibm.com/storage/2018/07/12/mellanox-sx1710-40g-switcher-mlag-configuration/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? SMB and NFS Access issues https://developer.ibm.com/storage/2018/07/10/protocol-problem-determination-guide-ibm-spectrum-scale-smb-nfs-access-issues/ Access Control in IBM Spectrum Scale Object https://developer.ibm.com/storage/2018/07/06/access-control-ibm-spectrum-scale-object/ IBM Spectrum Scale HDFS Transparency Docker support https://developer.ibm.com/storage/2018/07/06/ibm-spectrum-scale-hdfs-transparency-docker-support/ Protocol Problem Determination Guide for IBM Spectrum Scale? ? Log Collection https://developer.ibm.com/storage/2018/07/04/protocol-problem-determination-guide-ibm-spectrum-scale-log-collection/ Redpapers IBM Spectrum Scale Immutability Introduction, Configuration Guidance, and Use Cases http://www.redbooks.ibm.com/abstracts/redp5507.html?Open Certifications Assessment of the immutability function of IBM Spectrum Scale Version 5.0 in accordance to US SEC17a-4f, EU GDPR Article 21 Section 1, German and Swiss laws and regulations in collaboration with KPMG. Certificate: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?DE968667B47544FF83F6CCDCF37E5FB5 Full assessment report: http://www.kpmg.de/bescheinigungen/RequestReport.aspx?B290411BE1224F5A9B4D24663BCD3C5D For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 07/03/2018 12:13 AM Subject: Re: Latest Technical Blogs on Spectrum Scale (Q2 2018) Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q2 2018). We now have over 100+ developer blogs. As discussed in User Groups, passing it along: IBM Spectrum Scale 5.0.1 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ IBM Spectrum Scale ILM Policies https://developer.ibm.com/storage/2018/06/02/ibm-spectrum-scale-ilm-policies/ IBM Spectrum Scale 5.0.1 ? 
Whats new in Unified File and Object https://developer.ibm.com/storage/2018/06/15/6494/ Management GUI enhancements in IBM Spectrum Scale release 5.0.1 https://developer.ibm.com/storage/2018/05/18/management-gui-enhancements-in-ibm-spectrum-scale-release-5-0-1/ Managing IBM Spectrum Scale services through GUI https://developer.ibm.com/storage/2018/05/18/managing-ibm-spectrum-scale-services-through-gui/ Use AWS CLI with IBM Spectrum Scale? object storage https://developer.ibm.com/storage/2018/05/16/use-awscli-with-ibm-spectrum-scale-object-storage/ Hadoop Storage Tiering with IBM Spectrum Scale https://developer.ibm.com/storage/2018/05/09/hadoop-storage-tiering-ibm-spectrum-scale/ How many Files on my Filesystem? https://developer.ibm.com/storage/2018/05/07/many-files-filesystem/ Recording Spectrum Scale Object Stats for Potential Billing like Purpose using Elasticsearch https://developer.ibm.com/storage/2018/05/04/spectrum-scale-object-stats-for-billing-using-elasticsearch/ New features in IBM Elastic Storage Server (ESS) Version 5.3 https://developer.ibm.com/storage/2018/04/09/new-features-ibm-elastic-storage-server-ess-version-5-3/ Using IBM Spectrum Scale for storage in IBM Cloud Private (Missed to send earlier) https://medium.com/ibm-cloud/ibm-spectrum-scale-with-ibm-cloud-private-8bf801796f19 Redpapers Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution http://www.redbooks.ibm.com/redpieces/abstracts/redp5448.html, Enabling Hybrid Cloud Storage for IBM Spectrum Scale Using Transparent Cloud Tiering http://www.redbooks.ibm.com/abstracts/redp5411.html?Open SAP HANA and ESS: A Winning Combination (Update) http://www.redbooks.ibm.com/abstracts/redp5436.html?Open Others IBM Spectrum Scale Software Version Recommendation Preventive Service Planning (Updated) http://www-01.ibm.com/support/docview.wss?uid=ssg1S1009703, IDC Infobrief: A Modular Approach to Genomics Infrastructure at Scale in HCLS https://www.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=37016937USEN& For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Date: 03/27/2018 05:23 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, In continuation , here are list of development blogs in the this quarter (Q1 2018). As discussed in User Groups, passing it along: GDPR Compliance and Unstructured Data Storage https://developer.ibm.com/storage/2018/03/27/gdpr-compliance-unstructure-data-storage/ IBM Spectrum Scale for Linux on IBM Z ? Release 5.0 features and highlights https://developer.ibm.com/storage/2018/03/09/ibm-spectrum-scale-linux-ibm-z-release-5-0-features-highlights/ Management GUI enhancements in IBM Spectrum Scale release 5.0.0 https://developer.ibm.com/storage/2018/01/18/gui-enhancements-in-spectrum-scale-release-5-0-0/ IBM Spectrum Scale 5.0.0 ? What?s new in NFS? 
https://developer.ibm.com/storage/2018/01/18/ibm-spectrum-scale-5-0-0-whats-new-nfs/ Benefits and implementation of Spectrum Scale sudo wrappers https://developer.ibm.com/storage/2018/01/15/benefits-implementation-spectrum-scale-sudo-wrappers/ IBM Spectrum Scale: Big Data and Analytics Solution Brief https://developer.ibm.com/storage/2018/01/15/ibm-spectrum-scale-big-data-analytics-solution-brief/ Variant Sub-blocks in Spectrum Scale 5.0 https://developer.ibm.com/storage/2018/01/11/spectrum-scale-variant-sub-blocks/ Compression support in Spectrum Scale 5.0.0 https://developer.ibm.com/storage/2018/01/11/compression-support-spectrum-scale-5-0-0/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale On AWS Cloud : This video explains how to deploy IBM Spectrum Scale on AWS. This solution helps the users who require highly available access to a shared name space across multiple instances with good performance, without requiring an in-depth knowledge of IBM Spectrum Scale. Detailed Demo : https://www.youtube.com/watch?v=6j5Xj_d0bh4 Brief Demo : https://www.youtube.com/watch?v=-aMQKPW_RfY. For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Cc: Doris Conti/Poughkeepsie/IBM at IBMUS Date: 01/10/2018 12:13 PM Subject: Re: Latest Technical Blogs on Spectrum Scale Dear User Group Members, Here are list of development blogs in the last quarter. Passing it to this email group as Doris had got a feedback in the UG meetings to notify the members with the latest updates periodically. Genomic Workloads ? How To Get it Right From Infrastructure Point Of View. https://developer.ibm.com/storage/2018/01/06/genomic-workloads-get-right-infrastructure-point-view/ IBM Spectrum Scale Versus Apache Hadoop HDFS https://developer.ibm.com/storage/2018/01/10/spectrumscale_vs_hdfs/ ESS Fault Tolerance https://developer.ibm.com/storage/2018/01/09/ess-fault-tolerance/ IBM Spectrum Scale MMFSCK ? Savvy Enhancements https://developer.ibm.com/storage/2018/01/05/ibm-spectrum-scale-mmfsck-savvy-enhancements/ ESS Disk Management https://developer.ibm.com/storage/2018/01/02/ess-disk-management/ IBM Spectrum Scale Object Protocol On Ubuntu https://developer.ibm.com/storage/2018/01/01/ibm-spectrum-scale-object-protocol-ubuntu/ IBM Spectrum Scale 5.0 ? Whats new in Unified File and Object https://developer.ibm.com/storage/2017/12/20/ibm-spectrum-scale-5-0-whats-new-object/ A Complete Guide to ? Protocol Problem Determination Guide for IBM Spectrum Scale? ? Part 1 https://developer.ibm.com/storage/2017/12/19/complete-guide-protocol-problem-determination-guide-ibm-spectrum-scale-1/ IBM Spectrum Scale installation toolkit ? 
enhancements over releases https://developer.ibm.com/storage/2017/12/15/ibm-spectrum-scale-installation-toolkit-enhancements-releases/ Network requirements in an Elastic Storage Server Setup https://developer.ibm.com/storage/2017/12/13/network-requirements-in-an-elastic-storage-server-setup/ Co-resident migration with Transparent cloud tierin https://developer.ibm.com/storage/2017/12/05/co-resident-migration-transparent-cloud-tierin/ IBM Spectrum Scale on Hortonworks HDP Hadoop clusters : A Complete Big Data Solution https://developer.ibm.com/storage/2017/12/05/ibm-spectrum-scale-hortonworks-hdp-hadoop-clusters-complete-big-data-solution/ Big data analytics with Spectrum Scale using remote cluster mount & multi-filesystem support https://developer.ibm.com/storage/2017/11/28/big-data-analytics-spectrum-scale-using-remote-cluster-mount-multi-filesystem-support/ IBM Spectrum Scale HDFS Transparency Short Circuit Write Support https://developer.ibm.com/storage/2017/11/28/ibm-spectrum-scale-hdfs-transparency-short-circuit-write-support/ IBM Spectrum Scale HDFS Transparency Federation Support https://developer.ibm.com/storage/2017/11/27/ibm-spectrum-scale-hdfs-transparency-federation-support/ How to configure and performance tuning different system workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-different-system-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning Spark workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-spark-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning database workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/27/configure-performance-tuning-database-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ How to configure and performance tuning Hadoop workloads on IBM Spectrum Scale Sharing Nothing Cluster https://developer.ibm.com/storage/2017/11/24/configure-performance-tuning-hadoop-workloads-ibm-spectrum-scale-sharing-nothing-cluster/ IBM Spectrum Scale Sharing Nothing Cluster Performance Tuning https://developer.ibm.com/storage/2017/11/24/ibm-spectrum-scale-sharing-nothing-cluster-performance-tuning/ How to Configure IBM Spectrum Scale? with NIS based Authentication. https://developer.ibm.com/storage/2017/11/21/configure-ibm-spectrum-scale-nis-based-authentication/ For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media From: Sandeep Ramesh/India/IBM To: gpfsug-discuss at spectrumscale.org Cc: Doris Conti/Poughkeepsie/IBM at IBMUS Date: 11/16/2017 08:15 PM Subject: Latest Technical Blogs on Spectrum Scale Dear User Group members, Here are the Development Blogs in last 3 months on Spectrum Scale Technical Topics. Spectrum Scale Monitoring ? Know More ? https://developer.ibm.com/storage/2017/11/16/spectrum-scale-monitoring-know/ IBM Spectrum Scale 5.0 Release ? What?s coming ! https://developer.ibm.com/storage/2017/11/14/ibm-spectrum-scale-5-0-release-whats-coming/ Four Essentials things to know for managing data ACLs on IBM Spectrum Scale? 
from Windows https://developer.ibm.com/storage/2017/11/13/four-essentials-things-know-managing-data-acls-ibm-spectrum-scale-windows/ GSSUTILS: A new way of running SSR, Deploying or Upgrading ESS Server https://developer.ibm.com/storage/2017/11/13/gssutils/ IBM Spectrum Scale Object Authentication https://developer.ibm.com/storage/2017/11/02/spectrum-scale-object-authentication/ Video Surveillance ? Choosing the right storage https://developer.ibm.com/storage/2017/11/02/video-surveillance-choosing-right-storage/ IBM Spectrum scale object deep dive training with problem determination https://www.slideshare.net/SmitaRaut/ibm-spectrum-scale-object-deep-dive-training Spectrum Scale as preferred software defined storage for Ubuntu OpenStack https://developer.ibm.com/storage/2017/09/29/spectrum-scale-preferred-software-defined-storage-ubuntu-openstack/ IBM Elastic Storage Server 2U24 Storage ? an All-Flash offering, a performance workhorse https://developer.ibm.com/storage/2017/10/06/ess-5-2-flash-storage/ A Complete Guide to Configure LDAP-based authentication with IBM Spectrum Scale? for File Access https://developer.ibm.com/storage/2017/09/21/complete-guide-configure-ldap-based-authentication-ibm-spectrum-scale-file-access/ Deploying IBM Spectrum Scale on AWS Quick Start https://developer.ibm.com/storage/2017/09/18/deploy-ibm-spectrum-scale-on-aws-quick-start/ Monitoring Spectrum Scale Object metrics https://developer.ibm.com/storage/2017/09/14/monitoring-spectrum-scale-object-metrics/ Tier your data with ease to Spectrum Scale Private Cloud(s) using Moonwalk Universal https://developer.ibm.com/storage/2017/09/14/tier-data-ease-spectrum-scale-private-clouds-using-moonwalk-universal/ Why do I see owner as ?Nobody? for my export mounted using NFSV4 Protocol on IBM Spectrum Scale?? https://developer.ibm.com/storage/2017/09/08/see-owner-nobody-export-mounted-using-nfsv4-protocol-ibm-spectrum-scale/ IBM Spectrum Scale? Authentication using Active Directory and LDAP https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-ldap/ IBM Spectrum Scale? Authentication using Active Directory and RFC2307 https://developer.ibm.com/storage/2017/09/01/ibm-spectrum-scale-authentication-using-active-directory-rfc2307/ High Availability Implementation with IBM Spectrum Virtualize and IBM Spectrum Scale https://developer.ibm.com/storage/2017/08/30/high-availability-implementation-ibm-spectrum-virtualize-ibm-spectrum-scale/ 10 Frequently asked Questions on configuring Authentication using AD + AUTO ID mapping on IBM Spectrum Scale?. https://developer.ibm.com/storage/2017/08/04/10-frequently-asked-questions-configuring-authentication-using-ad-auto-id-mapping-ibm-spectrum-scale/ IBM Spectrum Scale? Authentication using Active Directory https://developer.ibm.com/storage/2017/07/30/ibm-spectrum-scale-auth-using-active-directory/ Five cool things that you didn?t know Transparent Cloud Tiering on Spectrum Scale can do https://developer.ibm.com/storage/2017/07/29/five-cool-things-didnt-know-transparent-cloud-tiering-spectrum-scale-can/ IBM Spectrum Scale GUI videos https://developer.ibm.com/storage/2017/07/25/ibm-spectrum-scale-gui-videos/ IBM Spectrum Scale? Authentication ? 
Planning for NFS Access https://developer.ibm.com/storage/2017/07/24/ibm-spectrum-scale-planning-nfs-access/ For more : Search /browse here: https://developer.ibm.com/storage/blog Consolidation list: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/White%20Papers%20%26%20Media -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Tue Apr 30 10:24:45 2019 From: chair at spectrumscale.org (Simon Thompson (Spectrum Scale User Group Chair)) Date: Tue, 30 Apr 2019 10:24:45 +0100 Subject: [gpfsug-discuss] Break-out session for new user and prospects [London Usergroup] Message-ID: <776770B2-5F84-4462-B900-58EBB982DC1C@spectrumscale.org> Hi all, We know that a lot of the talks at the user groups are for experienced users, following feedback from the USA user group, we thought we?d advertise that this year we?re planning to run a break-out for new users on day 1. Break-out session for new user and prospects (Wed May 8th, 13:00 - 16:45) This year we will offer a break-out session for new Spectrum Scale user and prospects to get started with Spectrum Scale. In this session we will cover Spectrum Scale Use Cases, the architecture of a Spectrum Scale environment, and discuss how the manifold Spectrum Scale features support the different use case. Please inform customers and colleagues who are interested to learn about Spectrum Scale to grab one of the last seats. Registration link: https://www.spectrumscaleug.org/event/uk-user-group-meeting/ There?s just a couple of places left for the usergroup, so please do share and register if you plan to attend. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: