From valleru at cbio.mskcc.org Tue May 1 15:34:39 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 1 May 2018 10:34:39 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> Message-ID: <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.smith at framestore.com Wed May 2 11:06:20 2018 From: peter.smith at framestore.com (Peter Smith) Date: Wed, 2 May 2018 11:06:20 +0100 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: "how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand)" +1. Pointers appreciated! :-) On 10 April 2018 at 17:22, Aaron Knister wrote: > I wonder if this is an artifact of pagepool exhaustion which makes me ask > the question-- how do I see how much of the pagepool is in use and by what? > I've looked at mmfsadm dump and mmdiag --memory and neither has provided me > the information I'm looking for (or at least not in a format I understand). > > -Aaron > > On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] > wrote: > >> I hate admitting this but I?ve found something that?s got me stumped. >> >> We have a user running an MPI job on the system. 
Each rank opens up >> several output files to which it writes ASCII debug information. The net >> result across several hundred ranks is an absolute smattering of teeny tiny >> I/o requests to te underlying disks which they don?t appreciate. >> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >> don?t understand is why these write requests aren?t getting batched up into >> larger write requests to the underlying disks. >> >> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >> requests before they hit the NSD. >> >> As best I can tell the application isn?t doing any fsync?s and isn?t >> doing direct io to these files. >> >> Can anyone explain why seemingly very similar io workloads appear to >> result in well formed NSD I/O in one case and awful I/o in another? >> >> Thanks! >> >> -Stumped >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> > -- > Aaron Knister > NASA Center for Climate Simulation (Code 606.2) > Goddard Space Flight Center > (301) 286-2776 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- [image: Framestore] Peter Smith ? Senior Systems Engineer London ? New York ? Los Angeles ? Chicago ? Montr?al T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 <+44%20%280%297816%20123009> 28 Chancery Lane, London WC2A 1LB Twitter ? Facebook ? framestore.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Wed May 2 13:09:21 2018 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Wed, 2 May 2018 14:09:21 +0200 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: mmfsadm dump pgalloc might get you one step further ... Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Thomas Wolter, Sven Schoo? Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: Peter Smith To: gpfsug main discussion list Date: 02/05/2018 12:10 Subject: Re: [gpfsug-discuss] Confusing I/O Behavior Sent by: gpfsug-discuss-bounces at spectrumscale.org "how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand)" +1. Pointers appreciated! :-) On 10 April 2018 at 17:22, Aaron Knister wrote: I wonder if this is an artifact of pagepool exhaustion which makes me ask the question-- how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand). 
-Aaron On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] wrote: I hate admitting this but I?ve found something that?s got me stumped. We have a user running an MPI job on the system. Each rank opens up several output files to which it writes ASCII debug information. The net result across several hundred ranks is an absolute smattering of teeny tiny I/o requests to te underlying disks which they don?t appreciate. Performance plummets. The I/o requests are 30 to 80 bytes in size. What I don?t understand is why these write requests aren?t getting batched up into larger write requests to the underlying disks. If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see that the nasty unaligned 8k io requests are batched up into nice 1M I/o requests before they hit the NSD. As best I can tell the application isn?t doing any fsync?s and isn?t doing direct io to these files. Can anyone explain why seemingly very similar io workloads appear to result in well formed NSD I/O in one case and awful I/o in another? Thanks! -Stumped _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Peter Smith ? Senior Systems Engineer London ? New York ? Los Angeles ? Chicago ? Montr?al T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 28 Chancery Lane, London WC2A 1LB Twitter ? Facebook ? framestore.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Wed May 2 13:25:42 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 2 May 2018 12:25:42 +0000 Subject: [gpfsug-discuss] AFM with clones Message-ID: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> Hi, We are looking at providing an AFM cache of a home which has a number of cloned files. From the docs: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm ? We can see that ?The mmclone command is not supported on AFM cache and AFM DR primary filesets. Clones created at home for AFM filesets are treated as separate files in the cache.? So it?s no surprise that when we pre-cache the files, they space consumed is different. What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the copy-on-write clone, or do we accidentally end up shipping the whole file back? (note we are using IW mode) Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Wed May 2 13:31:37 2018 From: oehmes at gmail.com (Sven Oehme) Date: Wed, 02 May 2018 12:31:37 +0000 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: GPFS doesn't do flush on close by default unless explicit asked by the application itself, but you can configure that . mmchconfig flushOnClose=yes if you use O_SYNC or O_DIRECT then each write ends up on the media before we return. 
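As a rough illustration of the difference (a sketch only - the /gpfs/fs0/ddtest path is a placeholder, GNU dd is assumed, and mmdiag needs to be run as root on the node doing the writes):

  # buffered writes: the pagepool coalesces the 8k application writes into large NSD I/Os
  dd if=/dev/zero of=/gpfs/fs0/ddtest bs=8k count=10000

  # oflag=sync (O_SYNC): every 8k write has to reach the media before dd continues
  dd if=/dev/zero of=/gpfs/fs0/ddtest bs=8k count=10000 oflag=sync

  # oflag=direct (O_DIRECT): the pagepool is bypassed and the small requests go straight to the NSDs
  dd if=/dev/zero of=/gpfs/fs0/ddtest bs=8k count=10000 oflag=direct

  # recent I/O history on this node shows the request sizes that actually hit the disks
  mmdiag --iohist | tail -40

The first case should show the nicely batched large I/Os, the other two the small synchronous ones.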
sven On Wed, Apr 11, 2018 at 7:06 AM Peter Serocka wrote: > Let?s keep in mind that line buffering is a concept > within the standard C library; > if every log line triggers one write(2) system call, > and it?s not direct io, then multiple write still get > coalesced into few larger disk writes (as with the dd example). > > A logging application might choose to close(2) > a log file after each write(2) ? that produces > a different scenario, where the file system might > guarantee that the data has been written to disk > when close(2) return a success. > > (Local Linux file systems do not do this with default mounts, > but networked filesystems usually do.) > > Aaron, can you trace your application to see > what is going on in terms of system calls? > > ? Peter > > > > On 2018 Apr 10 Tue, at 18:28, Marc A Kaplan wrote: > > > > Debug messages are typically unbuffered or "line buffered". If that is > truly causing a performance problem AND you still want to collect the > messages -- you'll need to find a better way to channel and collect those > messages. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Wed May 2 13:34:56 2018 From: oehmes at gmail.com (Sven Oehme) Date: Wed, 02 May 2018 12:34:56 +0000 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: a few more weeks and we have a better answer than dump pgalloc ;-) On Wed, May 2, 2018 at 6:07 AM Peter Smith wrote: > "how do I see how much of the pagepool is in use and by what? I've looked > at mmfsadm dump and mmdiag --memory and neither has provided me the > information I'm looking for (or at least not in a format I understand)" > > +1. Pointers appreciated! :-) > > On 10 April 2018 at 17:22, Aaron Knister wrote: > >> I wonder if this is an artifact of pagepool exhaustion which makes me ask >> the question-- how do I see how much of the pagepool is in use and by what? >> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me >> the information I'm looking for (or at least not in a format I understand). >> >> -Aaron >> >> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE >> CORP] wrote: >> >>> I hate admitting this but I?ve found something that?s got me stumped. >>> >>> We have a user running an MPI job on the system. Each rank opens up >>> several output files to which it writes ASCII debug information. The net >>> result across several hundred ranks is an absolute smattering of teeny tiny >>> I/o requests to te underlying disks which they don?t appreciate. >>> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >>> don?t understand is why these write requests aren?t getting batched up into >>> larger write requests to the underlying disks. >>> >>> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >>> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >>> requests before they hit the NSD. >>> >>> As best I can tell the application isn?t doing any fsync?s and isn?t >>> doing direct io to these files. 
>>> >>> Can anyone explain why seemingly very similar io workloads appear to >>> result in well formed NSD I/O in one case and awful I/o in another? >>> >>> Thanks! >>> >>> -Stumped >>> >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> >> -- >> Aaron Knister >> NASA Center for Climate Simulation (Code 606.2) >> Goddard Space Flight Center >> (301) 286-2776 >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > > > > -- > [image: Framestore] Peter Smith ? Senior Systems Engineer > London ? New York ? Los Angeles ? Chicago ? Montr?al > T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 > <+44%20%280%297816%20123009> > 28 Chancery Lane, London WC2A 1LB > > Twitter ? Facebook > ? framestore.com > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alevin at gmail.com Wed May 2 17:10:48 2018 From: alevin at gmail.com (Alex Levin) Date: Wed, 2 May 2018 12:10:48 -0400 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: Aaron, Peter, I'm monitoring the pagepool usage as: buffers=`/usr/lpp/mmfs/bin/mmfsadm dump buffers | grep bufLen | awk '{ SUM += $7} END { print SUM }'` result in bytes If your pagepool is huge - the execution could take some time ( ~5 sec on 100Gb pagepool ) --Alex On Wed, May 2, 2018 at 6:06 AM, Peter Smith wrote: > "how do I see how much of the pagepool is in use and by what? I've looked > at mmfsadm dump and mmdiag --memory and neither has provided me the > information I'm looking for (or at least not in a format I understand)" > > +1. Pointers appreciated! :-) > > On 10 April 2018 at 17:22, Aaron Knister wrote: > >> I wonder if this is an artifact of pagepool exhaustion which makes me ask >> the question-- how do I see how much of the pagepool is in use and by what? >> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me >> the information I'm looking for (or at least not in a format I understand). >> >> -Aaron >> >> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE >> CORP] wrote: >> >>> I hate admitting this but I?ve found something that?s got me stumped. >>> >>> We have a user running an MPI job on the system. Each rank opens up >>> several output files to which it writes ASCII debug information. The net >>> result across several hundred ranks is an absolute smattering of teeny tiny >>> I/o requests to te underlying disks which they don?t appreciate. >>> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >>> don?t understand is why these write requests aren?t getting batched up into >>> larger write requests to the underlying disks. >>> >>> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >>> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >>> requests before they hit the NSD. >>> >>> As best I can tell the application isn?t doing any fsync?s and isn?t >>> doing direct io to these files. >>> >>> Can anyone explain why seemingly very similar io workloads appear to >>> result in well formed NSD I/O in one case and awful I/o in another? >>> >>> Thanks! 
>>> >>> -Stumped >>> >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> >> -- >> Aaron Knister >> NASA Center for Climate Simulation (Code 606.2) >> Goddard Space Flight Center >> (301) 286-2776 >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > > > > -- > [image: Framestore] Peter Smith ? Senior Systems Engineer > London ? New York ? Los Angeles ? Chicago ? Montr?al > T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 > <+44%20%280%297816%20123009> > 28 Chancery Lane, London WC2A 1LB > > Twitter ? Facebook > ? framestore.com > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Wed May 2 18:48:01 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 2 May 2018 23:18:01 +0530 Subject: [gpfsug-discuss] AFM with clones In-Reply-To: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> References: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> Message-ID: >What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the >copy-on-write clone, or do we accidentally end up shipping the whole file back? IW mode revalidation detects that file is changed at home, all data blocks are cleared (punches the hole) and the next read pulls whole file from the home. ~Venkat (vpuvvada at in.ibm.com) From: "Simon Thompson (IT Research Support)" To: "gpfsug-discuss at spectrumscale.org" Date: 05/02/2018 05:55 PM Subject: [gpfsug-discuss] AFM with clones Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We are looking at providing an AFM cache of a home which has a number of cloned files. From the docs: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm ? We can see that ?The mmclone command is not supported on AFM cache and AFM DR primary filesets. Clones created at home for AFM filesets are treated as separate files in the cache.? So it?s no surprise that when we pre-cache the files, they space consumed is different. What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the copy-on-write clone, or do we accidentally end up shipping the whole file back? (note we are using IW mode) Thanks Simon_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=92LOlNh2yLzrrGTDA7HnfF8LFr55zGxghLZtvZcZD7A&m=yLFsan-7rzFW2Nw9k8A-SHKQfNQonl9v_hk9hpXLYjQ&s=7w_-SsCLeUNBZoFD3zUF5ika7PTUIQkKuOhuz-5pr1I&e= -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r.sobey at imperial.ac.uk Thu May 3 10:43:31 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Thu, 3 May 2018 09:43:31 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used Message-ID: Hi all, I'd be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you've employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From MDIETZ at de.ibm.com Thu May 3 12:41:28 2018 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Thu, 3 May 2018 13:41:28 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Thu May 3 14:03:09 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Thu, 3 May 2018 09:03:09 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen > On May 3, 2018, at 5:43 AM, Sobey, Richard A wrote: > > Hi all, > > I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. > > On-list or off is fine with me. > > Thanks > Richard > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bbanister at jumptrading.com Thu May 3 15:25:03 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 3 May 2018 14:25:03 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: Hi Lohit, Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz Sent: Thursday, May 03, 2018 6:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says "You can configure one storage cluster and up to five protocol clusters (current limit)." Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu May 3 15:37:11 2018 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 3 May 2018 16:37:11 +0200 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: Since I'm pretty proud of my awk one-liner, and maybe it's useful for this kind of charging, here's how to sum up how much data each user has in the filesystem (without regards to if the data blocks are offline, online, replicated or compressed): # cat full-file-list.policy RULE EXTERNAL LIST 'files' EXEC '' RULE LIST 'files' SHOW( VARCHAR(USER_ID) || ' ' || VARCHAR(GROUP_ID) || ' ' || VARCHAR(FILESET_NAME) || ' ' || VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED) ) # mmapplypolicy gpfs0 -P /gpfs/gpfsmgt/etc/full-file-list.policy -I defer -f /tmp/full-file-list # awk '{a[$4] += $7} END{ print "# UID\t Bytes" ; for (i in a) print i, "\t", a[i]}' /tmp/full-file-list.list.files Takes ~15 minutes to run on a 60 million file filesystem. 
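And if what you are after is the per-fileset split (e.g. for the HSM charging discussion), the same list file can be summed on the other columns. This assumes the same column layout as the awk above, i.e. the SHOW() fields land in columns 4 to 8, with FILESET_NAME in $6, FILE_SIZE in $7 and KB_ALLOCATED in $8:

  # logical bytes vs. bytes actually allocated on disk, per fileset
  awk '{size[$6] += $7; alloc[$6] += $8*1024} END{ print "# Fileset\t Bytes\t AllocatedBytes"; for (i in size) print i, "\t", size[i], "\t", alloc[i]}' /tmp/full-file-list.list.files

For migrated (stubbed) files KB_ALLOCATED is far smaller than FILE_SIZE, so the gap between the two columns gives a rough idea of how much of a fileset is actually resident.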
-jf On Thu, May 3, 2018 at 11:43 AM, Sobey, Richard A wrote: > Hi all, > > > > I?d be interested to talk to anyone that is using HSM to move data to > tape, (and stubbing the file(s)) specifically any strategies you?ve > employed to figure out how to charge your customers (where you do charge > anyway) based on usage. > > > > On-list or off is fine with me. > > > > Thanks > > Richard > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 15:41:16 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 10:41:16 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: > Hi Lohit, > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > Mit freundlichen Gr??en / Kind regards > > Mathias Dietz > > Spectrum Scale Development - Release Lead Architect (4.2.x) > Spectrum Scale RAS Architect > --------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49 70342744105 > Mobile: +49-15152801035 > E-Mail: mdietz at de.ibm.com > ----------------------------------------------------------------------------- > IBM Deutschland Research & Development GmbH > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > From: ? ? ? ?valleru at cbio.mskcc.org > To: ? ? ? ?gpfsug main discussion list > Date: ? ? ? ?01/05/2018 16:34 > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Simon. > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > Regards, > Lohit > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. 
> > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 15:46:09 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 10:46:09 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Thanks Brian, May i know, if you could explain a bit more on the metadata updates issue? I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? Please do correct me if i am wrong. As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. Thanks, Lohit On May 3, 2018, 10:25 AM -0400, Bryan Banister , wrote: > Hi Lohit, > > Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. 
> > Cheers, > -Bryan > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz > Sent: Thursday, May 03, 2018 6:41 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Note: External Email > Hi Lohit, > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > Mit freundlichen Gr??en / Kind regards > > Mathias Dietz > > Spectrum Scale Development - Release Lead Architect (4.2.x) > Spectrum Scale RAS Architect > --------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49 70342744105 > Mobile: +49-15152801035 > E-Mail: mdietz at de.ibm.com > ----------------------------------------------------------------------------- > IBM Deutschland Research & Development GmbH > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > From: ? ? ? ?valleru at cbio.mskcc.org > To: ? ? ? ?gpfsug main discussion list > Date: ? ? ? ?01/05/2018 16:34 > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Simon. > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > Regards, > Lohit > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? 
> > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Thu May 3 16:02:51 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Thu, 3 May 2018 15:02:51 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> Message-ID: Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? 
Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From MDIETZ at de.ibm.com Thu May 3 16:14:20 2018 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Thu, 3 May 2018 17:14:20 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark><8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Message-ID: yes, deleting all NFS exports which point to a given file system would allow you to unmount it without bringing down the other file systems. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 03/05/2018 16:41 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bbanister at jumptrading.com Thu May 3 16:15:24 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 3 May 2018 15:15:24 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Message-ID: Hi Lohit, Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April: http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf Multicluster setups with shared file access have a high probability of ?MetaNode Flapping? ? ?MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more ?client? clusters via a MultiCluster relationship.? Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Thursday, May 03, 2018 9:46 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Thanks Brian, May i know, if you could explain a bit more on the metadata updates issue? I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? Please do correct me if i am wrong. As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. Thanks, Lohit On May 3, 2018, 10:25 AM -0400, Bryan Banister >, wrote: Hi Lohit, Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz Sent: Thursday, May 03, 2018 6:41 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. 
The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From khanhn at us.ibm.com Thu May 3 16:29:57 2018 From: khanhn at us.ibm.com (Khanh V Ngo) Date: Thu, 3 May 2018 15:29:57 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Thu May 3 16:52:44 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 03 May 2018 16:52:44 +0100 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> Message-ID: <1525362764.27337.140.camel@strath.ac.uk> On Thu, 2018-05-03 at 15:02 +0000, Sobey, Richard A wrote: > Stephen, Bryan, > ? > Thanks for the input, it?s greatly appreciated. > ? > For us we?re trying ? as many people are ? to drive down the usage of > under-the-desk NAS appliances and USB HDDs. We offer space on disk, > but you can?t charge for 3TB of storage the same as you would down PC > World and many customers don?t understand the difference between what > we do, and what a USB disk offers. > ? > So, offering tape as a medium to store cold data, but not archive > data, is one offering we?re just getting round to discussing. The > solution is in place. To answer the specific question: for our > customers that adopt HSM, how much less should/could/can we charge > them per TB. We know how much a tape costs, but we don?t necessarily > have the means (or knowledge?) to say that for a given fileset, 80% > of the data is on tape. Then you get into 80% of 1TB is not the same > as 80% of 10TB. > ? The test that I have used in the past for if a file is migrated with a high degree of accuracy is if the space allocated on the file system is less than the file size, and equal to the stub size then presume the file is migrated. There is a small chance it could be sparse instead. However this is really rather remote as sparse files are not common in the first place and even less like that the amount of allocated data in the sparse file exactly matches the stub size. 
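That allocated-space test maps almost directly onto an ILM list policy. The sketch below is an illustration only, not a tested rule set: the device name gpfs0, the output prefix and the field positions in the awk step are assumptions to adjust for the local setup.

# write a small list policy: files whose allocated space is smaller than
# their size are presumed migrated (a few could really be sparse files)
cat > /tmp/stubbed.pol <<'EOF'
define(looks_migrated,(KB_ALLOCATED * 1024 < FILE_SIZE))
RULE 'pm' LIST 'stubbed'
  SHOW(VARCHAR(USER_ID) || ' ' || VARCHAR(FILE_SIZE))
  WHERE looks_migrated
EOF

# -I defer only produces the list file, it does not move any data
mmapplypolicy gpfs0 -P /tmp/stubbed.pol -I defer -f /tmp/hsm

# roll the per-user totals up from /tmp/hsm.list.stubbed
# (field positions depend on the list-file format of the release in use)
awk '{total[$4] += $5} END {for (u in total) print u, total[u]}' /tmp/hsm.list.stubbed

Adding a second condition comparing KB_ALLOCATED with the known stub size, as described above, would filter out most sparse-file false positives.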
It is an easy step to write a policy to list all the UID and FILE_SIZE where KB_ALLOCATED is less than FILE_SIZE (and matches the stub size), and then total that up per user. [...]

From: Simon Thompson (IT Research Support)
Subject: Re: [gpfsug-discuss] Recharging where HSM is used
References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org>
Message-ID: <6009EFF3-27EF-4E35-9FA1-1730C9ECF1A8@bham.ac.uk>

Our charging model for disk storage assumes that a percentage of it is really HSM'd, though in practise we aren't heavily doing this. My (personal) view on tape really is that anything on tape is FoC, that way people can play games to recall/keep it hot if they want, but it eats their FoC or paid disk allocations, whereas if they leave it on tape, they benefit in having more total capacity. We currently use the pre-migrate/SOBAR for our DR piece, so we'd already be pre-migrating to tape anyway, so it doesn't really cost us anything extra to give FoC HSM'd storage. So my suggestion is pitch HSM (or even TCT maybe - if only we could do both) as your DR proposal, and then you can give it to users for free.

Simon

From: on behalf of "Sobey, Richard A"
Reply-To: "gpfsug-discuss at spectrumscale.org"
Date: Thursday, 3 May 2018 at 16:03
To: "gpfsug-discuss at spectrumscale.org"
Subject: Re: [gpfsug-discuss] Recharging where HSM is used

Stephen, Bryan,

Thanks for the input, it's greatly appreciated.

For us we're trying - as many people are - to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can't charge for 3TB of storage the same as you would down PC World and many customers don't understand the difference between what we do, and what a USB disk offers.

So, offering tape as a medium to store cold data, but not archive data, is one offering we're just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don't necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB.

Richard

From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Stephen Ulmer
Sent: 03 May 2018 14:03
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Recharging where HSM is used

I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I'd also like to see what people are doing around this.

If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not "a" question... :)

Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool?

--
Stephen

On May 3, 2018, at 5:43 AM, Sobey, Richard A wrote:

Hi all, I'd be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you've employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me.

Thanks Richard
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From S.J.Thompson at bham.ac.uk Thu May 3 18:30:32 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Thu, 3 May 2018 17:30:32 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Message-ID: <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> Yes we do this when we really really need to take a remote FS offline, which we try at all costs to avoid unless we have a maintenance window. Note if you only export via SMB, then you don?t have the same effect (unless something has changed recently) Simon From: on behalf of "valleru at cbio.mskcc.org" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Thursday, 3 May 2018 at 15:41 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. 
At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 19:46:42 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 14:46:42 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Message-ID: <1f7af581-300d-4526-8c9c-7bde344fbf22@Spark> Thanks Bryan. Yes i do understand it now, with respect to multi clusters reading the same file and metanode flapping. Will make sure the workload design will prevent metanode flapping. Regards, Lohit On May 3, 2018, 11:15 AM -0400, Bryan Banister , wrote: > Hi Lohit, > > Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April:? http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf > > Multicluster setups with shared file access have a high probability of ?MetaNode Flapping? > ? ?MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more ?client? clusters via a MultiCluster relationship.? 
> > Cheers, > -Bryan > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > Sent: Thursday, May 03, 2018 9:46 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Note: External Email > Thanks Brian, > May i know, if you could explain a bit more on the metadata updates issue? > I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. > I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? > Please do correct me if i am wrong. > As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. > > Thanks, > Lohit > > On May 3, 2018, 10:25 AM -0400, Bryan Banister , wrote: > > > Hi Lohit, > > > > Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. > > > > Cheers, > > -Bryan > > > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz > > Sent: Thursday, May 03, 2018 6:41 AM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Note: External Email > > Hi Lohit, > > > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > > > > Mit freundlichen Gr??en / Kind regards > > > > Mathias Dietz > > > > Spectrum Scale Development - Release Lead Architect (4.2.x) > > Spectrum Scale RAS Architect > > --------------------------------------------------------------------------- > > IBM Deutschland > > Am Weiher 24 > > 65451 Kelsterbach > > Phone: +49 70342744105 > > Mobile: +49-15152801035 > > E-Mail: mdietz at de.ibm.com > > ----------------------------------------------------------------------------- > > IBM Deutschland Research & Development GmbH > > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > > > > > From: ? ? ? ?valleru at cbio.mskcc.org > > To: ? ? ? ?gpfsug main discussion list > > Date: ? ? ? ?01/05/2018 16:34 > > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > Thanks Simon. 
> > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > > > Regards, > > Lohit > > > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > > You have been able to do this for some time, though I think it's only just supported. > > > > We've been exporting remote mounts since CES was added. > > > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > > Sent: 30 April 2018 22:11 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Hello All, > > > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > > > Because according to the limitations as mentioned in the below link: > > > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > > > > Regards, > > Lohit > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 19:52:23 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 14:52:23 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> Message-ID: <44e9d877-36b9-43c1-8ee8-ac8437987265@Spark> Thanks Simon. Currently, we are thinking of using the same remote filesystem for both NFS/SMB exports. I do have a related question with respect to SMB and AD integration on user-defined authentication. I have seen a past discussion from you on the usergroup regarding a similar integration, but i am trying a different setup. Will send an email with the related subject. Thanks, Lohit On May 3, 2018, 1:30 PM -0400, Simon Thompson (IT Research Support) , wrote: > Yes we do this when we really really need to take a remote FS offline, which we try at all costs to avoid unless we have a maintenance window. > > Note if you only export via SMB, then you don?t have the same effect (unless something has changed recently) > > Simon > > From: on behalf of "valleru at cbio.mskcc.org" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Thursday, 3 May 2018 at 15:41 > To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Thanks Mathiaz, > Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. > > However, i suppose we could bring down one of the filesystems before a planned downtime? > For example, by unexporting the filesystems on NFS/SMB before the downtime? > > I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. > > Regards, > Lohit > > On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: > > > Hi Lohit, > > > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > > e.g. 
if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > > > > Mit freundlichen Gr??en / Kind regards > > > > Mathias Dietz > > > > Spectrum Scale Development - Release Lead Architect (4.2.x) > > Spectrum Scale RAS Architect > > --------------------------------------------------------------------------- > > IBM Deutschland > > Am Weiher 24 > > 65451 Kelsterbach > > Phone: +49 70342744105 > > Mobile: +49-15152801035 > > E-Mail: mdietz at de.ibm.com > > ----------------------------------------------------------------------------- > > IBM Deutschland Research & Development GmbH > > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > > > > > From: ? ? ? ?valleru at cbio.mskcc.org > > To: ? ? ? ?gpfsug main discussion list > > Date: ? ? ? ?01/05/2018 16:34 > > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > Thanks Simon. > > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > > > Regards, > > Lohit > > > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > > You have been able to do this for some time, though I think it's only just supported. > > > > We've been exporting remote mounts since CES was added. > > > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > > Sent: 30 April 2018 22:11 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Hello All, > > > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > > > Because according to the limitations as mentioned in the below link: > > > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? 
> > > > > > Regards, > > Lohit > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From JRLang at uwyo.edu Thu May 3 16:38:32 2018 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Thu, 3 May 2018 15:38:32 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: Khanh Could you tell us what the policy file name is or where to get it? Thanks Jeff From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Khanh V Ngo Sent: Thursday, May 3, 2018 10:30 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Recharging where HSM is used Specifically with IBM Spectrum Archive EE, there is a script (mmapplypolicy with list rules and python since it outputs many different tables) to provide the total size of user files by file states. This way you can charge more for files that remain on disk and charge less for files migrated to tape. I have seen various prices for the chargeback so it's probably better to calculate based on your environment. The script can easily be changed to output based on GID, filesets, etc. Here's a snippet of the output (in human-readable units): +-------+-----------+-------------+-------------+-----------+ | User | Migrated | Premigrated | Resident | TOTAL | +-------+-----------+-------------+-------------+-----------+ | 0 | 1.563 KB | 50.240 GB | 6.000 bytes | 50.240 GB | | 27338 | 9.338 TB | 1.566 TB | 63.555 GB | 10.965 TB | | 27887 | 58.341 GB | 191.653 KB | | 58.341 GB | | 27922 | 2.111 MB | | | 2.111 MB | | 24089 | 4.657 TB | 22.921 TB | 433.660 GB | 28.002 TB | | 29657 | 29.219 TB | 32.049 TB | | 61.268 TB | | 29210 | 3.057 PB | 399.908 TB | 47.448 TB | 3.494 PB | | 23326 | 7.793 GB | 257.005 MB | 166.364 MB | 8.207 GB | | TOTAL | 3.099 PB | 456.492 TB | 47.933 TB | 3.592 PB | +-------+-----------+-------------+-------------+-----------+ Thanks, Khanh Khanh Ngo, Tape Storage Test Architect Senior Technical Staff Member and Master Inventor Tie-Line 8-321-4802 External Phone: (520)799-4802 9042/1/1467 Tucson, AZ khanhn at us.ibm.com (internet) It's okay to not understand something. It's NOT okay to test something you do NOT understand. 
----- Original message ----- From: gpfsug-discuss-request at spectrumscale.org Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: gpfsug-discuss Digest, Vol 76, Issue 7 Date: Thu, May 3, 2018 8:19 AM Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Recharging where HSM is used (Sobey, Richard A) 2. Re: Spectrum Scale CES and remote file system mounts (Mathias Dietz) ---------------------------------------------------------------------- Message: 1 Date: Thu, 3 May 2018 15:02:51 +0000 From: "Sobey, Richard A" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Recharging where HSM is used Message-ID: > Content-Type: text/plain; charset="utf-8" Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. 
Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 3 May 2018 17:14:20 +0200 From: "Mathias Dietz" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Message-ID: > Content-Type: text/plain; charset="iso-8859-1" yes, deleting all NFS exports which point to a given file system would allow you to unmount it without bringing down the other file systems. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 03/05/2018 16:41 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz >, wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? 
Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= End of gpfsug-discuss Digest, Vol 76, Issue 7 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 20:14:57 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 15:14:57 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA and AD keytab integration with userdefined authentication Message-ID: <03e2a5c6-3538-4e20-84b8-563b0aedfbe6@Spark> Hello All, I am trying to export a single remote filesystem over NFS/SMB using GPFS CES. ( GPFS 5.0.0.2 and CentOS 7 ). We need NFS exports to be accessible on client nodes, that use public key authentication and ldap authorization. I already have this working with a previous CES setup on user-defined authentication, where users can just login to the client nodes, and access NFS mounts. However, i will also need SAMBA exports for the same GPFS filesystem with AD/kerberos authentication. Previously, we used to have a working SAMBA export for a local filesystem with SSSD and AD integration with SAMBA as mentioned in the below solution from redhat. https://access.redhat.com/solutions/2221561 We find the above as cleaner solution with respect to AD and Samba integration compared to centrify or winbind. I understand that GPFS does offer AD authentication, however i believe i cannot use the same since NFS will need user-defined authentication and SAMBA will need AD authentication. 
I have thus been trying to use user-defined authentication. I tried to edit smb.cnf from GPFS ( with a bit of help from this blog, written by Simon.?https://www.roamingzebra.co.uk/2015/07/smb-protocol-support-with-spectrum.html) /usr/lpp/mmfs/bin/net conf list realm = xxxx workgroup = xxxx security = ads kerberos method = secrets and key tab idmap config * : backend = tdb template homedir = /home/%U dedicated keytab file = /etc/krb5.keytab I had joined the node to AD with realmd and i do get relevant AD info when i try: /usr/lpp/mmfs/bin/net ads info However, when i try to display keytab or add principals to keytab. It just does not work. /usr/lpp/mmfs/bin/net ads keytab list ?-> does not show the keys present in /etc/krb5.keytab. /usr/lpp/mmfs/bin/net ads keytab add cifs -> does not add the keys to the /etc/krb5.keytab As per the samba documentation, these two parameters should help samba automatically find the keytab file. kerberos method = secrets and key tab dedicated keytab file = /etc/krb5.keytab I have not yet tried to see, if a SAMBA export is working with AD authentication but i am afraid it might not work. Have anyone tried the AD integration with SSSD/SAMBA for GPFS, and any suggestions on how to debug the above would be really helpful. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Thu May 3 20:16:03 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Thu, 03 May 2018 15:16:03 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <1525362764.27337.140.camel@strath.ac.uk> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> <1525362764.27337.140.camel@strath.ac.uk> Message-ID: <75615.1525374963@turing-police.cc.vt.edu> On Thu, 03 May 2018 16:52:44 +0100, Jonathan Buzzard said: > The test that I have used in the past for if a file is migrated with a > high degree of accuracy is > > if the space allocated on the file system is less than the > file size, and equal to the stub size then presume the file > is migrated. At least for LTFS/EE, we use something like this: define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) RULE 'MIGRATED' LIST 'ltfsee_files' FROM POOL 'system' SHOW('migrated ' || xattr('dmapi.IBMTPS') || ' ' || all_attrs) WHERE is_migrated AND (xattr('dmapi.IBMTPS') LIKE '%:%' ) Not sure if the V and M misc_attributes are the same for other tape backends... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Thu May 3 21:13:14 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 3 May 2018 20:13:14 +0000 Subject: [gpfsug-discuss] FYI - SC18 - Hotels are now open for reservations! Message-ID: <1CE10F03-B49C-44DF-A772-B674D059457F@nuance.com> FYI, Hotels for SC18 are now open, and if it?s like any other year, they fill up FAST. Reserve one early since it?s no charge to hold it until 1 month before the conference. https://sc18.supercomputing.org/experience/housing/ Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zacekm at img.cas.cz Fri May 4 06:53:23 2018 From: zacekm at img.cas.cz (Michal Zacek) Date: Fri, 4 May 2018 07:53:23 +0200 Subject: [gpfsug-discuss] Temporary office files Message-ID: Hello, I have problem with "~$somename.xlsx" files in Samba shares at GPFS Samba cluster. These lock files are supposed to be removed by Samba with "delete on close" function. This function is working? at standard Samba server in Centos but not with Samba cluster at GPFS. Is this function disabled on purpose or is ti an error? I'm not sure if this problem was in older versions, but now with version 5.0.0.0 it's easy to reproduce. Just open and close any excel file, and "~$xxxx.xlsx" file will remain at share. You have to uncheck "hide protected operating system files" on Windows to see them. Any help would be appreciated. Regards, Michal -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3776 bytes Desc: Elektronicky podpis S/MIME URL: From r.sobey at imperial.ac.uk Fri May 4 09:10:33 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 4 May 2018 08:10:33 +0000 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: Hi Michal, We occasionally get a request to close a lock file for an Office document but I wouldn't necessarily say we could easily reproduce it. We're still running 4.2.3.7 though so YMMV. I'm building out my test cluster at the moment to do some experiments and as soon as 5.0.1 is released I'll be upgrading it to check it out. Thanks Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Michal Zacek Sent: 04 May 2018 06:53 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Temporary office files Hello, I have problem with "~$somename.xlsx" files in Samba shares at GPFS Samba cluster. These lock files are supposed to be removed by Samba with "delete on close" function. This function is working? at standard Samba server in Centos but not with Samba cluster at GPFS. Is this function disabled on purpose or is ti an error? I'm not sure if this problem was in older versions, but now with version 5.0.0.0 it's easy to reproduce. Just open and close any excel file, and "~$xxxx.xlsx" file will remain at share. You have to uncheck "hide protected operating system files" on Windows to see them. Any help would be appreciated. Regards, Michal From Achim.Rehor at de.ibm.com Fri May 4 09:17:52 2018 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Fri, 4 May 2018 10:17:52 +0200 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 7182 bytes Desc: not available URL: From zacekm at img.cas.cz Fri May 4 10:40:50 2018 From: zacekm at img.cas.cz (Michal Zacek) Date: Fri, 4 May 2018 11:40:50 +0200 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: Hi Achim Set "gpfs:sharemodes=no" did the trick and I will upgrade to 5.0.0.2 next week. Thank you very much. Regards, Michal Dne 4.5.2018 v 10:17 Achim Rehor napsal(a): > Hi Michal, > > there was an open defect on this, which had been fixed in level > 4.2.3.7 (APAR _IJ03182 _ > ) > gpfs.smb 4.5.15_gpfs_31-1 > should be in gpfs.smb 4.6.11_gpfs_31-1 ?package for the 5.0.0 PTF1 level. 
> > > > > Mit freundlichen Gr??en / Kind regards > > *Achim Rehor* > > ------------------------------------------------------------------------ > Software Technical Support Specialist AIX/ Emea HPC Support > IBM Certified Advanced Technical Expert - Power Systems with AIX > TSCC Software Service, Dept. 7922 > Global Technology Services > ------------------------------------------------------------------------ > Phone: +49-7034-274-7862 ?IBM Deutschland > E-Mail: Achim.Rehor at de.ibm.com ?Am Weiher 24 > ?65451 Kelsterbach > ?Germany > > ------------------------------------------------------------------------ > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, > Stefan Lutz, Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht > Stuttgart, HRB 14562 WEEE-Reg.-Nr. DE 99369940 > > > > > > > From: Michal Zacek > To: gpfsug-discuss at spectrumscale.org > Date: 04/05/2018 08:03 > Subject: [gpfsug-discuss] Temporary office files > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > ------------------------------------------------------------------------ > > > > Hello, > > I have problem with "~$somename.xlsx" files in Samba shares at GPFS > Samba cluster. These lock files are supposed to be removed by Samba with > "delete on close" function. This function is working? at standard Samba > server in Centos but not with Samba cluster at GPFS. Is this function > disabled on purpose or is ti an error? I'm not sure if this problem was > in older versions, but now with version 5.0.0.0 it's easy to reproduce. > Just open and close any excel file, and "~$xxxx.xlsx" file will remain > at share. You have to uncheck "hide protected operating system files" on > Windows to see them. > Any help would be appreciated. > > Regards, > Michal > > [attachment "smime.p7s" deleted by Achim Rehor/Germany/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nfhdombajgidkknc.png Type: image/png Size: 7182 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3776 bytes Desc: Elektronicky podpis S/MIME URL: From makaplan at us.ibm.com Fri May 4 15:03:37 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 4 May 2018 10:03:37 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <75615.1525374963@turing-police.cc.vt.edu> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org><1525362764.27337.140.camel@strath.ac.uk> <75615.1525374963@turing-police.cc.vt.edu> Message-ID: "Not sure if the V and M misc_attributes are the same for other tape backends..." define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) There are good, valid and fairly efficient tests for any files Spectrum Scale system that has a DMAPI based HSM system installed with it. 
(TSM/HSM, HPSS, LTFS/EE, ...) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From makaplan at us.ibm.com Fri May 4 16:16:26 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 4 May 2018 11:16:26 -0400 Subject: [gpfsug-discuss] Determining which files are migrated or premigated wrt HSM In-Reply-To: References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org><1525362764.27337.140.camel@strath.ac.uk><75615.1525374963@turing-police.cc.vt.edu> Message-ID: define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) THESE are good, valid and fairly efficient tests for any files Spectrum Scale system that has a DMAPI based HSM system installed with it. (TSM/HSM, HPSS, LTFS/EE, ...) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 4 16:38:57 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 4 May 2018 15:38:57 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? Message-ID: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anobre at br.ibm.com Fri May 4 16:52:27 2018 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Fri, 4 May 2018 15:52:27 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From skylar2 at uw.edu Fri May 4 16:49:12 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Fri, 4 May 2018 15:49:12 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <20180504154912.vabqnigzvyacfex4@utumno.gs.washington.edu> Our experience is that CES (at least NFS/ganesha) can easily consume all of the CPU resources on a system. If you're running it on the same hardware as your NSD services, then you risk delaying native GPFS I/O requests as well. We haven't found a great way to limit the amount of resources that NFS/ganesha can use, though maybe in the future it could be put in a cgroup since it's all user-space? On Fri, May 04, 2018 at 03:38:57PM +0000, Buterbaugh, Kevin L wrote: > Hi All, > > In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ??? but I???ve not found any detailed explanation of why not. > > I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ??? 
say, late model boxes with 2 x 8 core CPU???s, 256 GB RAM, 10 GbE networking ??? is there any reason why I still should not combine the two? > > To answer the question of why I would want to ??? simple, server licenses. > > Thanks??? > > Kevin > > ??? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and Education > Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 4 16:56:44 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 4 May 2018 15:56:44 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu> Hi Anderson, Thanks for the response ? however, the scenario you describe below wouldn?t impact us. We have 8 NSD servers and they can easily provide the needed performance to native GPFS clients. We could also take a downtime if we ever did need to expand in the manner described below. In fact, one of the things that?s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime. Let?s just say that I know for a fact that sernet-samba can be done rolling / live. Kevin On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre > wrote: Hi Kevin, I think one of the reasons is if you need to add or remove nodes from cluster you will start to face the constrains of this kind of solution. Let's say you have a cluster with two nodes and share the same set of LUNs through SAN. And for some reason you need to add more two nodes that are NSD Servers and Protocol nodes. For the new nodes become NSD Servers, you will have to redistribute the NSD disks among four nodes. But for you do that you will have to umount the filesystems. And for you umount the filesystems you would need to stop protocol services. At the end you will realize that a simple task like that is disrruptive. You won't be able to do online. Abra?os / Regards / Saludos, Anderson Nobre AIX & Power Consultant Master Certified IT Specialist IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone: 55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Buterbaugh, Kevin L" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [gpfsug-discuss] Not recommended, but why not? Date: Fri, May 4, 2018 12:39 PM Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? 
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C2b0fc12c4dc24aa1f7fb08d5b1d70c9e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610459542553835&sdata=8aArQLzU5q%2BySqHcoQ3SI420XzP08ICph7F18G7C4pw%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Fri May 4 17:26:54 2018 From: oehmes at gmail.com (Sven Oehme) Date: Fri, 04 May 2018 16:26:54 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L < Kevin.Buterbaugh at vanderbilt.edu> wrote: > Hi All, > > In doing some research, I have come across numerous places (IBM docs, > DeveloperWorks posts, etc.) where it is stated that it is not recommended > to run CES on NSD servers ? but I?ve not found any detailed explanation of > why not. > > I understand that CES, especially if you enable SMB, can be a resource > hog. But if I size the servers appropriately ? say, late model boxes with > 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I > still should not combine the two? > > To answer the question of why I would want to ? simple, server licenses. > > Thanks? > > Kevin > > ? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and > Education > Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 <(615)%20875-9633> > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Fri May 4 18:30:05 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 4 May 2018 17:30:05 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> You also have to be careful with network utilization? we have some very hungry NFS clients in our environment and the NFS traffic can actually DOS other services that need to use the network links. If you configure GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then this could lead to GPFS node evictions if disk leases cannot get renewed. 
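To make the c-group jail and the traffic-shaping ideas in this thread a bit more concrete, a rough sketch follows. It assumes the libcgroup tools (cgroup v1), Ganesha and Samba daemons named ganesha.nfsd and smbd, a shared 10 GbE interface called eth0, and purely illustrative CPU and bandwidth numbers, so treat it as a starting point rather than a recipe:

# Cap the CPU the protocol daemons can use (about 8 cores in this example)
cgcreate -g cpu:/ces-protocols
cgset -r cpu.cfs_period_us=100000 ces-protocols
cgset -r cpu.cfs_quota_us=800000 ces-protocols
for pid in $(pidof ganesha.nfsd smbd); do
    cgclassify -g cpu:ces-protocols "$pid"
done

# Shape the shared link so GPFS daemon traffic (TCP port 1191) always has headroom
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 10gbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2gbit ceil 10gbit   # mmfsd traffic
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 6gbit ceil 8gbit    # NFS/SMB and the rest
tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip dport 1191 0xffff flowid 1:10
tc filter add dev eth0 protocol ip parent 1: prio 1 u32 match ip sport 1191 0xffff flowid 1:10

Note that CES restarts the protocol daemons on its own, so the cgclassify step has to be reapplied after a restart (a callback or a systemd drop-in is the usual way), and the tc rates should come from measured daemon and protocol traffic rather than the guesses above.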
You could limit the amount that SMV/NFS use on the network with something like the tc facility if you?re sharing the network interfaces for GPFS and CES services. HTH, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Sven Oehme Sent: Friday, May 04, 2018 11:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Not recommended, but why not? Note: External Email ________________________________ there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L > wrote: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Fri May 4 23:08:39 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Fri, 4 May 2018 22:08:39 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu> References: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu>, Message-ID: An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Sat May 5 09:57:11 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Sat, 5 May 2018 09:57:11 +0100 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> References: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> Message-ID: <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> On 04/05/18 18:30, Bryan Banister wrote: > You also have to be careful with network utilization? we have some very > hungry NFS clients in our environment and the NFS traffic can actually > DOS other services that need to use the network links.? If you configure > GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then > this could lead to GPFS node evictions if disk leases cannot get > renewed.? You could limit the amount that SMV/NFS use on the network > with something like the tc facility if you?re sharing the network > interfaces for GPFS and CES services. > The right answer to that IMHO is a separate VLAN for the GPFS command/control traffic that is prioritized above all other VLAN's. Do something like mark it as a voice VLAN. Basically don't rely on some OS layer to do the right thing at layer three, enforce it at layer two in the switches. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jagga13 at gmail.com Mon May 7 02:35:19 2018 From: jagga13 at gmail.com (Jagga Soorma) Date: Sun, 6 May 2018 18:35:19 -0700 Subject: [gpfsug-discuss] CES NFS export Message-ID: Hi Guys, We are new to gpfs and have a few client that will be mounting gpfs via nfs. We have configured the exports but all user/group permissions are showing up as nobody. The gateway/protocol nodes can query the uid/gid's via centrify without any issues as well as the clients and the perms look good on a client that natively accesses the gpfs filesystem. Is there some specific config that we might be missing? 
-- # mmnfs export list --nfsdefs /gpfs/datafs1 Path Delegations Clients Access_Type Protocols Transports Squash Anonymous_uid Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids NFS_Commit ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE TRUE FALSE /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP NO_ROOT_SQUASH -2 -2 SYS FALSE NONE TRUE FALSE /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE TRUE FALSE -- On the nfs clients I see this though: -- # ls -l total 0 drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 -- Here is our mmnfs config: -- # mmnfs config list NFS Ganesha Configuration: ========================== NFS_PROTOCOLS: 3,4 NFS_PORT: 2049 MNT_PORT: 0 NLM_PORT: 0 RQUOTA_PORT: 0 NB_WORKER: 256 LEASE_LIFETIME: 60 DOMAINNAME: VIRTUAL1.COM DELEGATIONS: Disabled ========================== STATD Configuration ========================== STATD_PORT: 0 ========================== CacheInode Configuration ========================== ENTRIES_HWMARK: 1500000 ========================== Export Defaults ========================== ACCESS_TYPE: NONE PROTOCOLS: 3,4 TRANSPORTS: TCP ANONYMOUS_UID: -2 ANONYMOUS_GID: -2 SECTYPE: SYS PRIVILEGEDPORT: FALSE MANAGE_GIDS: TRUE SQUASH: ROOT_SQUASH NFS_COMMIT: FALSE ========================== Log Configuration ========================== LOG_LEVEL: EVENT ========================== Idmapd Configuration ========================== LOCAL-REALMS: LOCALDOMAIN DOMAIN: LOCALDOMAIN ========================== -- Thanks! From jagga13 at gmail.com Mon May 7 04:05:01 2018 From: jagga13 at gmail.com (Jagga Soorma) Date: Sun, 6 May 2018 20:05:01 -0700 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! From YARD at il.ibm.com Mon May 7 06:16:15 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Mon, 7 May 2018 08:16:15 +0300 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Hi If you want to use NFSv3 , define only NFSv3 on the export. In case you work with NFSv4 - you should have "DOMAIN\user" all the way - so this way you will not get any user mismatch errors, and see permissions like nobody. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jagga Soorma To: gpfsug-discuss at spectrumscale.org Date: 05/07/2018 06:05 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From chetkulk at in.ibm.com Mon May 7 09:08:33 2018 From: chetkulk at in.ibm.com (Chetan R Kulkarni) Date: Mon, 7 May 2018 13:38:33 +0530 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Make sure NFSv4 ID Mapping value matches on client and server. On server side (i.e. CES nodes); you can set as below: $ mmnfs config change IDMAPD_DOMAIN=test.com On client side (e.g. 
RHEL NFS client); one can set it using the Domain attribute in the /etc/idmapd.conf file.

$ egrep ^Domain /etc/idmapd.conf
Domain = test.com
[root at rh73node2 2018_05_07-13:31:11 ~]$
$ service nfs-idmap restart

Please refer to the following link for the details:

https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/b1ladm_authconsidfornfsv4access.htm

Thanks,
Chetan.

From: "Yaron Daniel"
To: gpfsug main discussion list
Date: 05/07/2018 10:46 AM
Subject: Re: [gpfsug-discuss] CES NFS export
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hi

If you want to use NFSv3, define only NFSv3 on the export. In case you work with NFSv4 - you should have "DOMAIN\user" all the way - so this way you will not get any user mismatch errors, and see permissions like nobody.

Regards

Yaron Daniel
Storage Architect, IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672
Fax: +972-3-916-5672
Mobile: +972-52-8395593
e-mail: yard at il.ibm.com

From: Jagga Soorma
To: gpfsug-discuss at spectrumscale.org
Date: 05/07/2018 06:05 AM
Subject: Re: [gpfsug-discuss] CES NFS export
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed.

Thanks!

On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote:
> Hi Guys,
>
> We are new to gpfs and have a few client that will be mounting gpfs
> via nfs. We have configured the exports but all user/group
> permissions are showing up as nobody. The gateway/protocol nodes can
> query the uid/gid's via centrify without any issues as well as the
> clients and the perms look good on a client that natively accesses the
> gpfs filesystem. Is there some specific config that we might be
> missing?
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=uic-29lyJ5TCiTRi0FyznYhKJx5I7Vzu80WyYuZ4_iM&m=3k9qWcL7UfySpNVW2J8S1XsIekUHTHBBYQhN7cPVg3Q&s=844KFrfpsN6nT-DKV6HdfS8EEejdwHuQxbNR8cX2cyc&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15633834.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15657152.gif Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15750750.gif Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15967392.gif Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 15858665.gif Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15884206.jpg Type: image/jpeg Size: 11294 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Mon May 7 16:05:36 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Mon, 7 May 2018 15:05:36 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <4E0D4232-14FC-4229-BFBC-B61242473456@vanderbilt.edu> Hi All, I want to thank all of you who took the time to respond to this question ? your thoughts / suggestions are much appreciated. What I?m taking away from all of this is that it is OK to run CES on NSD servers as long as you are very careful in how you set things up. This would include: 1. Making sure you have enough CPU horsepower and using cgroups to limit how much CPU SMB and NFS can utilize. 2. Making sure you have enough RAM ? 256 GB sounds like it should be ?enough? when using SMB. 3. Making sure you have your network config properly set up. We would be able to provide three separate, dedicated 10 GbE links for GPFS daemon communication, GPFS multi-cluster link to our HPC cluster, and SMB / NFS communication. 4. Making sure you have good monitoring of all of the above in place. Have I missed anything or does anyone have any additional thoughts? Thanks? Kevin On May 4, 2018, at 11:26 AM, Sven Oehme > wrote: there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L > wrote: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? 
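For the monitoring point in the list above, a quick health sweep of a combined NSD/CES node can be scripted from the standard commands (present in recent 4.2.x and 5.0 releases; exact output varies by release), for example:

# Overall node and cluster component health, including CES services
mmhealth node show
mmhealth cluster show

# CES state, service placement and floating addresses
mmces state show -a
mmces service list -a
mmces address list

# Long waiters are an early warning that protocol load is starving mmfsd
mmdiag --waiters

Feeding these into whatever alerting is already in place is usually enough to catch CPU or network pressure before it turns into expels.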
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C6ec06d262ea84752b1d408d5b1dbe2cc%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610480314880560&sdata=J5%2F9X4dNeLrGKH%2BwmhIObVK%2BQ4oyoIa1vZ9F2yTU854%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Mon May 7 17:53:19 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 7 May 2018 16:53:19 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> References: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> Message-ID: <9b83806da68c4afe85a048ac736e0d5c@jumptrading.com> Sure, many ways to solve the same problem, just depends on where you want to have the controls. Having a separate VLAN doesn't give you as fine grained controls over each network workload you are using, such as metrics collection, monitoring, GPFS, SSH, NFS vs SMB, vs Object, etc. But it doesn't matter how it's done as long as you ensure GPFS has enough bandwidth to function, cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Jonathan Buzzard Sent: Saturday, May 05, 2018 3:57 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Not recommended, but why not? Note: External Email ------------------------------------------------- On 04/05/18 18:30, Bryan Banister wrote: > You also have to be careful with network utilization? we have some very > hungry NFS clients in our environment and the NFS traffic can actually > DOS other services that need to use the network links. If you configure > GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then > this could lead to GPFS node evictions if disk leases cannot get > renewed. You could limit the amount that SMV/NFS use on the network > with something like the tc facility if you?re sharing the network > interfaces for GPFS and CES services. > The right answer to that IMHO is a separate VLAN for the GPFS command/control traffic that is prioritized above all other VLAN's. Do something like mark it as a voice VLAN. Basically don't rely on some OS layer to do the right thing at layer three, enforce it at layer two in the switches. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. From jfosburg at mdanderson.org Tue May 8 14:32:54 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Tue, 8 May 2018 13:32:54 +0000 Subject: [gpfsug-discuss] Snapshots for backups Message-ID: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From LloydDean at us.ibm.com Tue May 8 15:59:37 2018 From: LloydDean at us.ibm.com (Lloyd Dean) Date: Tue, 8 May 2018 14:59:37 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: Jonathan, First it must be understood the snap is either at the filesystems or fileset, and more importantly is not an application level backup. This is a huge difference to say Protects many application integrations like exchange, databases, etc. With that understood the approach is similar to what others are doing. Just understand the restrictions. Lloyd Dean IBM Software Storage Architect/Specialist Communication & CSI Heartland Email: LloydDean at us.ibm.com Phone: (720) 395-1246 > On May 8, 2018, at 8:44 AM, Fosburgh,Jonathan wrote: > > We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: > > Replicate to a remote filesystem (I assume this is best done via AFM). 
> Take periodic (probably daily) snapshots at the remote site. > > The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? > The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From UWEFALKE at de.ibm.com Tue May 8 18:20:49 2018
From: UWEFALKE at de.ibm.com (Uwe Falke)
Date: Tue, 8 May 2018 19:20:49 +0200
Subject: [gpfsug-discuss] Snapshots for backups
In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org>
References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org>
Message-ID:

One thought: file A is created and synched out. It is changed a bit later (say a few days). You then have the original version only in one snapshot, and the modified version in the active file system (unless it is changed again). At some point you will need to delete the snapshot holding the initial version, since you can keep only a finite number of snapshots. The initial version is then gone forever.

Mit freundlichen Grüßen / Kind regards

Dr. Uwe Falke
IT Specialist
High Performance Computing Services / Integrated Technology Services / Data Center Services
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: Thomas Wolter, Sven Schooß Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122

From: "Fosburgh,Jonathan"
To: gpfsug main discussion list
Date: 08/05/2018 15:44
Subject: [gpfsug-discuss] Snapshots for backups
Sent by: gpfsug-discuss-bounces at spectrumscale.org

We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup?
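As a sketch of what the snapshot half of such a setup could look like on the secondary cluster, assuming a file system named fs1, snapshots named daily-YYYYMMDD, a seven-day retention and GNU coreutils on the node running it (all of these are illustrative choices):

#!/bin/bash
# keep_daily_snapshots.sh - run from cron on one node of the secondary cluster,
# e.g.: 30 1 * * * root /usr/local/sbin/keep_daily_snapshots.sh
FS=fs1
KEEP=7

# Create today's global snapshot; stop here if that fails
/usr/lpp/mmfs/bin/mmcrsnapshot "$FS" "daily-$(date +%Y%m%d)" || exit 1

# Expire the oldest daily-* snapshots beyond the retention window
/usr/lpp/mmfs/bin/mmlssnapshot "$FS" | awk '$1 ~ /^daily-/ {print $1}' | sort | head -n -"$KEEP" |
while read -r snap; do
    /usr/lpp/mmfs/bin/mmdelsnapshot "$FS" "$snap"
done

If the replication leg ends up being AFM DR, it is worth checking the peer snapshot (psnap) support in the AFM documentation first, since that gives consistent recovery points without a hand-rolled scheme.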
I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From valdis.kletnieks at vt.edu Tue May 8 18:24:37 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Tue, 08 May 2018 13:24:37 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: Message-ID: <29277.1525800277@turing-police.cc.vt.edu> On Tue, 08 May 2018 14:59:37 -0000, "Lloyd Dean" said: > First it must be understood the snap is either at the filesystems or fileset, > and more importantly is not an application level backup. This is a huge > difference to say Protects many application integrations like exchange, > databases, etc. And remember that a GPFS snapshot will only capture the disk as GPFS knows about it - any memory-cached data held by databases etc will *not* be captured (leading to the possibility of an inconsistent version being snapped). You'll need to do some sort of handshaking with any databases to get them to do a "flush everything to disk" to ensure on-disk consistency. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Tue May 8 19:23:35 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Tue, 8 May 2018 18:23:35 +0000 Subject: [gpfsug-discuss] Node list error Message-ID: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 8 21:51:02 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 8 May 2018 20:51:02 +0000 Subject: [gpfsug-discuss] Node list error In-Reply-To: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> Message-ID: <342034e96e1f409b889b0e9aa4036098@jumptrading.com> What does `mmlsnodeclass -N ` give you? -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Node list error Note: External Email ________________________________ Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 
2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeattie at au1.ibm.com Tue May 8 22:38:09 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 8 May 2018 21:38:09 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed May 9 13:16:03 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 12:16:03 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? (obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. 
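If the replication leg is AFM based, the health of each relationship can be checked from the cache side before trusting the remote copy; a minimal check might look like this, with fs1 and dr_fileset as placeholder names:

# Per-fileset AFM state (Active, Dirty, Unmounted, NeedsResync, ...) and queue length
/usr/lpp/mmfs/bin/mmafmctl fs1 getstate

# The same for a single fileset
/usr/lpp/mmfs/bin/mmafmctl fs1 getstate -j dr_fileset

A queue that never drains is the thing to alarm on, because snapshots taken at the remote site can only ever be as current as the last data that actually replicated.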
The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed May 9 13:50:20 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 9 May 2018 12:50:20 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 9 14:13:04 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2018 14:13:04 +0100 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: <1525871584.27337.200.camel@strath.ac.uk> On Wed, 2018-05-09 at 12:50 +0000, Andrew Beattie wrote: > ? > From my perspective the difference / benefits of using something like > Protect and using backup policies over snapshot policies - even if > its disk based rather than tape based,? is that with a backup you get > far better control over your Disaster Recovery process. The policy > integration with Scale and Protect is very comprehensive.? 
If the > issue is Tape time for recovery - simply change from tape medium to a > Disk storage pool as your repository for Protect, you get all the > benefits of Spectrum Protect and the restore speeds of disk, (you > might even - subject to type of data start to see some benefits of > duplication and compression for your backups as you will be able to > take advantage of Protect's dedupe and compression for the disk based > storage pool, something that's not available on your tape > environment. The way I see it is that snapshots are not backup. They are handy for quick recovery from file deletion mistakes. They are utterly useless when your disaster recovery is needed because for example all your NSD descriptors have been overwritten (not my mistake I hasten to add). AT that point your snapshots are for jack. > ? > If your looking for a way to further reduce your disk costs then > potentially the benefits of Object Storage erasure coding might be > worth looking at although for a 1 or 2 site scenario the overheads > are pretty much the same if you use some variant of distributed raid > or if you use erasure coding. > ? At scale tape is a lot cheaper than disk. Also sorry your data is going to take a couple of weeks to recover goes down a lot better than sorry your data is gone for ever. Finally it's also hard for a hacker or disgruntled admin to wipe your tapes in a short period of time. The robot don't go that fast. Your disks/file systems on the other hand effectively be gone in seconds. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jfosburg at mdanderson.org Wed May 9 14:29:23 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 13:29:23 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: I agree with your points. The thought here, is that if we had a complete loss of the primary site, we could bring up the secondary in relatively short order (hours or days instead of weeks or months). Maybe this is true, and maybe this isn?t, though I do see (and have advocated for) a DR setup much like that. My concern is that the use of snapshots as a substitute for traditional backups for a Scale environment is that that is an inappropriate use of the technology, particularly when we have a tool designed for that and that works. Let me take a moment to reiterate something that may be getting lost. The snapshots will be taken against the remote copy and recovered from there. We will not be relying on the primary site for this function. We were starting to look at ESS as a destination for these backups. I have also considered that a multisite ICOS implementation might work to satisfy some of our general backup requirements. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, May 9, 2018 at 7:51 AM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups From my perspective the difference / benefits of using something like Protect and using backup policies over snapshot policies - even if its disk based rather than tape based, is that with a backup you get far better control over your Disaster Recovery process. The policy integration with Scale and Protect is very comprehensive. 
If the issue is Tape time for recovery - simply change from tape medium to a Disk storage pool as your repository for Protect, you get all the benefits of Spectrum Protect and the restore speeds of disk, (you might even - subject to type of data start to see some benefits of duplication and compression for your backups as you will be able to take advantage of Protect's dedupe and compression for the disk based storage pool, something that's not available on your tape environment. If your looking for a way to further reduce your disk costs then potentially the benefits of Object Storage erasure coding might be worth looking at although for a 1 or 2 site scenario the overheads are pretty much the same if you use some variant of distributed raid or if you use erasure coding. Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: Re: [gpfsug-discuss] Snapshots for backups Date: Wed, May 9, 2018 10:28 PM Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? 
(obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. 
If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed May 9 14:31:36 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 13:31:36 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: <81738C1C-FAFC-416A-9937-B99E86809EE4@mdanderson.org> That is the use case for snapshots, taken at the remote site. Recovery from accidental deletion. ?On 5/9/18, 8:13 AM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Jonathan Buzzard" wrote: The way I see it is that snapshots are not backup. They are handy for quick recovery from file deletion mistakes. They are utterly useless when your disaster recovery is needed because for example all your NSD descriptors have been overwritten (not my mistake I hasten to add). AT that point your snapshots are for jack. The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. From MKEIGO at jp.ibm.com Wed May 9 14:36:37 2018 From: MKEIGO at jp.ibm.com (Keigo Matsubara) Date: Wed, 9 May 2018 22:36:37 +0900 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: Not sure if the topic is appropriate, but I know an installation case which employs IBM Spectrum Scale's snapshot function along with IBM Spectrum Protect to save the backup date onto LTO7 tape media. Both software components running on Linux on Power (RHEL 7.3 BE) if that matters. Of course, snapshots are taken per independent fileset. --- Keigo Matsubara, Storage Solutions Client Technical Specialist, IBM Japan TEL: +81-50-3150-0595, T/L: 6205-0595 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Wed May 9 14:37:43 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Wed, 9 May 2018 13:37:43 +0000 Subject: [gpfsug-discuss] mmlsnsd -m or -M Message-ID: <6f1760ea2d1244959d25763442ba96c0@SMXRF105.msg.hukrf.de> Hallo All, we experience some difficults in using mmlsnsd -m on 4.2.3.8 and 5.0.0.2. Are there any known bugs or changes happening here, that these function don?t does what it wants. The outputs are now for these suboption -m or -M the same!!??. Regards Renar Renar Grunenberg Abteilung Informatik ? 
Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 9 15:23:59 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 9 May 2018 14:23:59 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: <08326DC0-30CF-4A63-A111-1EDBDC19E3F0@bham.ac.uk> For DR, what about making your secondary site mostly an object store, use TCT to pre-migrate the data out and then use SOBAR to dump the catalogue. You then restore the SOBAR dump to the DR site and have pretty much instant most of your data available. You could do the DR with tape/pre-migration as well, it?s just slower. OFC with SOBAR, you are just restoring the data that is being accessed or you target to migrate back in. Equally Protect can also backup/migrate to an object pool (note you can?t currently migrate in the Protect sense from a TSM object pool to a TSM disk/tape pool). And put snapshots in at home for the instant ?need to restore a file?. If this is appropriate depends on what you agree your RPO to be. Scale/Protect for us allows us to recover data N months after the user deleted the file and didn?t notice. Simon From: on behalf of "jfosburg at mdanderson.org" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Wednesday, 9 May 2018 at 14:30 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups I agree with your points. The thought here, is that if we had a complete loss of the primary site, we could bring up the secondary in relatively short order (hours or days instead of weeks or months). Maybe this is true, and maybe this isn?t, though I do see (and have advocated for) a DR setup much like that. My concern is that the use of snapshots as a substitute for traditional backups for a Scale environment is that that is an inappropriate use of the technology, particularly when we have a tool designed for that and that works. Let me take a moment to reiterate something that may be getting lost. The snapshots will be taken against the remote copy and recovered from there. 
We will not be relying on the primary site for this function. We were starting to look at ESS as a destination for these backups. I have also considered that a multisite ICOS implementation might work to satisfy some of our general backup requirements. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, May 9, 2018 at 7:51 AM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups From my perspective the difference / benefits of using something like Protect and using backup policies over snapshot policies - even if its disk based rather than tape based, is that with a backup you get far better control over your Disaster Recovery process. The policy integration with Scale and Protect is very comprehensive. If the issue is Tape time for recovery - simply change from tape medium to a Disk storage pool as your repository for Protect, you get all the benefits of Spectrum Protect and the restore speeds of disk, (you might even - subject to type of data start to see some benefits of duplication and compression for your backups as you will be able to take advantage of Protect's dedupe and compression for the disk based storage pool, something that's not available on your tape environment. If your looking for a way to further reduce your disk costs then potentially the benefits of Object Storage erasure coding might be worth looking at although for a 1 or 2 site scenario the overheads are pretty much the same if you use some variant of distributed raid or if you use erasure coding. Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: Re: [gpfsug-discuss] Snapshots for backups Date: Wed, May 9, 2018 10:28 PM Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 
3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? (obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkr at lbl.gov Wed May 9 17:01:30 2018 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Wed, 9 May 2018 09:01:30 -0700 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: +1 for benefits of tape and also power consumption/heat production (may help a case to management) is obviously better with things that don?t have to be spinning all the time. > > At scale tape is a lot cheaper than disk. Also sorry your data is going > to take a couple of weeks to recover goes down a lot better than sorry > your data is gone for ever. > > Finally it's also hard for a hacker or disgruntled admin to wipe your > tapes in a short period of time. The robot don't go that fast. Your > disks/file systems on the other hand effectively be gone in seconds. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Wed May 9 20:01:55 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 9 May 2018 15:01:55 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org><1525871584.27337.200.camel@strath.ac.uk> Message-ID: I see there are also low-power / zero-power disk archive/arrays available. Any experience with those? From: Kristy Kallback-Rose To: gpfsug main discussion list Date: 05/09/2018 12:20 PM Subject: Re: [gpfsug-discuss] Snapshots for backups Sent by: gpfsug-discuss-bounces at spectrumscale.org +1 for benefits of tape and also power consumption/heat production (may help a case to management) is obviously better with things that don?t have to be spinning all the time. > > At scale tape is a lot cheaper than disk. Also sorry your data is going > to take a couple of weeks to recover goes down a lot better than sorry > your data is gone for ever. > > Finally it's also hard for a hacker or disgruntled admin to wipe your > tapes in a short period of time. The robot don't go that fast. Your > disks/file systems on the other hand effectively be gone in seconds. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Wed May 9 21:33:26 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Wed, 09 May 2018 16:33:26 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org><1525871584.27337.200.camel@strath.ac.uk> Message-ID: <31428.1525898006@turing-police.cc.vt.edu> On Wed, 09 May 2018 15:01:55 -0400, "Marc A Kaplan" said: > I see there are also low-power / zero-power disk archive/arrays available. > Any experience with those? The last time I looked at those (which was a few years ago) they were competitive with tape for power consumption, but not on cost per terabyte - it takes a lot less cable and hardware to hook up a dozen tape drives and a robot arm that can reach 10,000 volumes than it does to wire up 10,000 disks of which only 500 are actually spinning at any given time... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From skylar2 at uw.edu Wed May 9 21:46:45 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Wed, 9 May 2018 20:46:45 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <31428.1525898006@turing-police.cc.vt.edu> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> <31428.1525898006@turing-police.cc.vt.edu> Message-ID: <20180509204645.fy5js7kjxslihjjr@utumno.gs.washington.edu> On Wed, May 09, 2018 at 04:33:26PM -0400, valdis.kletnieks at vt.edu wrote: > On Wed, 09 May 2018 15:01:55 -0400, "Marc A Kaplan" said: > > > I see there are also low-power / zero-power disk archive/arrays available. > > Any experience with those? > > The last time I looked at those (which was a few years ago) they were competitive > with tape for power consumption, but not on cost per terabyte - it takes a lot less > cable and hardware to hook up a dozen tape drives and a robot arm that can > reach 10,000 volumes than it does to wire up 10,000 disks of which only 500 are > actually spinning at any given time... I also wonder what the lifespan of cold-storage hard drives are relative to tape. With BaFe universal for LTO now, our failure rate for tapes has gone way down (not that it was very high relative to HDDs anyways). FWIW, the operating+capital costs we recharge our grants for tape storage is ~50% of what we recharge them for bulk disk storage. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From daniel.kidger at uk.ibm.com Thu May 10 11:19:49 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Thu, 10 May 2018 10:19:49 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <4E0D4232-14FC-4229-BFBC-B61242473456@vanderbilt.edu> Message-ID: One additional point to consider is what happens on a hardware failure. eg. 
If you have two NSD servers that are both CES servers and one fails, then there is a double-failure at exactly the same point in time. Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 7 May 2018, at 16:39, Buterbaugh, Kevin L wrote: > > Hi All, > > I want to thank all of you who took the time to respond to this question ? your thoughts / suggestions are much appreciated. > > What I?m taking away from all of this is that it is OK to run CES on NSD servers as long as you are very careful in how you set things up. This would include: > > 1. Making sure you have enough CPU horsepower and using cgroups to limit how much CPU SMB and NFS can utilize. > 2. Making sure you have enough RAM ? 256 GB sounds like it should be ?enough? when using SMB. > 3. Making sure you have your network config properly set up. We would be able to provide three separate, dedicated 10 GbE links for GPFS daemon communication, GPFS multi-cluster link to our HPC cluster, and SMB / NFS communication. > 4. Making sure you have good monitoring of all of the above in place. > > Have I missed anything or does anyone have any additional thoughts? Thanks? > > Kevin > >> On May 4, 2018, at 11:26 AM, Sven Oehme wrote: >> >> there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. >> the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. >> >> sven >> >>> On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L wrote: >>> Hi All, >>> >>> In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. >>> >>> I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? >>> >>> To answer the question of why I would want to ? simple, server licenses. >>> >>> Thanks? >>> >>> Kevin >>> >>> ? 
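To make the "jail the SMB and NFS daemons into a c-group" suggestion concrete, one possible shape on a systemd-based CES node is a CPU-quota drop-in. This is a sketch only: the unit name and the 400% figure are assumptions, and it is not an IBM-documented procedure.

# Cap the CES NFS daemon at roughly four cores' worth of CPU.
# nfs-ganesha.service is an assumed unit name -- check what your CES
# node actually runs, and repeat the drop-in for the SMB service.
mkdir -p /etc/systemd/system/nfs-ganesha.service.d
cat > /etc/systemd/system/nfs-ganesha.service.d/cpu.conf <<'EOF'
[Service]
CPUAccounting=yes
CPUQuota=400%
EOF
systemctl daemon-reload
# (cycling the daemon via 'mmces service stop/start NFS' may be preferable
# on CES nodes rather than restarting the unit directly)
systemctl restart nfs-ganesha

Leaving memory uncapped, as suggested above, avoids the daemons being OOM-killed while still protecting mmfsd from CPU starvation and the expels that can follow.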
>>> Kevin Buterbaugh - Senior System Administrator >>> Vanderbilt University - Advanced Computing Center for Research and Education >>> Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C6ec06d262ea84752b1d408d5b1dbe2cc%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610480314880560&sdata=J5%2F9X4dNeLrGKH%2BwmhIObVK%2BQ4oyoIa1vZ9F2yTU854%3D&reserved=0 > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Thu May 10 13:51:45 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Thu, 10 May 2018 15:51:45 +0300 Subject: [gpfsug-discuss] Node list error In-Reply-To: <342034e96e1f409b889b0e9aa4036098@jumptrading.com> References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> <342034e96e1f409b889b0e9aa4036098@jumptrading.com> Message-ID: Hi Just to verify - there is no Firewalld running or Selinux ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Bryan Banister To: gpfsug main discussion list Date: 05/08/2018 11:51 PM Subject: Re: [gpfsug-discuss] Node list error Sent by: gpfsug-discuss-bounces at spectrumscale.org What does `mmlsnodeclass -N ` give you? -B From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Node list error Note: External Email Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Thu May 10 14:37:05 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 10 May 2018 13:37:05 +0000 Subject: [gpfsug-discuss] Node list error In-Reply-To: References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> <342034e96e1f409b889b0e9aa4036098@jumptrading.com> Message-ID: Hi Yaron, Thanks for the response ? no firewalld nor SELinux. I went ahead and opened up a PMR and it turns out this is a known defect (at least in GPFS 5, I may have been the first to report it in GPFS 4.2.3.x) and IBM is working on a fix. Thanks? Kevin On May 10, 2018, at 7:51 AM, Yaron Daniel > wrote: Hi Just to verify - there is no Firewalld running or Selinux ? Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Bryan Banister > To: gpfsug main discussion list > Date: 05/08/2018 11:51 PM Subject: Re: [gpfsug-discuss] Node list error Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ What does `mmlsnodeclass -N ` give you? -B From:gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Node list error Note: External Email ________________________________ Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu- (615)875-9633 ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
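Gathering the checks suggested in this thread into one place, a hedged triage list for the "Node list error" message (the node name is a placeholder and the log path is the standard 4.2.3 location):

# Does the node resolve to the expected node classes?
mmlsnodeclass -N somenode01

# Anything blocking or mediating traffic on the node?
systemctl status firewalld
getenforce

# Overall node/sysmonitor state, plus the original messages:
mmhealth node show
grep "Node list error" /var/adm/ras/mmfs.log.latest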
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C58826c68a116427f5c2d08d5b674e2b2%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636615535509439494&sdata=eB3wc4PtGINXs0pAA9GYowE6ERimMahPBWzejHuOexQ%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From JRLang at uwyo.edu Thu May 10 20:32:00 2018 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Thu, 10 May 2018 19:32:00 +0000 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? In-Reply-To: References: Message-ID: Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From luis.bolinches at fi.ibm.com Thu May 10 23:22:01 2018 From: luis.bolinches at fi.ibm.com (Luis Bolinches) Date: Fri, 11 May 2018 00:22:01 +0200 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? In-Reply-To: References: Message-ID: https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point Thanks -- Yst?v?llisin terveisin / Kind regards / Saludos cordiales / Salutations Luis Bolinches Consultant IT Specialist Mobile Phone: +358503112585 https://www.youracclaim.com/user/luis-bolinches "If you always give you will always have" -- Anonymous From: "Jeffrey R. Lang" To: gpfsug main discussion list Date: 11/05/2018 00:05 Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? 
Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Ellei edell? ole toisin mainittu: / Unless stated otherwise above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Fri May 11 04:32:42 2018 From: knop at us.ibm.com (Felipe Knop) Date: Thu, 10 May 2018 23:32:42 -0400 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x orabove? In-Reply-To: References: Message-ID: Luis, Correct. Jeff: The Spectrum Scale team has been actively working on the support for RHEL 7.5 . Since code changes will be required, the support will require upcoming 4.2.3 and 5.0 PTFs. The FAQ will be updated when support for 7.5 becomes available. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Luis Bolinches To: gpfsug main discussion list Date: 05/10/2018 06:22 PM Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point Thanks -- Yst?v?llisin terveisin / Kind regards / Saludos cordiales / Salutations Luis Bolinches Consultant IT Specialist Mobile Phone: +358503112585 https://www.youracclaim.com/user/luis-bolinches "If you always give you will always have" -- Anonymous From: "Jeffrey R. Lang" To: gpfsug main discussion list Date: 11/05/2018 00:05 Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? 
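For anyone hitting the same wall, a hedged sketch of reproducing the failure and of rebuilding the portability layer once a supporting PTF is installed; the kernel version shown is the RHEL 7.5 GA level and the path is the standard Scale install location.

# Check the running kernel against the Scale FAQ support matrix first.
uname -r        # RHEL 7.5 GA ships kernel 3.10.0-862

# Rebuild the GPFS portability layer for the running kernel.  On an
# unsupported combination (e.g. RHEL 7.5 with 4.2.3-8) this is where the
# compile errors described above appear; on a supporting PTF it should
# complete cleanly.
/usr/lpp/mmfs/bin/mmbuildgpl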
Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Ellei edell? ole toisin mainittu: / Unless stated otherwise above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From bbanister at jumptrading.com Fri May 11 17:25:06 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 11 May 2018 16:25:06 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out Message-ID: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> It's on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Fri May 11 18:11:12 2018 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Fri, 11 May 2018 17:11:12 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> Message-ID: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> I'd normally be excited by this, since we do aggressively apply GPFS upgrades. But it's worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you're also in the habit of aggressively upgrading RedHat then you're going to have to wait for 5.0.1-1 before you can resume that practice. From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It's on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 11 18:56:49 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 11 May 2018 17:56:49 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> Message-ID: On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network corruption of file data that the client reads from or writes to the NSD server. For more information, see the nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. Finally! Thanks, IBM (seriously)? Kevin On May 11, 2018, at 12:11 PM, Sanchez, Paul > wrote: I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. 
From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It?s on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Fri May 11 19:34:30 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 11 May 2018 18:34:30 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum Message-ID: <30E7142C-3D77-4A97-834B-D54FFF06564B@nuance.com> Ah be careful! looking at the man page for mmchconfig ?nsdCksumTraditional: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adm_mmchconfig.htm * Enabling this feature can result in significant I/O performance degradation and a considerable increase in CPU usage. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: on behalf of "Buterbaugh, Kevin L" Reply-To: gpfsug main discussion list Date: Friday, May 11, 2018 at 1:29 PM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network corruption of file data that the client reads from or writes to the NSD server. For more information, see the nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. Finally! Thanks, IBM (seriously)? Kevin On May 11, 2018, at 12:11 PM, Sanchez, Paul > wrote: I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. 
From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It?s on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri May 11 20:02:30 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 11 May 2018 19:02:30 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum In-Reply-To: <30E7142C-3D77-4A97-834B-D54FFF06564B@nuance.com> Message-ID: >From some graphs I have seen the overhead varies a lot depending on the I/O size and if read or write and if random IO or not. So definitely YMMV. Remember too that ESS uses powerful processors in order to do the erasure coding and hence has performance to do checksums too. Traditionally ordinary NSD servers are merely ?routers? and as such are often using low spec cpus which may not be fast enough for the extra load? Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales + 44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 11 May 2018, at 19:34, Oesterlin, Robert wrote: > > Ah be careful! looking at the man page for mmchconfig ?nsdCksumTraditional: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adm_mmchconfig.htm > > Enabling this feature can result in significant I/O performance degradation and a considerable increase in CPU usage. > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > > > From: on behalf of "Buterbaugh, Kevin L" > Reply-To: gpfsug main discussion list > Date: Friday, May 11, 2018 at 1:29 PM > To: gpfsug main discussion list > Subject: [EXTERNAL] Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out > > On the other hand, we are very excited by this (from the README): > File systems: Traditional NSD nodes and servers can use checksums > > NSD clients and servers that are configured with IBM Spectrum Scale can use checksums > > to verify data integrity and detect network corruption of file data that the client > > reads from or writes to the NSD server. 
For more information, see the > > nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. > > Finally! Thanks, IBM (seriously)? > > Kevin > > > On May 11, 2018, at 12:11 PM, Sanchez, Paul wrote: > > I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. > > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Bryan Banister > Sent: Friday, May 11, 2018 12:25 PM > To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out > > It?s on fix central, https://www-945.ibm.com/support/fixcentral > > Cheers, > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Fri May 11 20:35:40 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Fri, 11 May 2018 15:35:40 -0400 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum In-Reply-To: References: Message-ID: <112843.1526067340@turing-police.cc.vt.edu> On Fri, 11 May 2018 19:02:30 -0000, "Daniel Kidger" said: > Remember too that ESS uses powerful processors in order to do the erasure > coding and hence has performance to do checksums too. Traditionally ordinary > NSD servers are merely ???routers??? and as such are often using low spec cpus > which may not be fast enough for the extra load? More to the point - if you're at all clever, you can do the erasure encoding in such a way that a perfectly usable checksum just drops out the bottom free of charge, so no additional performance is needed to checksum stuff.... -------------- next part -------------- A non-text attachment was scrubbed... 
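For anyone who wants to try the new checksum support discussed above before rolling it out widely, a minimal sketch might look like the following. The attribute names come from the 5.0.1 documentation quoted earlier in the thread; the "testclients" node class, the use of -i, and the idea of scoping the change to a handful of NSD clients first are only assumptions for illustration, so check the mmchconfig man page for whether a daemon restart is required at your level.

# enable checksums on traditional NSD client/server traffic, limited to a test node class
mmchconfig nsdCksumTraditional=yes -i -N testclients
# optionally dump buffers when a checksum error is detected, to help chase network corruption
mmchconfig nsdDumpBuffersOnCksumError=yes -i -N testclients
# confirm what is set
mmlsconfig nsdCksumTraditional
mmlsconfig nsdDumpBuffersOnCksumError

Given the warning in the man page about I/O degradation and CPU usage, measuring a known workload on that node class with and without the setting seems a sensible first step.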
Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From jonathan at buzzard.me.uk Fri May 11 21:38:03 2018 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 11 May 2018 21:38:03 +0100 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> Message-ID: <7a6eeed3-134f-620a-b49b-ed79ade90733@buzzard.me.uk> On 11/05/18 18:11, Sanchez, Paul wrote: > I?d normally be excited by this, since we do aggressively apply GPFS > upgrades.? But it?s worth noting that no released version of Scale works > with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re > also in the habit of aggressively upgrading RedHat then you?re going to > have to wait for 5.0.1-1 before you can resume that practice. > You can upgrade to RHEL 7.5 and then just boot the last of the 7.4 kernels. I have done that in the past with early RHEL 5. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From goncalves.erika at gene.com Fri May 11 22:55:42 2018 From: goncalves.erika at gene.com (Erika Goncalves) Date: Fri, 11 May 2018 14:55:42 -0700 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: I'm new on the Forum (hello to everyone!!) Quick question related to Chetan mail, How is the procedure when you have more than one domain? Make sure NFSv4 ID Mapping value matches on client and server. On server side (i.e. CES nodes); you can set as below: $ mmnfs config change IDMAPD_DOMAIN=test.com On client side (e.g. RHEL NFS client); one can set it using Domain attribute in /etc/idmapd.conf file. $ egrep ^Domain /etc/idmapd.conf Domain = test.com [root at rh73node2 2018_05_07-13:31:11 ~]$ $ service nfs-idmap restart It is possible to configure the IDMAPD_DOMAIN to support more than one? Thanks! -- *E**rika Goncalves* SSF Agile Operations Global IT Infrastructure & Solutions (GIS) Genentech - A member of the Roche Group +1 (650) 529 5458 goncalves.erika at gene.com *Confidentiality Note: *This message is intended only for the use of the named recipient(s) and may contain confidential and/or proprietary information. If you are not the intended recipient, please contact the sender and delete this message. Any unauthorized use of the information contained in this message is prohibited. On Mon, May 7, 2018 at 1:08 AM, Chetan R Kulkarni wrote: > Make sure NFSv4 ID Mapping value matches on client and server. > > On server side (i.e. CES nodes); you can set as below: > > $ mmnfs config change IDMAPD_DOMAIN=test.com > > On client side (e.g. RHEL NFS client); one can set it using Domain > attribute in /etc/idmapd.conf file. > > $ egrep ^Domain /etc/idmapd.conf > Domain = test.com > [root at rh73node2 2018_05_07-13:31:11 ~]$ > $ service nfs-idmap restart > > Please refer following link for the details: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0. > 0/com.ibm.spectrum.scale.v5r00.doc/b1ladm_authconsidfornfsv4access.htm > > Thanks, > Chetan. > > [image: Inactive hide details for "Yaron Daniel" ---05/07/2018 10:46:32 > AM---Hi If you want to use NFSv3 , define only NFSv3 on the exp]"Yaron > Daniel" ---05/07/2018 10:46:32 AM---Hi If you want to use NFSv3 , define > only NFSv3 on the export. 
> > From: "Yaron Daniel" > To: gpfsug main discussion list > Date: 05/07/2018 10:46 AM > > Subject: Re: [gpfsug-discuss] CES NFS export > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hi > > If you want to use NFSv3 , define only NFSv3 on the export. > In case you work with NFSv4 - you should have "DOMAIN\user" all the way - > so this way you will not get any user mismatch errors, and see permissions > like nobody. > > > > Regards > ------------------------------ > > *Yaron Daniel* 94 Em Ha'Moshavot Rd > *Storage Architect* Petach Tiqva, 49527 > *IBM Global Markets, Systems HW Sales* Israel > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > *IBM Israel* > > [image: IBM Storage Strategy and Solutions v1][image: IBM Storage > Management and Data Protection v1] [image: Related image] > > > > From: Jagga Soorma > To: gpfsug-discuss at spectrumscale.org > Date: 05/07/2018 06:05 AM > Subject: Re: [gpfsug-discuss] CES NFS export > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Looks like this is due to nfs v4 and idmapd domain not being > configured correctly. I am going to test further and reach out if > more assistance is needed. > > Thanks! > > On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > > Hi Guys, > > > > We are new to gpfs and have a few client that will be mounting gpfs > > via nfs. We have configured the exports but all user/group > > permissions are showing up as nobody. The gateway/protocol nodes can > > query the uid/gid's via centrify without any issues as well as the > > clients and the perms look good on a client that natively accesses the > > gpfs filesystem. Is there some specific config that we might be > > missing? 
> > > > -- > > # mmnfs export list --nfsdefs /gpfs/datafs1 > > Path Delegations Clients > > Access_Type Protocols Transports Squash Anonymous_uid > > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > > NFS_Commit > > ------------------------------------------------------------ > ------------------------------------------------------------ > ------------------------------------------------------------ > ----------------------- > > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > > ROOT_SQUASH -2 -2 SYS FALSE NONE > > TRUE FALSE > > /gpfs/datafs1 NONE {nodenames} RW 3,4 > > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > > NONE TRUE FALSE > > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > > ROOT_SQUASH -2 -2 SYS FALSE > > NONE TRUE FALSE > > -- > > > > On the nfs clients I see this though: > > > > -- > > # ls -l > > total 0 > > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > > -- > > > > Here is our mmnfs config: > > > > -- > > # mmnfs config list > > > > NFS Ganesha Configuration: > > ========================== > > NFS_PROTOCOLS: 3,4 > > NFS_PORT: 2049 > > MNT_PORT: 0 > > NLM_PORT: 0 > > RQUOTA_PORT: 0 > > NB_WORKER: 256 > > LEASE_LIFETIME: 60 > > DOMAINNAME: VIRTUAL1.COM > > DELEGATIONS: Disabled > > ========================== > > > > STATD Configuration > > ========================== > > STATD_PORT: 0 > > ========================== > > > > CacheInode Configuration > > ========================== > > ENTRIES_HWMARK: 1500000 > > ========================== > > > > Export Defaults > > ========================== > > ACCESS_TYPE: NONE > > PROTOCOLS: 3,4 > > TRANSPORTS: TCP > > ANONYMOUS_UID: -2 > > ANONYMOUS_GID: -2 > > SECTYPE: SYS > > PRIVILEGEDPORT: FALSE > > MANAGE_GIDS: TRUE > > SQUASH: ROOT_SQUASH > > NFS_COMMIT: FALSE > > ========================== > > > > Log Configuration > > ========================== > > LOG_LEVEL: EVENT > > ========================== > > > > Idmapd Configuration > > ========================== > > LOCAL-REALMS: LOCALDOMAIN > > DOMAIN: LOCALDOMAIN > > ========================== > > -- > > > > Thanks! > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss* > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug. > org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_ > iaSHvJObTbx-siA1ZOg&r=uic-29lyJ5TCiTRi0FyznYhKJx5I7Vzu80WyYuZ4_iM&m= > 3k9qWcL7UfySpNVW2J8S1XsIekUHTHBBYQhN7cPVg3Q&s=844KFrfpsN6nT- > DKV6HdfS8EEejdwHuQxbNR8cX2cyc&e= > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15633834.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15884206.jpg Type: image/jpeg Size: 11294 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
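On the multi-domain part of the question: I am not aware of the CES NFS stack accepting more than one value for IDMAPD_DOMAIN, so the usual first step is to confirm that the single domain string matches on every CES node and client, and that both sides resolve the same account names to the same IDs. A rough checklist follows; the user name and client mount point are placeholders.

# on the CES protocol nodes
mmnfs config list | grep -i domain
# on the NFS client
grep -i '^Domain' /etc/idmapd.conf
# both sides should resolve the same account to the same uid/gid (Centrify on the CES nodes)
getent passwd someuser
# on the client, clear the NFSv4 idmap cache after changing the domain (nfsidmap is part of nfs-utils)
nfsidmap -c
# then remount and re-check ownership on the export
ls -l /mnt/datafs1

If names still come back as nobody once the domains match, comparing the getent output between the protocol nodes and the clients is the next place to look.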
Name: 15750750.gif Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15967392.gif Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15858665.gif Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15657152.gif Type: image/gif Size: 4376 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Mon May 14 11:09:10 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Mon, 14 May 2018 10:09:10 +0000 Subject: [gpfsug-discuss] SMB quotas query Message-ID: Hi all, I want to run this past the group to see if I?m going mad or not. We do have an open PMR about the issue which is currently being escalated. We have 400 independent filesets all linked to a path in the filesystem. The root of that path is then exported via SMB, e.g.: Fileset1: /gpfs/rootsmb/fileset1 Fileset2: /gpfs/rootsmb/fileset2 The CES export is /gpfs/rootsmb and the name of the share is (for example) ?share?. All our filesets have block quotas applied to them with the hard and soft limit being the same. Customers then map drives to these filesets using the following path: \\ces-cluster\share\fileset1 \\ces-cluster\share\fileset2 ?fileset400 Some customers have one drive mapping only, others have two or more. For the customers that map two or more drives, the quota that Windows reports is identical for each fileset, and is usually for the last fileset that gets mapped. I do not believe this has always been the case: our customers have only recently (since the New Year at least) started complaining in the three+ years we?ve been running GPFS. In my test cluster I?ve tried rolling back to 4.2.3-2 which we were running last Summer and I can easily reproduce the problem. So a couple of questions: 1. Am I right to think that since GPFS is actually exposing the quota of a fileset over SMB then each fileset mapped as a drive in the manner above *should* each report the correct quota? 2. Does anyone else see the same behaviour? 3. There is suspicion this could be recent changes from a Microsoft Update and I?m not ruling that out just yet. Ok so that?s not a question ? I am worried that IBM may tell us we?re doing it wrong (humm) and to create individual exports for each fileset but this will quickly become tiresome! Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From z.han at imperial.ac.uk Mon May 14 11:33:07 2018 From: z.han at imperial.ac.uk (z.han at imperial.ac.uk) Date: Mon, 14 May 2018 11:33:07 +0100 (BST) Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Message-ID: Dear All, Any one has the same problem? /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? 
-ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ‘printInode’: /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ‘struct inode’ has no member named ‘i_wb_list’ _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); ^ ......
From jonathan.buzzard at strath.ac.uk Mon May 14 11:44:51 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 14 May 2018 11:44:51 +0100 Subject: Re: [gpfsug-discuss] SMB quotas query In-Reply-To: References: Message-ID: <1526294691.17680.18.camel@strath.ac.uk> On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > > I am worried that IBM may tell us we're doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the fileset quota. I have the ~100 lines of C that you need. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const struct smb_filename, the comment on the commit being "instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters." I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
From spectrumscale at kiranghag.com Mon May 14 11:56:37 2018 From: spectrumscale at kiranghag.com (KG) Date: Mon, 14 May 2018 16:26:37 +0530 Subject: [gpfsug-discuss] pool-metadata_high_error Message-ID: Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on the filesystem is as below: Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 The inode utilisation on one fileset (it is the only one being used) is below: Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 Is this because the difference between allocated and max inodes is very small? Customer tried reducing allocated inodes on the fileset (to a value between the used and max inodes) and the GUI complains that it is out of range. -------------- next part -------------- An HTML attachment was scrubbed...
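One thing to check alongside the inode numbers is the metadata NSD usage itself, since this error is raised on free blocks in the metadata disks rather than on the inode limits directly (the replies below go into this). A sketch of the kind of checks involved, with gpfs0 as a placeholder file system name:

# block usage of the metadata NSDs
mmdf gpfs0 -m
# inode summary for the file system (used, free, allocated, maximum)
mmdf gpfs0 -F

If the metadata NSDs really are close to full, inodes that are allocated but unused are one of the things consuming that space, which is where the caution below about not over-preallocating comes from.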
URL: From rohwedder at de.ibm.com Mon May 14 12:18:55 2018 From: rohwedder at de.ibm.com (Markus Rohwedder) Date: Mon, 14 May 2018 13:18:55 +0200 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: Hello, the pool metadata high error reports issues with the free blocks in the metadataOnly and/or dataAndMetadata NSDs in the system pool. mmlspool and subsequently the GPFSPool sensor is the source of the information that is used be the threshold that reports this error. So please compare with mmlspool and mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1 Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " Mit freundlichen Gr??en / Kind regards Dr. Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1A908817.gif Type: image/gif Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stockf at us.ibm.com Mon May 14 12:28:58 2018 From: stockf at us.ibm.com (Frederick Stock) Date: Mon, 14 May 2018 07:28:58 -0400 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: The difference in your inode information is presumably because the fileset you reference is an independent fileset and it has its own inode space distinct from the indoe space used for the "root" fileset (file system). 
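A quick way to see that relationship from the command line is the long fileset listing, which shows the inode space each fileset belongs to along with its own allocated and maximum inodes (gpfs0 is a placeholder name):

# independent filesets (created with --inode-space=new) carry their own InodeSpace id
# and their own MaxInodes/AllocInodes values, separate from the root inode space
mmlsfileset gpfs0 -L

Comparing that listing with the file-system-level figures usually makes it clear which of the GUI's numbers belongs to which inode space.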
Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com From: "Markus Rohwedder" To: gpfsug main discussion list Date: 05/14/2018 07:19 AM Subject: Re: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, the pool metadata high error reports issues with the free blocks in the metadataOnly and/or dataAndMetadata NSDs in the system pool. mmlspool and subsequently the GPFSPool sensor is the source of the information that is used be the threshold that reports this error. So please compare with mmlspool and mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1 Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " Mit freundlichen Gr??en / Kind regards Dr. Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany KG ---14.05.2018 12:57:33---Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From arc at b4restore.com Mon May 14 12:10:18 2018 From: arc at b4restore.com (Andi Rhod Christiansen) Date: Mon, 14 May 2018 11:10:18 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: References: Message-ID: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Hi, Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 and latest support is 7.4. You have to revert back to 3.10.0-693 ? I just had the same issue Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. Best regards Andi R. Christiansen -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 12:33 Til: gpfsug main discussion list Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Dear All, Any one has the same problem? /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); ^ ...... From spectrumscale at kiranghag.com Mon May 14 12:35:47 2018 From: spectrumscale at kiranghag.com (KG) Date: Mon, 14 May 2018 17:05:47 +0530 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder wrote: > Once inodes are allocated I am not aware of a method to de-allocate them. > This is what the Knowledge Center says: > > *"Inodes are allocated when they are used. When a file is deleted, the > inode is reused, but inodes are never deallocated. When setting the maximum > number of inodes in a file system, there is the option to preallocate > inodes. However, in most cases there is no need to preallocate inodes > because, by default, inodes are allocated in sets as needed. 
If you do > decide to preallocate inodes, be careful not to preallocate more inodes > than will be used; otherwise, the allocated inodes will unnecessarily > consume metadata space that cannot be reclaimed. "* > > > I believe the Maximum number of inodes cannot be reduced but allocated number of inodes can be. Not sure why the GUI isnt allowing to reduce it. ? > > From: KG > To: gpfsug main discussion list > Date: 14.05.2018 12:57 > Subject: [gpfsug-discuss] pool-metadata_high_error > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hi Folks > > IHAC who is reporting pool-metadata_high_error on GUI. > > The inode utilisation on filesystem is as below > Used inodes - 92922895 > free inodes - 1684812529 > allocated - 1777735424 > max inodes - 1911363520 > > the inode utilization on one fileset (it is only one being used) is below > Used inodes - 93252664 > allocated - 1776624128 > max inodes 1876624064 > > is this because the difference in allocated and max inodes is very less? > > Customer tried reducing allocated inodes on fileset (between max and used > inode) and GUI complains that it is out of range. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 26124 bytes Desc: not available URL: From rohwedder at de.ibm.com Mon May 14 12:50:49 2018 From: rohwedder at de.ibm.com (Markus Rohwedder) Date: Mon, 14 May 2018 13:50:49 +0200 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: Hi, The GUI behavior is correct. You can reduce the maximum number of inodes of an inode space, but not below the allocated inodes level. See below: # Setting inode levels to 300000 max/ 200000 preallocated [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:200000 Set maxInodes for inode space 0 to 300000 Fileset root changed. # The actually allocated values may be sloightly different: [root at cache-11 ~]# mmlsfileset gpfs0 -L Filesets in file system 'gpfs0': Name Id RootInode ParentId Created InodeSpace MaxInodes AllocInodes Comment root 0 3 -- Mon Feb 26 11:34:06 2018 0 300000 200032 root fileset # Lowering the allocated values is not allowed [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:150000 The number of inodes to preallocate cannot be lower than the 200032 inodes already allocated. Input parameter value for inode limit out of range. mmchfileset: Command failed. Examine previous error messages to determine cause. # However, you can change the max inodes up to the allocated value [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 200032:200032 Set maxInodes for inode space 0 to 200032 Fileset root changed. [root at cache-11 ~]# mmlsfileset gpfs0 -L Filesets in file system 'gpfs0': Name Id RootInode ParentId Created InodeSpace MaxInodes AllocInodes Comment root 0 3 -- Mon Feb 26 11:34:06 2018 0 200032 200032 root fileset Mit freundlichen Gr??en / Kind regards Dr. 
Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany From: KG To: gpfsug main discussion list Date: 14.05.2018 13:37 Subject: Re: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder wrote: Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " I believe the Maximum number of inodes cannot be reduced but allocated number of inodes can be. Not sure why the GUI isnt allowing to reduce it. ? From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 18426749.gif Type: image/gif Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 18361734.gif Type: image/gif Size: 26124 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Mon May 14 12:54:17 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Mon, 14 May 2018 11:54:17 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526294691.17680.18.camel@strath.ac.uk> References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: Thanks Jonathan. What I failed to mention in my OP was that MacOS clients DO report the correct size of each mounted folder. 
Not sure how that changes anything except to reinforce the idea that it's Windows at fault. Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 14 May 2018 11:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From z.han at imperial.ac.uk Mon May 14 12:59:25 2018 From: z.han at imperial.ac.uk (z.han at imperial.ac.uk) Date: Mon, 14 May 2018 12:59:25 +0100 (BST) Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Message-ID: Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? 
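For nodes that have to keep mounting GPFS in the meantime, the practical compromise mentioned earlier in the thread is to take the 7.5 updates for everything except the kernel and keep booting the last 7.4 kernel. A rough sketch, with the exact kernel version string and the versionlock plugin as assumptions to adapt to what is actually installed:

# see which kernels are installed
rpm -q kernel
# keep the last 7.4 kernel as the default boot entry
grubby --set-default /boot/vmlinuz-3.10.0-693.21.1.el7.x86_64
grubby --default-kernel
# optionally stop yum pulling a newer kernel until the fixed PTF ships
yum install yum-plugin-versionlock
yum versionlock add 'kernel-3.10.0-693*' 'kernel-devel-3.10.0-693*' 'kernel-headers-3.10.0-693*'

That obviously leaves the kernel CVEs above unpatched on those nodes, so it is only a stopgap until the PTF supporting 3.10.0-862 is out.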
> > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From arc at b4restore.com Mon May 14 13:13:21 2018 From: arc at b4restore.com (Andi Rhod Christiansen) Date: Mon, 14 May 2018 12:13:21 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Message-ID: <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" Best regards. -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 13:59 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... 
On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af > z.han at imperial.ac.uk > Sendt: 14. maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From jonathan.buzzard at strath.ac.uk Mon May 14 13:19:43 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 14 May 2018 13:19:43 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: <1526300383.17680.20.camel@strath.ac.uk> On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. 
-- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From knop at us.ibm.com Mon May 14 14:30:41 2018 From: knop at us.ibm.com (Felipe Knop) Date: Mon, 14 May 2018 09:30:41 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: All, Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Andi Rhod Christiansen To: gpfsug main discussion list Date: 05/14/2018 08:15 AM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" Best regards. -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 13:59 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af > z.han at imperial.ac.uk > Sendt: 14. 
maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From bbanister at jumptrading.com Mon May 14 21:29:02 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 14 May 2018 20:29:02 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas Message-ID: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Hi all, I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? Can't find anything in man pages, thanks! -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Mon May 14 22:26:44 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Tue, 15 May 2018 00:26:44 +0300 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526300383.17680.20.camel@strath.ac.uk> References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi What is the output of mmlsfs - does you have --filesetdf enabled ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jonathan Buzzard To: gpfsug main discussion list Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: image/jpeg
Size: 11294 bytes
Desc: not available
URL:

From peserocka at gmail.com  Mon May 14 22:51:36 2018
From: peserocka at gmail.com (Peter Serocka)
Date: Mon, 14 May 2018 23:51:36 +0200
Subject: [gpfsug-discuss] How to clear explicitly set quotas
In-Reply-To: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com>
References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com>
Message-ID: <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com>

Check out the -d option for the mmedquota command:

"Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command."

-- Peter

> On 2018 May 14 Mon, at 22:29, Bryan Banister wrote:
>
> Hi all,
>
> I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota.
>
> Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined?
>
> Can't find anything in man pages, thanks!
> -Bryan
>
> Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From kywang at us.ibm.com  Mon May 14 23:12:48 2018
From: kywang at us.ibm.com (Kuei-Yu Wang-Knop)
Date: Mon, 14 May 2018 18:12:48 -0400
Subject: [gpfsug-discuss] How to clear explicitly set quotas
In-Reply-To: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com>
References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com>
Message-ID:

Try disabling and re-enabling default quotas with the -d option for that fileset.

mmdefquotaon command

Activates default quota limit usage.

Synopsis

mmdefquotaon [-u] [-g] [-j] [-v] [-d] {Device [Device ... ] | -a}

or

mmdefquotaon [-u] [-g] [-v] [-d] {Device:Fileset ... | -a}

...

-d
Assigns default quota limits to existing users, groups, or filesets when the mmdefedquota command is issued. When --perfileset-quota is not in effect for the file system, this option will only affect existing users, groups, or filesets with no established quota limits. When --perfileset-quota is in effect for the file system, this option will affect existing users, groups, or filesets with no established quota limits, and it will also change existing users or groups that refer to default quotas at the file system level into users or groups that refer to fileset-level default quota. For more information about default quota priorities, see the following IBM Spectrum Scale: Administration and Programming Reference topic: Default quotas.
If this option is not chosen, existing quota entries remain in effect and are not governed by the default quota rules. Kuei-Yu Wang-Knop IBM Scalable I/O development From: Bryan Banister To: "gpfsug main discussion list (gpfsug-discuss at spectrumscale.org)" Date: 05/14/2018 04:29 PM Subject: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? Can?t find anything in man pages, thanks! -Bryan Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From christof.schmitt at us.ibm.com Mon May 14 23:17:45 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Mon, 14 May 2018 22:17:45 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: , <1526294691.17680.18.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Tue May 15 06:59:38 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Tue, 15 May 2018 05:59:38 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> Message-ID: <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The errors are:

Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos
Traceback (most recent call last):
  File "/bin/yum", line 29, in <module>
    yummain.user_main(sys.argv[1:], exit_code=True)
  File "/usr/share/yum-cli/yummain.py", line 370, in user_main
    errcode = main(args)
  File "/usr/share/yum-cli/yummain.py", line 165, in main
    base.getOptionsConfig(args)
  File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig
    self.conf
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in <lambda>
    conf = property(fget=lambda self: self._getConfig(),
  File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig
    self.plugins.run('init')
  File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run
    func(conduitcls(self, self.base, conf, **kwargs))
  File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook
    svrChannels = rhnChannel.getChannelDetails(timeout=timeout)
  File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails
    sourceChannels = getChannels(timeout=timeout)
  File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels
    up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId())
  File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__
    return rpcServer.doCall(method, *args, **kwargs)
  File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall
    ret = method(*args, **kwargs)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1
    ret = self._request(methodname, params)
  File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request
    self._handler, request, verbose=self._verbose)
  File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request
    headers, fd = req.send_http(host, handler)
  File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http
    self._connection.connect()
  File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect
    self.sock.init_ssl()
  File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl
    self._ctx.load_verify_locations(f)
  File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations
    raise TypeError("cafile must be None or a byte string")
TypeError: cafile must be None or a byte string

My question now: why does IBM patch RHEL Python libraries here? This is heading straight for update nirvana. The dependencies look like this:

rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch
error: Failed dependencies:
        pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch
        pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch
        pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch

It's PMR time.

Regards Renar

Renar Grunenberg
Abteilung Informatik - Betrieb

HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: Renar.Grunenberg at huk-coburg.de
Internet: www.huk.de
________________________________
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Tue May 15 08:10:32 2018 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Tue, 15 May 2018 09:10:32 +0200 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Message-ID: An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Tue May 15 09:10:21 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Tue, 15 May 2018 08:10:21 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi Yaron It's currently set to no. Thanks Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Yaron Daniel Sent: 14 May 2018 22:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Hi What is the output of mmlsfs - does you have --filesetdfenabled ? Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:image001.gif at 01D3EC2C.8ACE5310] Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel [IBM Storage Strategy and Solutions v1][IBM Storage Management and Data Protection v1][cid:image004.gif at 01D3EC2C.8ACE5310][cid:image005.gif at 01D3EC2C.8ACE5310] [Related image] From: Jonathan Buzzard > To: gpfsug main discussion list > Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. 
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 1851 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 4376 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 5093 bytes Desc: image003.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.gif Type: image/gif Size: 4746 bytes Desc: image004.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.gif Type: image/gif Size: 4557 bytes Desc: image005.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 11294 bytes Desc: image006.jpg URL: From YARD at il.ibm.com Tue May 15 11:10:45 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Tue, 15 May 2018 13:10:45 +0300 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi So - u want to get quota report per fileset quota - right ? We use this param when we want to monitor the NFS exports with df , i think this should also affect the SMB filesets. Can u try to enable it and see if it works ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: "Sobey, Richard A" To: gpfsug main discussion list Date: 05/15/2018 11:11 AM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Yaron It?s currently set to no. Thanks Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Yaron Daniel Sent: 14 May 2018 22:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Hi What is the output of mmlsfs - does you have --filesetdfenabled ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jonathan Buzzard To: gpfsug main discussion list Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. 
If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Tue May 15 11:23:49 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 15 May 2018 11:23:49 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: <1526379829.17680.27.camel@strath.ac.uk> On Tue, 2018-05-15 at 13:10 +0300, Yaron Daniel wrote: > Hi > > So - u want to get quota report per fileset quota - right ? > We use this param when we want to monitor the NFS exports with df , i > think this should also affect the SMB filesets. > > Can u try to enable it and see if it works ? > It is irrelevant to Samba, this is or should be handled in vfs_gpfs as Christof said earlier. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. 
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Tue May 15 11:28:00 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 15 May 2018 11:28:00 +0100 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: <1526380080.17680.29.camel@strath.ac.uk> On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote: > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are > needed in Scale to support this kernel level, upgrading to one of > those upcoming PTFs will be required in order to run with that > kernel. > One wonders what the mmfs26/mmfslinux does that you can't achieve with fuse these days? Sure I understand back in the day fuse didn't exist and it could be a significant rewrite of code to use fuse instead. On the plus side though it would make all these sorts of security issues, can't upgrade your distro etc. disappear. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From valdis.kletnieks at vt.edu Tue May 15 13:51:07 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Tue, 15 May 2018 08:51:07 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <1526380080.17680.29.camel@strath.ac.uk> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <1526380080.17680.29.camel@strath.ac.uk> Message-ID: <201401.1526388667@turing-police.cc.vt.edu> On Tue, 15 May 2018 11:28:00 +0100, Jonathan Buzzard said: > One wonders what the mmfs26/mmfslinux does that you can't achieve with > fuse these days? Handling each disk I/O request without several transitions to/from userspace comes to mind... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From ulmer at ulmer.org Tue May 15 16:09:01 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 10:09:01 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <1526380080.17680.29.camel@strath.ac.uk> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <1526380080.17680.29.camel@strath.ac.uk> Message-ID: <26DF1F4F-BC66-40C8-89F1-3A64E94CE5B4@ulmer.org> > On May 15, 2018, at 5:28 AM, Jonathan Buzzard wrote: > > On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote: >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is >> planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are >> needed in Scale to support this kernel level, upgrading to one of >> those upcoming PTFs will be required in order to run with that >> kernel. >> > > One wonders what the mmfs26/mmfslinux does that you can't achieve with > fuse these days? Sure I understand back in the day fuse didn't exist > and it could be a significant rewrite of code to use fuse instead. 
On > the plus side though it would make all these sorts of security issues, > can't upgrade your distro etc. disappear. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > More lines of code. More code is bad. :) Liberty, -- Stephen From bbanister at jumptrading.com Tue May 15 16:35:51 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 15:35:51 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> Message-ID: <723293fee7214938ae20cdfdbaf99149@jumptrading.com> That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 15 16:59:56 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 15:59:56 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <723293fee7214938ae20cdfdbaf99149@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> Message-ID: <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! 
# mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Tue May 15 16:13:15 2018 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Tue, 15 May 2018 15:13:15 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> I know these dates can move, but any vague idea of a timeframe target for release (this quarter, next quarter, etc.)? Thanks! -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' > On May 14, 2018, at 9:30 AM, Felipe Knop wrote: > > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that > > From: Andi Rhod Christiansen > To: gpfsug main discussion list > Date: 05/14/2018 08:15 AM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > You are welcome. 
> > I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. > > they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" > > Best regards. > > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 13:59 > Til: gpfsug main discussion list > Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh > > > https://access.redhat.com/errata/RHSA-2018:1318 > > Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) > > Kernel: error in exception handling leads to DoS (CVE-2018-8897) > Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) > > kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) > > ... > > > On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > > Date: Mon, 14 May 2018 11:10:18 +0000 > > From: Andi Rhod Christiansen > > Reply-To: gpfsug main discussion list > > > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Hi, > > > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > > > I just had the same issue > > > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > > > > Best regards > > Andi R. Christiansen > > > > -----Oprindelig meddelelse----- > > Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af > > z.han at imperial.ac.uk > > Sendt: 14. maj 2018 12:33 > > Til: gpfsug main discussion list > > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Dear All, > > > > Any one has the same problem? > > > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > > exit 1;\ > > fi > > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? 
has no member named ?i_wb_list? > > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > > ^ ...... > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: Message signed with OpenPGP URL: From bbanister at jumptrading.com Tue May 15 19:04:40 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 18:04:40 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Message-ID: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> I am now trying to get our system automation to play with the new Spectrum Scale Protocols 5.0.1-0 release and have found that the nfs-ganesha.service can no longer start: # systemctl status nfs-ganesha ? nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2018-05-15 12:43:23 CDT; 8s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 8398 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=203/EXEC) May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[8398]: Failed at step EXEC spawning /usr/bin/ganesha.nfsd: No such file or directory May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service: control process exited, code=exited status=203 May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Failed to start NFS-Ganesha file server. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Unit nfs-ganesha.service entered failed state. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service failed. 
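The status=203/EXEC together with the "No such file or directory" line points at the ExecStart path itself. Two quick cross-checks of what the unit tries to spawn versus what the installed package actually ships (package name taken from the rpm output further down; it may differ on other releases):

# systemctl cat nfs-ganesha | grep ExecStart   # path the unit will exec
# rpm -ql nfs-ganesha | grep nfsd              # binaries the package really installs

If the two disagree, a systemd drop-in is one possible workaround that avoids editing the packaged unit file -- purely a sketch, not an IBM-provided fix, and it assumes the shipped binary is /usr/bin/gpfs.ganesha.nfsd as shown below:

# mkdir -p /etc/systemd/system/nfs-ganesha.service.d
# printf '[Service]\nExecStart=\nExecStart=/usr/bin/gpfs.ganesha.nfsd $OPTIONS\n' > /etc/systemd/system/nfs-ganesha.service.d/execstart.conf
# systemctl daemon-reload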
Sure enough, it?s not there anymore: # ls /usr/bin/*ganesha* /usr/bin/ganesha_conf /usr/bin/ganesha_mgr /usr/bin/ganesha_stats /usr/bin/gpfs.ganesha.nfsd /usr/bin/sm_notify.ganesha So I wondered what does provide it: # yum whatprovides /usr/bin/ganesha.nfsd Loaded plugins: etckeeper, priorities 2490 packages excluded due to repository priority protections [snip] nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 : NFS-Ganesha is a NFS Server running in user space Repo : @rhel7-universal-linux-production Matched from: Filename : /usr/bin/ganesha.nfsd Confirmed again just for sanity sake: # rpm -ql nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" /usr/bin/ganesha.nfsd But it?s not in the latest release: # rpm -ql nfs-ganesha-2.5.3-ibm020.00.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" # I also looked in every RPM package that was provided in the Spectrum Scale 5.0.1-0 download. So should it be provided? Or should the service really try to start `/usr/bin/gpfs.ganesha.nfsd`?? Or should there be a symlink between the two??? Is this something the magical Spectrum Scale Install Toolkit would do under the covers???? Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 15 19:08:08 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 18:08:08 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> Message-ID: <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> BTW, I just tried the symlink option and it seems to work: # ln -s gpfs.ganesha.nfsd ganesha.nfsd # ls -ld ganesha.nfsd lrwxrwxrwx 1 root root 17 May 15 13:05 ganesha.nfsd -> gpfs.ganesha.nfsd # # systemctl restart nfs-ganesha.service # systemctl status nfs-ganesha.service ? 
nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: active (running) since Tue 2018-05-15 13:06:10 CDT; 5s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 62888 ExecStop=/bin/dbus-send --system --dest=org.ganesha.nfsd --type=method_call /org/ganesha/nfsd/admin org.ganesha.nfsd.admin.shutdown (code=exited, status=0/SUCCESS) Process: 63091 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS) Process: 63089 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=0/SUCCESS) Main PID: 63090 (ganesha.nfsd) Memory: 6.1M CGroup: /system.slice/nfs-ganesha.service ??63090 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT May 15 13:06:10 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 13:06:10 fpia-gpfs-testing-cnfs01 systemd[1]: Started NFS-Ganesha file server. [root at fpia-gpfs-testing-cnfs01 bin]# Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 1:05 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ I am now trying to get our system automation to play with the new Spectrum Scale Protocols 5.0.1-0 release and have found that the nfs-ganesha.service can no longer start: # systemctl status nfs-ganesha ? nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2018-05-15 12:43:23 CDT; 8s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 8398 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=203/EXEC) May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[8398]: Failed at step EXEC spawning /usr/bin/ganesha.nfsd: No such file or directory May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service: control process exited, code=exited status=203 May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Failed to start NFS-Ganesha file server. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Unit nfs-ganesha.service entered failed state. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service failed. Sure enough, it?s not there anymore: # ls /usr/bin/*ganesha* /usr/bin/ganesha_conf /usr/bin/ganesha_mgr /usr/bin/ganesha_stats /usr/bin/gpfs.ganesha.nfsd /usr/bin/sm_notify.ganesha So I wondered what does provide it: # yum whatprovides /usr/bin/ganesha.nfsd Loaded plugins: etckeeper, priorities 2490 packages excluded due to repository priority protections [snip] nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 : NFS-Ganesha is a NFS Server running in user space Repo : @rhel7-universal-linux-production Matched from: Filename : /usr/bin/ganesha.nfsd Confirmed again just for sanity sake: # rpm -ql nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" /usr/bin/ganesha.nfsd But it?s not in the latest release: # rpm -ql nfs-ganesha-2.5.3-ibm020.00.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" # I also looked in every RPM package that was provided in the Spectrum Scale 5.0.1-0 download. So should it be provided? 
Or should the service really try to start `/usr/bin/gpfs.ganesha.nfsd`?? Or should there be a symlink between the two??? Is this something the magical Spectrum Scale Install Toolkit would do under the covers???? Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue May 15 19:31:13 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 15 May 2018 19:31:13 +0100 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> Message-ID: <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From christof.schmitt at us.ibm.com Tue May 15 19:49:44 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 15 May 2018 18:49:44 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526379829.17680.27.camel@strath.ac.uk> References: <1526379829.17680.27.camel@strath.ac.uk>, <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... 
URL: From knop at us.ibm.com Tue May 15 20:02:53 2018 From: knop at us.ibm.com (Felipe Knop) Date: Tue, 15 May 2018 15:02:53 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: All, Validation of RHEL 7.5 on Scale is currently under way, and we are currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which will include the corresponding fix. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Ryan Novosielski To: gpfsug main discussion list Date: 05/15/2018 12:56 PM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org I know these dates can move, but any vague idea of a timeframe target for release (this quarter, next quarter, etc.)? Thanks! -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' > On May 14, 2018, at 9:30 AM, Felipe Knop wrote: > > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that > > From: Andi Rhod Christiansen > To: gpfsug main discussion list > Date: 05/14/2018 08:15 AM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > You are welcome. > > I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. > > they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" > > Best regards. > > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 13:59 > Til: gpfsug main discussion list > Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... 
sigh > > > https://access.redhat.com/errata/RHSA-2018:1318 > > Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) > > Kernel: error in exception handling leads to DoS (CVE-2018-8897) > Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) > > kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) > > ... > > > On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > > Date: Mon, 14 May 2018 11:10:18 +0000 > > From: Andi Rhod Christiansen > > Reply-To: gpfsug main discussion list > > > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Hi, > > > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > > > I just had the same issue > > > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > > > > Best regards > > Andi R. Christiansen > > > > -----Oprindelig meddelelse----- > > Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af > > z.han at imperial.ac.uk > > Sendt: 14. maj 2018 12:33 > > Til: gpfsug main discussion list > > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Dear All, > > > > Any one has the same problem? > > > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > > exit 1;\ > > fi > > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > > ^ ...... 
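For anyone hitting the same build failure: the compile error quoted above comes from the GPL portability layer referencing a struct inode member (i_wb_list) that no longer exists in the RHEL 7.5 (3.10.0-862) kernel, so 4.2.3.6 cannot build against it. A minimal sketch of the usual recovery on RHEL until the fixed PTF ships - boot back into the supported 3.10.0-693 kernel and rebuild the portability layer (package names are the stock RHEL ones; exact versions on a given system may differ):

# confirm which kernel is running and that kernel-devel/kernel-headers match it
uname -r
rpm -q kernel kernel-devel kernel-headers

# rebuild the GPFS portability layer against the running (supported) kernel
/usr/lpp/mmfs/bin/mmbuildgpl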
> > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stijn.deweirdt at ugent.be Tue May 15 20:25:31 2018 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Tue, 15 May 2018 21:25:31 +0200 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > To: gpfsug main discussion list > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. 
>> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen >> To: gpfsug main discussion list >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. >> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen >>> Reply-To: gpfsug main discussion list >>> >>> To: gpfsug main discussion list >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. maj 2018 12:33 >>> Til: gpfsug main discussion list >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? 
-ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From abeattie at au1.ibm.com Tue May 15 22:45:47 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 15 May 2018 21:45:47 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: , <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com><4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 15 23:00:48 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 18:00:48 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks Message-ID: Hello All, Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? I understand that i will not need a redundant SMB server configuration. I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. 
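For reference, the stand-alone Samba share being considered would look roughly like the sketch below - plain stock Samba options (unix extensions, follow symlinks, wide links), not anything managed by mmsmb, with an illustrative share name and path, and untested on top of a GPFS client mount:

# sketch of a share definition for a stand-alone Samba server on a GPFS client node
# (share name and path are made up for illustration)
cat >> /etc/samba/smb.conf <<'EOF'
[global]
   # stock Samba only honours wide links when unix extensions are disabled
   unix extensions = no

[projects]
   path = /gpfs/fs1/projects
   read only = no
   follow symlinks = yes
   # allow symlinks to resolve outside the share, e.g. onto the NFS file system
   wide links = yes
EOF

# check that Samba parses the resulting configuration
testparm -s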
Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Tue May 15 22:57:12 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Tue, 15 May 2018 21:57:12 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: All, I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? Discuss. Thanks! Kevin On May 15, 2018, at 4:45 PM, Andrew Beattie > wrote: this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux that they "just can't move off" Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: Stijn De Weirdt > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Date: Wed, May 16, 2018 5:35 AM so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > > To: gpfsug main discussion list > > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop > wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. >> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen > >> To: gpfsug main discussion list > >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. >> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen > >>> Reply-To: gpfsug main discussion list >>> > >>> To: gpfsug main discussion list > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> > P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. 
maj 2018 12:33 >>> Til: gpfsug main discussion list > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? -ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From leslie.james.elliott at gmail.com Tue May 15 23:18:45 2018 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Wed, 16 May 2018 08:18:45 +1000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: you might want to read the license details of gpfs before you try do this :) pretty sure you need a server license to re-export the files from a GPFS mount On 16 May 2018 at 08:00, wrote: > Hello All, > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on > GPFS client? Is it supported and does it lead to any issues? > I understand that i will not need a redundant SMB server configuration. > > I could use CES, but CES does not support follow-symlinks outside > respective SMB export. Follow-symlinks is a however a hard-requirement for > to follow links outside GPFS filesystems. > > Thanks, > Lohit > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Tue May 15 23:32:02 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 15 May 2018 22:32:02 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 15 23:46:18 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 15 May 2018 18:46:18 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com><4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: Kevin, that seems to be a good point. IF you have dedicated hardware to acting only as a storage and/or file server, THEN neither meltdown nor spectre should not be a worry. BECAUSE meltdown and spectre are just about an adversarial process spying on another process or kernel memory. IF we're not letting any potential adversary run her code on our file server, what's the exposure? NOW, let the security experts tell us where the flaw is in this argument... From: "Buterbaugh, Kevin L" To: gpfsug main discussion list Date: 05/15/2018 06:12 PM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org All, I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? Discuss. Thanks! 
Kevin On May 15, 2018, at 4:45 PM, Andrew Beattie wrote: this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux that they "just can't move off" Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: Stijn De Weirdt Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Date: Wed, May 16, 2018 5:35 AM so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > To: gpfsug main discussion list > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. >> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen >> To: gpfsug main discussion list >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. 
>> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen >>> Reply-To: gpfsug main discussion list >>> >>> To: gpfsug main discussion list >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. maj 2018 12:33 >>> Til: gpfsug main discussion list >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? -ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... 
>>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 00:48:40 2018 From: valleru at cbio.mskcc.org (Lohit Valleru) Date: Tue, 15 May 2018 19:48:40 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: <7aef4353-058f-4741-9760-319bcd037213@Spark> Thanks Christof. The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. Now we are migrating most of the data to GPFS keeping the symlinks as they are. Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? Regards, Lohit On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. 
You can always open a RFE and ask that we support this option in a future release. > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > Regards, > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > ----- Original message ----- > > From: valleru at cbio.mskcc.org > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > To: gpfsug main discussion list > > Cc: > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > Date: Tue, May 15, 2018 3:04 PM > > > > Hello All, > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > I understand that i will not need a redundant SMB server configuration. > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > Thanks, > > Lohit > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.s.knister at nasa.gov Wed May 16 02:03:36 2018 From: aaron.s.knister at nasa.gov (Aaron Knister) Date: Tue, 15 May 2018 21:03:36 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: The one thing that comes to mind is if you're able to affect some unprivileged process on the NSD servers. Let's say there's a daemon that listens on a port but runs as an unprivileged user in which a vulnerability appears (lets say a 0-day remote code execution bug). One might be tempted to ignore that vulnerability for one reason or another but you couple that with something like meltdown/spectre and in *theory* you could do something like sniff ssh key material and get yourself on the box. In principle I agree with your argument but I've find that when one accepts and justifies a particular risk it can become easy to remember which vulnerability risks you've accepted and end up more exposed than one may realize. Still, the above scenario is low risk (but potentially very high impact), though :) -Aaron On 5/15/18 6:46 PM, Marc A Kaplan wrote: > Kevin, that seems to be a good point. > > IF you have dedicated hardware to acting only as a storage and/or file > server, THEN neither meltdown nor spectre should not be a worry. > > BECAUSE meltdown and spectre are just about an adversarial process > spying on another process or kernel memory. ?IF we're not letting any > potential adversary run her code on our file server, what's the exposure? > > NOW, let the security experts tell us where the flaw is in this argument... 
> > > > From: "Buterbaugh, Kevin L" > To: gpfsug main discussion list > Date: 05/15/2018 06:12 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working > ?withkernel ? ? ? ?3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------------------------------------------------ > > > > All, > > I have to kind of agree with Andrew ? it seems that there is a broad > range of takes on kernel upgrades ? everything from ?install the latest > kernel the day it comes out? to ?stick with this kernel, we know it works.? > > Related to that, let me throw out this question ? what about those who > haven?t upgraded their kernel in a while at least because they?re > concerned with the negative performance impacts of the meltdown / > spectre patches??? ?So let?s just say a customer has upgraded the > non-GPFS servers in their cluster, but they?ve left their NSD servers > unpatched (I?m talking about the kernel only here; all other updates are > applied) due to the aforementioned performance concerns ? as long as > they restrict access (i.e. who can log in) and use appropriate > host-based firewall rules, is their some risk that they should be aware of? > > Discuss. ?Thanks! > > Kevin > > On May 15, 2018, at 4:45 PM, Andrew Beattie <_abeattie at au1.ibm.com_ > > wrote: > > this thread is mildly amusing, given we regularly get customers asking > why we are dropping support for versions of linux > that they "just can't move off" > > > *Andrew Beattie* > *Software Defined Storage ?- IT Specialist* > *Phone: *614-2133-7927 > *E-mail: *_abeattie at au1.ibm.com_ > > > ----- Original message ----- > From: Stijn De Weirdt <_stijn.deweirdt at ugent.be_ > > > Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > To: _gpfsug-discuss at spectrumscale.org_ > > Cc: > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Date: Wed, May 16, 2018 5:35 AM > > so this means running out-of-date kernels for at least another month? oh > boy... > > i hope this is not some new trend in gpfs support. othwerwise all RHEL > based sites will have to start adding EUS as default cost to run gpfs > with basic security compliance. > > stijn > > > On 05/15/2018 09:02 PM, Felipe Knop wrote: > > All, > > > > Validation of RHEL 7.5 on Scale is currently under way, and we are > > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > > will include the corresponding fix. > > > > Regards, > > > > ? Felipe > > > > ---- > > Felipe Knop _knop at us.ibm.com_ > > GPFS Development and Security > > IBM Systems > > IBM Building 008 > > 2455 South Rd, Poughkeepsie, NY 12601 > > (845) 433-9314 ?T/L 293-9314 > > > > > > > > > > > > From: Ryan Novosielski <_novosirj at rutgers.edu_ > > > > To: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > > Date: 05/15/2018 12:56 PM > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > > ? ? ? ? ? ? 3.10.0-862.2.3.el7 > > Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > > > > > > > > I know these dates can move, but any vague idea of a timeframe target for > > release (this quarter, next quarter, etc.)? > > > > Thanks! > > > > -- > > ____ > > || \\UTGERS, > > |---------------------------*O*--------------------------- > > ||_// the State ?| ? ? ? ? Ryan Novosielski - _novosirj at rutgers.edu_ > > > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS > Campus > > || ?\\ ? 
?of NJ ?| Office of Advanced Research Computing - MSB > > C630, Newark > > ? ? ?`' > > > >> On May 14, 2018, at 9:30 AM, Felipe Knop <_knop at us.ibm.com_ > > wrote: > >> > >> All, > >> > >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > > in Scale to support this kernel level, upgrading to one of those upcoming > > PTFs will be required in order to run with that kernel. > >> > >> Regards, > >> > >> Felipe > >> > >> ---- > >> Felipe Knop _knop at us.ibm.com_ > >> GPFS Development and Security > >> IBM Systems > >> IBM Building 008 > >> 2455 South Rd, Poughkeepsie, NY 12601 > >> (845) 433-9314 T/L 293-9314 > >> > >> > >> > >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > > welcome. I see your concern but as long as IBM has not released spectrum > > scale for 7.5 that > >> > >> From: ?Andi Rhod Christiansen <_arc at b4restore.com_ > > > >> To: ?gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >> Date: ?05/14/2018 08:15 AM > >> Subject: ?Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > >> > >> > >> > >> > >> You are welcome. > >> > >> I see your concern but as long as IBM has not released spectrum > scale for > > 7.5 that is their only solution, in regards to them caring about > security I > > would say yes they do care, but from their point of view either they tell > > the customer to upgrade as soon as red hat releases new versions and > > forcing the customer to be down until they have a new release or they > tell > > them to stay on supported level to a new release is ready. > >> > >> they should release a version supporting the new kernel soon, IBM > told me > > when I asked that they are "currently testing and have a support date > soon" > >> > >> Best regards. > >> > >> > >> -----Oprindelig meddelelse----- > >> Fra: _gpfsug-discuss-bounces at spectrumscale.org_ > > > <_gpfsug-discuss-bounces at spectrumscale.org_ > > P? vegne af > _z.han at imperial.ac.uk_ > >> Sendt: 14. maj 2018 13:59 > >> Til: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> > >> Thanks. Does IBM care about security, one would ask? In this case I'd > > choose to use the new kernel for my virtualization over gpfs ... sigh > >> > >> > >> _https://access.redhat.com/errata/RHSA-2018:1318_ > > >> > >> Kernel: KVM: error in exception handling leads to wrong debug stack > value > > (CVE-2018-1087) > >> > >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) > >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > > escalation (CVE-2017-16939) > >> > >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > > netfilter/ebtables.c (CVE-2018-1068) > >> > >> ... > >> > >> > >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > >>> Date: Mon, 14 May 2018 11:10:18 +0000 > >>> From: Andi Rhod Christiansen <_arc at b4restore.com_ > > > >>> Reply-To: gpfsug main discussion list > >>> <_gpfsug-discuss at spectrumscale.org_ > > > >>> To: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> ? ? 
3.10.0-862.2.3.el7 > >>> > >>> Hi, > >>> > >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? > >>> > >>> I just had the same issue > >>> > >>> Revert to previous working kernel at redhat 7.4 release which is > > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > > level. > >>> > >>> > >>> Best regards > >>> Andi R. Christiansen > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: _gpfsug-discuss-bounces at spectrumscale.org_ > > >>> <_gpfsug-discuss-bounces at spectrumscale.org_ > > P? vegne af > >>> _z.han at imperial.ac.uk_ > >>> Sendt: 14. maj 2018 12:33 > >>> Til: gpfsug main discussion list > <_gpfsug-discuss at spectrumscale.org_ > > > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Dear All, > >>> > >>> Any one has the same problem? > >>> > >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ?; \ if > > [ $? -ne 0 ]; then \ > >>> exit 1;\ > >>> fi > >>> make[2]: Entering directory > > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > >>> ? LD ? ? ?/usr/lpp/mmfs/src/gpl-linux/built-in.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/tracelin.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/relaytrc.o > >>> ? LD [M] ?/usr/lpp/mmfs/src/gpl-linux/tracedev.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > >>> ? LD [M] ?/usr/lpp/mmfs/src/gpl-linux/mmfs26.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > >>> ? ? ? ? ? ? ? ? ?from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > >>> ? ? ? ? ? ? ? ? ?from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > > no member named ?i_wb_list? > >>> ? ? ?_TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > >>> ? ? ? ? ? ? ? ? ?^ ...... 
> >>> _______________________________________________ > >>> gpfsug-discuss mailing list > >>> gpfsug-discuss at _spectrumscale.org_ > >>> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at _spectrumscale.org_ > >> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at _spectrumscale.org_ > >> > > > _https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0_ > > > > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at _spectrumscale.org_ > > _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at _spectrumscale.org_ > > _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at _spectrumscale.org_ _ > __http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at _spectrumscale.org_ _ > __https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0_ > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 From ulmer at ulmer.org Wed May 16 03:19:47 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 21:19:47 -0500 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: <7aef4353-058f-4741-9760-319bcd037213@Spark> References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Lohit, Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. :) -- Stephen > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > Thanks Christof. > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. 
> The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > Regards, > > Lohit > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: >> > I could use CES, but CES does not support follow-symlinks outside respective SMB export. >> >> Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. >> >> > Follow-symlinks is a however a hard-requirement for to follow links outside GPFS filesystems. >> >> I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? >> >> Regards, >> >> Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ >> christof.schmitt at us.ibm.com || +1-520-799-2469 (T/L: 321-2469 ) >> >> >> ----- Original message ----- >> From: valleru at cbio.mskcc.org >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> To: gpfsug main discussion list >> Cc: >> Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks >> Date: Tue, May 15, 2018 3:04 PM >> >> Hello All, >> >> Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? >> I understand that i will not need a redundant SMB server configuration. >> >> I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement for to follow links outside GPFS filesystems. >> >> Thanks, >> Lohit >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed May 16 03:22:48 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 21:22:48 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> There isn?t a flaw in that argument, but where the security experts are concerned there is no argument. Apparently this time Red Hat just told all of their RHEL 7.4 customers to upgrade to RHEL 7.5, rather than back-porting the security patches. So this time the retirement to upgrade distributions is much worse than normal. -- Stephen > On May 15, 2018, at 5:46 PM, Marc A Kaplan wrote: > > Kevin, that seems to be a good point. 
> > IF you have dedicated hardware to acting only as a storage and/or file server, THEN neither meltdown nor spectre should not be a worry. > > BECAUSE meltdown and spectre are just about an adversarial process spying on another process or kernel memory. IF we're not letting any potential adversary run her code on our file server, what's the exposure? > > NOW, let the security experts tell us where the flaw is in this argument... > > > > From: "Buterbaugh, Kevin L" > To: gpfsug main discussion list > Date: 05/15/2018 06:12 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > All, > > I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? > > Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? > > Discuss. Thanks! > > Kevin > > On May 15, 2018, at 4:45 PM, Andrew Beattie > wrote: > > this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux > that they "just can't move off" > > > Andrew Beattie > Software Defined Storage - IT Specialist > Phone: 614-2133-7927 > E-mail: abeattie at au1.ibm.com > > > ----- Original message ----- > From: Stijn De Weirdt > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Cc: > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 > Date: Wed, May 16, 2018 5:35 AM > > so this means running out-of-date kernels for at least another month? oh > boy... > > i hope this is not some new trend in gpfs support. othwerwise all RHEL > based sites will have to start adding EUS as default cost to run gpfs > with basic security compliance. > > stijn > > > On 05/15/2018 09:02 PM, Felipe Knop wrote: > > All, > > > > Validation of RHEL 7.5 on Scale is currently under way, and we are > > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > > will include the corresponding fix. > > > > Regards, > > > > Felipe > > > > ---- > > Felipe Knop knop at us.ibm.com > > GPFS Development and Security > > IBM Systems > > IBM Building 008 > > 2455 South Rd, Poughkeepsie, NY 12601 > > (845) 433-9314 T/L 293-9314 > > > > > > > > > > > > From: Ryan Novosielski > > > To: gpfsug main discussion list > > > Date: 05/15/2018 12:56 PM > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > > 3.10.0-862.2.3.el7 > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > I know these dates can move, but any vague idea of a timeframe target for > > release (this quarter, next quarter, etc.)? > > > > Thanks! > > > > -- > > ____ > > || \\UTGERS, > > |---------------------------*O*--------------------------- > > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > > || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > > || \\ of NJ | Office of Advanced Research Computing - MSB > > C630, Newark > > `' > > > >> On May 14, 2018, at 9:30 AM, Felipe Knop > wrote: > >> > >> All, > >> > >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > > in Scale to support this kernel level, upgrading to one of those upcoming > > PTFs will be required in order to run with that kernel. > >> > >> Regards, > >> > >> Felipe > >> > >> ---- > >> Felipe Knop knop at us.ibm.com > >> GPFS Development and Security > >> IBM Systems > >> IBM Building 008 > >> 2455 South Rd, Poughkeepsie, NY 12601 > >> (845) 433-9314 T/L 293-9314 > >> > >> > >> > >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > > welcome. I see your concern but as long as IBM has not released spectrum > > scale for 7.5 that > >> > >> From: Andi Rhod Christiansen > > >> To: gpfsug main discussion list > > >> Date: 05/14/2018 08:15 AM > >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> Sent by: gpfsug-discuss-bounces at spectrumscale.org > >> > >> > >> > >> > >> You are welcome. > >> > >> I see your concern but as long as IBM has not released spectrum scale for > > 7.5 that is their only solution, in regards to them caring about security I > > would say yes they do care, but from their point of view either they tell > > the customer to upgrade as soon as red hat releases new versions and > > forcing the customer to be down until they have a new release or they tell > > them to stay on supported level to a new release is ready. > >> > >> they should release a version supporting the new kernel soon, IBM told me > > when I asked that they are "currently testing and have a support date soon" > >> > >> Best regards. > >> > >> > >> -----Oprindelig meddelelse----- > >> Fra: gpfsug-discuss-bounces at spectrumscale.org > > > P? vegne af z.han at imperial.ac.uk > >> Sendt: 14. maj 2018 13:59 > >> Til: gpfsug main discussion list > > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> > >> Thanks. Does IBM care about security, one would ask? In this case I'd > > choose to use the new kernel for my virtualization over gpfs ... sigh > >> > >> > >> https://access.redhat.com/errata/RHSA-2018:1318 > >> > >> Kernel: KVM: error in exception handling leads to wrong debug stack value > > (CVE-2018-1087) > >> > >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) > >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > > escalation (CVE-2017-16939) > >> > >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > > netfilter/ebtables.c (CVE-2018-1068) > >> > >> ... > >> > >> > >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > >>> Date: Mon, 14 May 2018 11:10:18 +0000 > >>> From: Andi Rhod Christiansen > > >>> Reply-To: gpfsug main discussion list > >>> > > >>> To: gpfsug main discussion list > > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Hi, > >>> > >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? > >>> > >>> I just had the same issue > >>> > >>> Revert to previous working kernel at redhat 7.4 release which is > > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > > level. 
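For anyone following the revert advice above, a rough sketch of pinning a node back on the RHEL 7.4 kernel series (3.10.0-693.*) until the Scale PTF arrives. The exact build numbers below are illustrative only; use whatever your repositories carry, and adjust the grub menu entry title to what your own grub.cfg shows.

# see what is running, and whether this kernel even carries the meltdown/spectre code
uname -r
grep . /sys/devices/system/cpu/vulnerabilities/* 2>/dev/null

# put the 7.4-era kernel back; the kernel installs side by side, headers have to be downgraded
yum install kernel-3.10.0-693.21.1.el7 kernel-devel-3.10.0-693.21.1.el7
yum downgrade kernel-headers-3.10.0-693.21.1.el7

# boot the older kernel by default: list the menu entries, then pick the 693 one
awk -F\' '/^menuentry/ {print $2}' /boot/grub2/grub.cfg
grub2-set-default 'Red Hat Enterprise Linux Server (3.10.0-693.21.1.el7.x86_64) 7 (Maipo)'

# stop yum pulling the 862 kernel back in on the next update
echo "exclude=kernel*" >> /etc/yum.conf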
> >>> > >>> > >>> Best regards > >>> Andi R. Christiansen > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: gpfsug-discuss-bounces at spectrumscale.org > >>> > P? vegne af > >>> z.han at imperial.ac.uk > >>> Sendt: 14. maj 2018 12:33 > >>> Til: gpfsug main discussion list > > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Dear All, > >>> > >>> Any one has the same problem? > >>> > >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > > [ $? -ne 0 ]; then \ > >>> exit 1;\ > >>> fi > >>> make[2]: Entering directory > > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > > no member named ?i_wb_list? > >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > >>> ^ ...... 
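The compile failure quoted above is the portability layer being built against a kernel this Scale level does not know about (the i_wb_list member is gone in the 862 kernels). Once the node is back on a supported kernel, or once the PTF with 7.5 support is installed, the rebuild is roughly the usual sequence; mmbuildgpl is the supported wrapper on current releases and the manual make targets do the same job:

# confirm what the build will run against
uname -r
rpm -q kernel-devel kernel-headers gpfs.base

# rebuild and install the portability layer
/usr/lpp/mmfs/bin/mmbuildgpl

# or the equivalent manual steps
cd /usr/lpp/mmfs/src
make Autoconfig && make World && make InstallImages

# restart GPFS on the node afterwards
mmshutdown && mmstartup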
> >>> _______________________________________________ > >>> gpfsug-discuss mailing list > >>> gpfsug-discuss at spectrumscale.org > >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> > > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 03:21:22 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 22:21:22 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Thanks Stephen, Yes i do acknowledge, that it will need a SERVER license and thank you for reminding me. I just wanted to make sure, from the technical point of view that we won?t face any issues by exporting a GPFS mount as a SMB export. I remember, i had seen in documentation about few years ago that it is not recommended to export a GPFS mount via Third party SMB services (not CES). But i don?t exactly remember why. Regards, Lohit On May 15, 2018, 10:19 PM -0400, Stephen Ulmer , wrote: > Lohit, > > Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. :) > > -- > Stephen > > > > > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > > > Thanks Christof. 
> > > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. > > The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > > > Regards, > > > > Lohit > > > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > > > > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. > > > > > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > > > > > Regards, > > > > > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > > > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > > > > > > > ----- Original message ----- > > > > From: valleru at cbio.mskcc.org > > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > To: gpfsug main discussion list > > > > Cc: > > > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > > > Date: Tue, May 15, 2018 3:04 PM > > > > > > > > Hello All, > > > > > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > > > I understand that i will not need a redundant SMB server configuration. > > > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > > > Thanks, > > > > Lohit > > > > > > > > > > > > _______________________________________________ > > > > gpfsug-discuss mailing list > > > > gpfsug-discuss at spectrumscale.org > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeattie at au1.ibm.com Wed May 16 03:38:59 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 16 May 2018 02:38:59 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: , <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 04:05:50 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 23:05:50 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Thank you for the detailed answer Andrew. I do understand that anything above the posix level will not be supported by IBM and might lead to scaling/other issues. We will start small, and discuss with IBM representative on any other possible efforts. Regards, Lohit On May 15, 2018, 10:39 PM -0400, Andrew Beattie , wrote: > Lohit, > > There is no technical reason why if you use the correct licensing that you can't publish a Posix fileystem using external Protocol tool rather than CES > the key thing to note is that if its not the IBM certified solution that IBM support stops at the Posix level and the protocol issues are your own to resolve. > > The reason we provide the CES environment is to provide a supported architecture to deliver protocol access,? does it have some limitations - certainly > but it is a supported environment.? Moving away from this moves the risk onto the customer to resolve and maintain. > > The other part of this, and potentially the reason why you might have been warned off using an external solution is that not all systems provide scalability and resiliency > so you may end up bumping into scaling issues by building your own environment --- and from the sound of things this is a large complex environment.? These issues are clearly defined in the CES stack and are well understood.? moving away from this will move you into the realm of the unknown -- again the risk becomes yours. > > it may well be worth putting a request in with your local IBM representative to have IBM Scale protocol development team involved in your design and see what we can support for your requirements. > > > Regards, > Andrew Beattie > Software Defined Storage? - IT Specialist > Phone: 614-2133-7927 > E-mail: abeattie at au1.ibm.com > > > > ----- Original message ----- > > From: valleru at cbio.mskcc.org > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > To: gpfsug main discussion list > > Cc: > > Subject: Re: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > Date: Wed, May 16, 2018 12:25 PM > > > > Thanks Stephen, > > > > Yes i do acknowledge, that it will need a SERVER license and thank you for reminding me. > > > > I just wanted to make sure, from the technical point of view that we won?t face any issues by exporting a GPFS mount as a SMB export. > > > > I remember, i had seen in documentation about few years ago that it is not recommended to export a GPFS mount via Third party SMB services (not CES). But i don?t exactly remember why. > > > > Regards, > > Lohit > > > > On May 15, 2018, 10:19 PM -0400, Stephen Ulmer , wrote: > > > Lohit, > > > > > > Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. 
:) > > > > > > -- > > > Stephen > > > > > > > > > > > > > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > > > > > > > Thanks Christof. > > > > > > > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > > > > > > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > > > > > > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > > > > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. > > > > The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > > > > > > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > > > > > > > Regards, > > > > > > > > Lohit > > > > > > > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > > > > > > > > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. > > > > > > > > > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > > > > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > > > > > > > > > Regards, > > > > > > > > > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > > > > > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > > > > > > > > > > > > > ----- Original message ----- > > > > > > From: valleru at cbio.mskcc.org > > > > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > To: gpfsug main discussion list > > > > > > Cc: > > > > > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > > > > > Date: Tue, May 15, 2018 3:04 PM > > > > > > > > > > > > Hello All, > > > > > > > > > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > > > > > I understand that i will not need a redundant SMB server configuration. > > > > > > > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. 
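For reference, on a self-managed smbd running on a GPFS client (not CES/mmsmb, so outside the supported stack as discussed elsewhere in this thread), the Samba knobs in question look roughly like the sketch below. The share name and path are invented, and the security caveat about wide links raised later in the thread applies in full.

[global]
    # 'allow insecure wide links' only matters if unix extensions stay enabled
    unix extensions = yes
    allow insecure wide links = yes

[lab_share]
    path = /gpfs/fs0/lab_share
    read only = no
    # let smbd follow symlinks that resolve outside the share, e.g. onto the old NFS mounts
    follow symlinks = yes
    wide links = yes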
> > > > > > > > > > > > Thanks, > > > > > > Lohit > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > gpfsug-discuss mailing list > > > > > > gpfsug-discuss at spectrumscale.org > > > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > > > > _______________________________________________ > > > > > gpfsug-discuss mailing list > > > > > gpfsug-discuss at spectrumscale.org > > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > > > > gpfsug-discuss mailing list > > > > gpfsug-discuss at spectrumscale.org > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stijn.deweirdt at ugent.be Wed May 16 05:55:24 2018 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Wed, 16 May 2018 06:55:24 +0200 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> Message-ID: <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> hi stephen, > There isn?t a flaw in that argument, but where the security experts > are concerned there is no argument. we have gpfs clients hosts where users can login, we can't update those. that is a certain worry. > > Apparently this time Red Hat just told all of their RHEL 7.4 > customers to upgrade to RHEL 7.5, rather than back-porting the > security patches. So this time the retirement to upgrade > distributions is much worse than normal. there's no 'this time', this is the default rhel support model. only with EUS you get patches for non-latest minor releases. stijn > > > > _______________________________________________ gpfsug-discuss > mailing list gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From mnaineni at in.ibm.com Wed May 16 06:18:30 2018 From: mnaineni at in.ibm.com (Malahal R Naineni) Date: Wed, 16 May 2018 10:48:30 +0530 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> Message-ID: The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). 
Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! From: Jonathan Buzzard To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 16 09:14:14 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 16 May 2018 08:14:14 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Message-ID: <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of "olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Wed May 16 09:51:25 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Wed, 16 May 2018 08:51:25 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526379829.17680.27.camel@strath.ac.uk>, <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: For us the only one that matters is the fileset quota. With or without ?perfileset-quota set, we simply see a quota value for one of the filesets that is mapped to a drive, and every other mapped drives inherits the same value. whether it?s true or not. Just about to do some SMB tracing for my PMR. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Christof Schmitt Sent: 15 May 2018 19:50 To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] SMB quotas query To maybe clarify a few points: There are three quotas: user, group and fileset. User and group quota can be applied on the fileset level or the file system level. Samba with the vfs_gpfs module, only queries the user and group quotas on the requested path. If the fileset quota should also be applied to the reported free space, that has to be done through the --filesetdf parameter. We had the fileset quota query from Samba in the past, but that was a very problematic codepath, and it was removed as --filesetdf is the more reliabel way to achieve the same result. So another part of the question is which quotas should be applied to the reported free space. Regards, Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ christof.schmitt at us.ibm.com || +1-520-799-2469 (T/L: 321-2469) ----- Original message ----- From: Jonathan Buzzard > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: Re: [gpfsug-discuss] SMB quotas query Date: Tue, May 15, 2018 3:24 AM On Tue, 2018-05-15 at 13:10 +0300, Yaron Daniel wrote: > Hi > > So - u want to get quota report per fileset quota - right ? > We use this param when we want to monitor the NFS exports with df , i > think this should also affect the SMB filesets. > > Can u try to enable it and see if it works ? > It is irrelevant to Samba, this is or should be handled in vfs_gpfs as Christof said earlier. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 16 10:02:06 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 16 May 2018 10:02:06 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: <1526461326.17680.48.camel@strath.ac.uk> On Wed, 2018-05-16 at 08:51 +0000, Sobey, Richard A wrote: > For us the only one that matters is the fileset quota. With or > without ?perfileset-quota set, we simply see a quota value for one of > the filesets that is mapped to a drive, and every other mapped drives > inherits the same value. whether it?s true or not. > ? > Just about to do some SMB tracing for my PMR. > ? I have a fully working solution that uses the dfree option in Samba if you want. I am with you here in that a lot of places will be carving a GPFS file system up with file sets with a quota that are then shared to a group of users and you want the disk size, and amount free to show up on the clients based on the quota for the fileset not the whole file system. I am really not sure what the issue with the code path for this as it is 35 lines of C including comments to get the fileset if one exists for a given path on a GPFS file system. You open a random file on the path, call gpfs_fcntl and then gpfs_getfilesetid. It's then a simple call to gpfs_quotactl. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From r.sobey at imperial.ac.uk Wed May 16 10:08:09 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Wed, 16 May 2018 09:08:09 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526461326.17680.48.camel@strath.ac.uk> References: <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> <1526461326.17680.48.camel@strath.ac.uk> Message-ID: Thanks Jonathan for the offer, but I'd prefer to have this working without implementing unsupported options in production. I'd be willing to give it a go in my test cluster though, which is exhibiting the same symptoms, so if you wouldn't mind getting in touch off list I can see how it works? I am almost certain that this used to work properly in the past though. My customers would surely have noticed a problem like this - they like to say when things are wrong ? Cheers Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 16 May 2018 10:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Wed, 2018-05-16 at 08:51 +0000, Sobey, Richard A wrote: > For us the only one that matters is the fileset quota. With or without > ?perfileset-quota set, we simply see a quota value for one of the > filesets that is mapped to a drive, and every other mapped drives > inherits the same value. whether it?s true or not. > ? > Just about to do some SMB tracing for my PMR. > ? I have a fully working solution that uses the dfree option in Samba if you want. 
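For anyone curious what that dfree approach looks like in practice, here is a rough sketch done as a shell helper rather than Jonathan's C against the GPFS API. Everything in it is an assumption to verify on your own cluster: the device name, one SMB share per fileset with a block quota set, and the column layout of mmlsattr -L and mmlsquota -j output. Samba's "dfree command" hands the script the directory being queried and expects "total free [blocksize]" back in 1K blocks; the supported route inside CES remains --filesetdf plus the user/group handling in vfs_gpfs that Christof describes above.

# smb.conf, per share:
#   dfree command = /usr/local/sbin/gpfs_dfree

#!/bin/bash
# gpfs_dfree - report the owning fileset's block quota as the share's disk size
dir="${1:-.}"        # Samba passes the directory being queried (often "./", cwd is the share root)
dev="gpfs0"          # file system device name, hard-coded for the sketch

# fileset that owns the path (taken from the "fileset name:" line of mmlsattr -L)
fset=$(mmlsattr -L "$dir" 2>/dev/null | awk -F': *' '/fileset name/ {print $2}')

# usage and hard limit in KiB; column positions assume the usual mmlsquota -j layout
read -r usage limit <<< "$(mmlsquota -j "$fset" --block-size 1K "$dev" 2>/dev/null |
                           awk '$2 == "FILESET" {print $3, $5}')"

if [ -z "$limit" ] || [ "$limit" -eq 0 ]; then
    # no fileset quota set - fall back to the real file system numbers
    read -r total free <<< "$(df -Pk "$dir" | awk 'NR==2 {print $2, $4}')"
else
    total="$limit"
    free=$(( limit > usage ? limit - usage : 0 ))
fi

echo "$total $free 1024"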
I am with you here in that a lot of places will be carving a GPFS file system up with file sets with a quota that are then shared to a group of users and you want the disk size, and amount free to show up on the clients based on the quota for the fileset not the whole file system. I am really not sure what the issue with the code path for this as it is 35 lines of C including comments to get the fileset if one exists for a given path on a GPFS file system. You open a random file on the path, call gpfs_fcntl and then gpfs_getfilesetid. It's then a simple call to gpfs_quotactl. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From smita.raut at in.ibm.com Wed May 16 11:23:05 2018 From: smita.raut at in.ibm.com (Smita J Raut) Date: Wed, 16 May 2018 15:53:05 +0530 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm >From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" To: gpfsug main discussion list Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of "olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. 
let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? 
Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 16 13:23:41 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 16 May 2018 13:23:41 +0100 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: <1526473421.17680.57.camel@strath.ac.uk> On Tue, 2018-05-15 at 22:32 +0000, Christof Schmitt wrote: > > I could use CES, but CES does not support follow-symlinks outside > respective SMB export. > ? > Samba has the 'wide links' option, that we currently do not test and > support as part of the mmsmb integration. You can always open a RFE > and ask that we support this option in a future release. > ? Note?that if unix extensions are on then you also need the "allow insecure wide links" option, which is a pretty good hint as to why one should steer several parsecs wide of using symlinks on SMB exports. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From daniel.kidger at uk.ibm.com Wed May 16 13:37:27 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Wed, 16 May 2018 12:37:27 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: <1526473421.17680.57.camel@strath.ac.uk> References: <1526473421.17680.57.camel@strath.ac.uk>, Message-ID: An HTML attachment was scrubbed... 
URL: From Renar.Grunenberg at huk-coburg.de Wed May 16 14:31:30 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Wed, 16 May 2018 13:31:30 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: <5ef78d14aa0c4a23b2979b13deeecab7@SMXRF108.msg.hukrf.de> Hallo Smita, i will search in wich rhel-release is the 0.15 release available. If we found one I want to install, and give feedback. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 +++ Bitte beachten Sie die neuen Telefonnummern +++ +++ Siehe auch: https://www.huk.de/presse/pressekontakt/ansprechpartner.html +++ E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? 
I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File 
"/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed May 16 15:05:19 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 16 May 2018 09:05:19 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> Message-ID: <20485D89-2F0F-4905-A5C7-FCACAAAB1FCC@ulmer.org> > On May 15, 2018, at 11:55 PM, Stijn De Weirdt wrote: > > hi stephen, > >> There isn?t a flaw in that argument, but where the security experts >> are concerned there is no argument. > we have gpfs clients hosts where users can login, we can't update those. > that is a certain worry. The original statement from Marc was about dedicated hardware for storage and/or file serving. 
If that?s not the use case, then neither his logic nor my support of it apply. >> >> Apparently this time Red Hat just told all of their RHEL 7.4 >> customers to upgrade to RHEL 7.5, rather than back-porting the >> security patches. So this time the retirement to upgrade >> distributions is much worse than normal. > there's no 'this time', this is the default rhel support model. only > with EUS you get patches for non-latest minor releases. > > stijn > You are correct! I did a quick check and most of my customers are enterprise-y, and many of them seem to have EUS. I thought it was standard, but it is not. I could be mixing Red Hat up with another Linux vendor at this point? Liberty, -- Stephen From bbanister at jumptrading.com Wed May 16 16:30:14 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 16 May 2018 15:30:14 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> Message-ID: <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> Malahal is correct, we did modify our version of the systemd unit and the update is being overwritten. My bad. We seemed to have issues with the original version, but will try to use the new version and will open a ticket if we have issues. Definitely do not want to modify the IBM provided configs as this is an obvious example of how that can come back to bite you!! Not symlink is needed as Malahal states. Sorry for the confusion and false alarms. Thanks Malahal!! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Malahal R Naineni Sent: Wednesday, May 16, 2018 12:19 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! From: Jonathan Buzzard > To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Wed May 16 17:01:18 2018 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Wed, 16 May 2018 16:01:18 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> , <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> Message-ID: <3D5B04DE-3BC4-478D-A32F-C4417358A003@rutgers.edu> Thing to do here ought to be using overrides in /etc/systemd, not modifying the vendor scripts. I can?t think of a case where one would want to do otherwise, but it may be out there. -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' On May 16, 2018, at 11:30, Bryan Banister > wrote: Malahal is correct, we did modify our version of the systemd unit and the update is being overwritten. My bad. We seemed to have issues with the original version, but will try to use the new version and will open a ticket if we have issues. Definitely do not want to modify the IBM provided configs as this is an obvious example of how that can come back to bite you!! Not symlink is needed as Malahal states. Sorry for the confusion and false alarms. Thanks Malahal!! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Malahal R Naineni Sent: Wednesday, May 16, 2018 12:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! 
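To make the override route concrete, a minimal sketch of the drop-in approach Ryan describes (the unit and package names below are assumptions, check what the gpfs ganesha rpms actually ship on your protocol nodes):

# does the rpm-shipped unit file carry local modifications?
rpm -qV gpfs.nfs-ganesha 2>/dev/null || rpm -qa | grep -i ganesha

# keep local changes in a drop-in instead of editing the shipped unit
systemctl edit nfs-ganesha.service
#   this creates /etc/systemd/system/nfs-ganesha.service.d/override.conf
#   holding only the directives you want to change, e.g.
#     [Service]
#     LimitNOFILE=1048576
systemctl daemon-reload

A drop-in survives package updates, which avoids the kind of clash between local edits and the rpm-shipped unit discussed above. On CES nodes the protocol services are started and stopped by CES itself, so it is probably wise to limit any override to settings CES does not manage.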
From: Jonathan Buzzard > To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C333d1c944c464856be7008d5bb41f07f%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636620814253162614&sdata=ihaClVwGs9Cp69UflH7eYp%2F0q7%2FR29AY%2FbM1IzbZrsI%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Wed May 16 18:01:52 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Wed, 16 May 2018 17:01:52 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526461326.17680.48.camel@strath.ac.uk> References: <1526461326.17680.48.camel@strath.ac.uk>, <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: From bevans at pixitmedia.com Thu May 17 14:41:57 2018 From: bevans at pixitmedia.com (Barry Evans) Date: Thu, 17 May 2018 14:41:57 +0100 Subject: [gpfsug-discuss] =?utf-8?Q?=E2=80=94subblocks-per-full-block_?=in 5.0.1 Message-ID: Slight wonkiness in mmcrfs script that spits this out ?subblocks-per-full-block as an invalid option. No worky: ? ? 777 ? ? ? ? subblocks-per-full-block ) ? ? 778 ? ? ? ? ? if [[ -z $optArg ]] ? ? 779 ? ? ? ? ? then ? ? 780 ? ? ? ? ? ? # The expected argument is not in the same string as its ? ? 781 ? ? ? ? ? ? # option name. ?Get it from the next token. ? ? 782 ? ? ? ? ? ? eval optArg="\${$OPTIND}" ? ? 783 ? ? ? ? ? ? [[ -z $optArg ]] && ?\ ? ? 784 ? ? ? ? ? ? ? syntaxError "missingValue" $noUsageMsg "--$optName_lc" ? ? 785 ? ? ? ? ? ? shift 1 ? ? 786 ? ? ? ? ? fi ? ? 787 ? ? ? ? ? 
[[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? 788 ? ? ? ? ? ? syntaxError "multiple" $noUsageMsg "--$optName_lc" ? ? 789 ? ? ? ? ? subblocksPerFullBlockOpt="--$optName_lc" ? ? 790 ? ? 791 ? ? ? ? ? nSubblocksArg=$(checkIntRange --subblocks-per-full-block $optArg 32 8192) ? ? 792 ? ? ? ? ? [[ $? -ne 0 ]] && syntaxError nomsg $noUsageMsg ? ? 793 ? ? ? ? ? tscrfsParms="$tscrfsParms --subblocks-per-full-block $nSubblocksArg" ? ? 794 ? ? ? ? ? ;; Worky: ? ? 777 ? ? ? ? subblocks-per-full-block ) ? ? 778 ? ? ? ? ? if [[ -z $optArg ]] ? ? 779 ? ? ? ? ? then ? ? 780 ? ? ? ? ? ? # The expected argument is not in the same string as its ? ? 781 ? ? ? ? ? ? # option name. ?Get it from the next token. ? ? 782 ? ? ? ? ? ? eval optArg="\${$OPTIND}" ? ? 783 ? ? ? ? ? ? [[ -z $optArg ]] && ?\ ? ? 784 ? ? ? ? ? ? ? syntaxError "missingValue" $noUsageMsg "--$optName_lc" ? ? 785 ? ? ? ? ? ? shift 1 ? ? 786 ? ? ? ? ? fi ? ? 787 ? ? ? ? ? #[[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? 788 ? ? ? ? ? [[ -n $nSubblocksArg ?]] && ?\ ? ? 789 ? ? ? ? ? ? syntaxError "multiple" $noUsageMsg "--$optName_lc" ? ? 790 ? ? ? ? ? #subblocksPerFullBlockOpt="--$optName_lc" ? ? 791 ? ? ? ? ? nSubblocksArg="--$optName_lc" ? ? 792 ? ? 793 ? ? ? ? ? nSubblocksArg=$(checkIntRange --subblocks-per-full-block $optArg 32 8192) ? ? 794 ? ? ? ? ? [[ $? -ne 0 ]] && syntaxError nomsg $noUsageMsg ? ? 795 ? ? ? ? ? tscrfsParms="$tscrfsParms --subblocks-per-full-block $nSubblocksArg" ? ? 796 ? ? ? ? ? ;; Looks like someone got halfway through the variable change ?subblocksPerFullBlockOpt"?is referenced elsewhere in the script: if [[ -z $forceOption ]] then ? [[ -n $fflag ]] && ?\ ? ? syntaxError "invalidOption" $usageMsg "$fflag" ? [[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? syntaxError "invalidOption" $usageMsg "$subblocksPerFullBlockOpt" fi ...so this is probably naughty on my behalf. Kind Regards, Barry Evans CTO/Co-Founder Pixit Media Ltd +44 7950 666 248 bevans at pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Thu May 17 16:31:47 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 17 May 2018 16:31:47 +0100 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <1526473421.17680.57.camel@strath.ac.uk> , Message-ID: <1526571107.17680.81.camel@strath.ac.uk> On Wed, 2018-05-16 at 12:37 +0000, Daniel Kidger wrote: > Jonathan, > ? > Are you suggesting that a SMB?exported symlink to /etc/shadow is > somehow a bad thing ??:-) > The irony is that people are busy complaining about not being able to update their kernels for security reasons while someone else is complaining about not being able to do what can only be described in 2018 as very bad practice. 
The right answer IMHO is to forget about symlinks being followed server side and take the opportunity that migrating it all to GPFS gives you to re-architect your storage so they are no longer needed. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From Renar.Grunenberg at huk-coburg.de Thu May 17 17:13:30 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Thu, 17 May 2018 16:13:30 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. 
If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
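As an aside, the TypeError in the trace above is easy to reproduce outside of yum: pyOpenSSL 0.14 only accepts a byte-string CA path, while the RHN plugin hands it a unicode one, which is the unicode handling Smita mentions is fixed in 0.15. A rough sketch, with the CA path being the usual RHN one and purely illustrative:

# which package owns the module the interpreter actually loads?
rpm -qf /usr/lib/python2.7/site-packages/OpenSSL/SSL.py
rpm -q pyOpenSSL

# the 0.14 behaviour in one line: unicode path rejected, byte string accepted
python -c 'from OpenSSL import SSL; SSL.Context(SSL.TLSv1_METHOD).load_verify_locations(u"/usr/share/rhn/RHNS-CA-CERT")'
python -c 'from OpenSSL import SSL; SSL.Context(SSL.TLSv1_METHOD).load_verify_locations(b"/usr/share/rhn/RHNS-CA-CERT")'

The first python call fails with the same "cafile must be None or a byte string" message as in the trace; the second gets past the type check (it will still complain if the file does not exist, but that is a different error).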
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at ca.ibm.com Fri May 18 16:25:52 2018 From: bzhang at ca.ibm.com (Bohai Zhang) Date: Fri, 18 May 2018 11:25:52 -0400 Subject: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Message-ID: IBM Spectrum Scale Support Webinar Spectrum Scale Disk Lease, Expel & Recovery About this Webinar IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to share expertise and knowledge of the Spectrum Scale product, as well as product updates and best practices based on various use cases. This webinar introduces various concepts and features related to disk lease, node expel, and node recovery. It explains the mechanism of disk lease, the common scenarios and causes for node expel, and different phases of node recovery. It also explains DMS (Deadman Switch) timer which could trigger kernel panic as a result of lease expiry and hang I/O. This webinar also talks about best practice tuning, recent improvements to mitigate node expels and RAS improvements for expel debug data collection. Recent critical defects about node expel will also be discussed in this webinar. Please note that our webinars are free of charge and will be held online via WebEx. Agenda: ? Disk lease concept and mechanism ? Node expel concept, causes and use cases ? Node recover concept and explanation ? Parameter explanation and tuning ? Recent improvement and critical issues ? Q&A NA/EU Session Date: June 6, 2018 Time: 10 AM ? 11AM EDT (2 PM ? 3PM GMT) Registration: https://ibm.biz/BdZLgY Audience: Spectrum Scale Administrators AP/JP Session Date: June 6, 2018 Time: 10 AM ? 11 AM Beijing Time (11 AM ? 12 AM Tokyo Time) Registration: https://ibm.biz/BdZLgi Audience: Spectrum Scale Administrators If you have any questions, please contact IBM Spectrum Scale support. Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73794593.gif Type: image/gif Size: 2665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73540552.gif Type: image/gif Size: 275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73219387.gif Type: image/gif Size: 305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73169142.gif Type: image/gif Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73563875.gif Type: image/gif Size: 3621 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73474166.gif Type: image/gif Size: 1243 bytes Desc: not available URL: From skylar2 at uw.edu Fri May 18 16:32:05 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Fri, 18 May 2018 15:32:05 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery In-Reply-To: References: Message-ID: <20180518153205.beb5brsgadpnf7y3@utumno.gs.washington.edu> Hi Bohai, Will this be recorded? I'll be on vacation but am interested to learn about the topics under discussion. On Fri, May 18, 2018 at 11:25:52AM -0400, Bohai Zhang wrote: > > > > > > IBM Spectrum Scale Support Webinar > Spectrum Scale Disk Lease, Expel & Recovery > > > > > > > About this Webinar > IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to > share expertise and knowledge of the Spectrum Scale product, as well as > product updates and best practices based on various use cases. This webinar > introduces various concepts and features related to disk lease, node expel, > and node recovery. It explains the mechanism of disk lease, the common > scenarios and causes for node expel, and different phases of node recovery. > It also explains DMS (Deadman Switch) timer which could trigger kernel > panic as a result of lease expiry and hang I/O. This webinar also talks > about best practice tuning, recent improvements to mitigate node expels and > RAS improvements for expel debug data collection. Recent critical defects > about node expel will also be discussed in this webinar. > > > > > Please note that our webinars are free of charge and will be held online > via WebEx. > > Agenda: > > ? Disk lease concept and mechanism > > ? Node expel concept, causes and use cases > > ? Node recover concept and explanation > > > ? Parameter explanation and tuning > > > ? Recent improvement and critical issues > > > ? Q&A > > NA/EU Session > Date: June 6, 2018 > Time: 10 AM ??? 11AM EDT (2 PM ??? 3PM GMT) > Registration: https://ibm.biz/BdZLgY > Audience: Spectrum Scale Administrators > > AP/JP Session > Date: June 6, 2018 > Time: 10 AM ??? 11 AM Beijing Time (11 AM ??? 12 AM Tokyo Time) > Registration: https://ibm.biz/BdZLgi > Audience: Spectrum Scale Administrators > > > If you have any questions, please contact IBM Spectrum Scale support. 
> > Regards, > > > > > > > IBM > Spectrum > Computing > > Bohai Zhang Critical > Senior Technical Leader, IBM Systems Situation > Tel: 1-905-316-2727 Resolver > Mobile: 1-416-897-7488 Expert Badge > Email: bzhang at ca.ibm.com > 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada > Live Chat at IBMStorageSuptMobile Apps > > > > Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM > | dWA > We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to > recommend IBM. > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From Robert.Oesterlin at nuance.com Fri May 18 16:37:48 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 18 May 2018 15:37:48 +0000 Subject: [gpfsug-discuss] Presentations from the May 16-17 User Group meeting in Cambridge Message-ID: Thanks to all the presenters and attendees, it was a great get-together. I?ll be posting these soon to spectrumscale.org, but I need to sort out the size restrictions with Simon, so it may be a few more days. Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... URL: From smita.raut at in.ibm.com Fri May 18 17:10:11 2018 From: smita.raut at in.ibm.com (Smita J Raut) Date: Fri, 18 May 2018 21:40:11 +0530 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de><6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Message-ID: Hi Renar, Yes we plan to include newer pyOpenSSL in 5.0.1.1 Thanks, Smita From: "Grunenberg, Renar" To: 'gpfsug main discussion list' Date: 05/17/2018 09:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. Von: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm >From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of " olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" < gpfsug-discuss at spectrumscale.org> Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Fri May 18 18:07:56 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Fri, 18 May 2018 17:07:56 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de><6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Message-ID: Hallo Smita, thanks that sounds good. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Freitag, 18. 
Mai 2018 18:10 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Hi Renar, Yes we plan to include newer pyOpenSSL in 5.0.1.1 Thanks, Smita From: "Grunenberg, Renar" > To: 'gpfsug main discussion list' > Date: 05/17/2018 09:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list > Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? 
Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at ca.ibm.com Fri May 18 19:19:24 2018 From: bzhang at ca.ibm.com (Bohai Zhang) Date: Fri, 18 May 2018 14:19:24 -0400 Subject: [gpfsug-discuss] Fw: IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Message-ID: Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. ----- Forwarded by Bohai Zhang/Ontario/IBM on 2018/05/18 02:18 PM ----- From: Bohai Zhang/Ontario/IBM To: Skylar Thompson Date: 2018/05/18 11:40 AM Subject: Re: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Hi Skylar, Thanks for your interesting. It will be recorded. If you register, we will send you a following up email after the webinar which will contain the link to the recording. Have a nice weekend. Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. From: Skylar Thompson To: bzhang at ca.ibm.com Cc: gpfsug-discuss at spectrumscale.org Date: 2018/05/18 11:34 AM Subject: Re: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Hi Bohai, Will this be recorded? I'll be on vacation but am interested to learn about the topics under discussion. 
On Fri, May 18, 2018 at 11:25:52AM -0400, Bohai Zhang wrote: > > > > > > IBM Spectrum Scale Support Webinar > Spectrum Scale Disk Lease, Expel & Recovery > > > > > > > About this Webinar > IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to > share expertise and knowledge of the Spectrum Scale product, as well as > product updates and best practices based on various use cases. This webinar > introduces various concepts and features related to disk lease, node expel, > and node recovery. It explains the mechanism of disk lease, the common > scenarios and causes for node expel, and different phases of node recovery. > It also explains DMS (Deadman Switch) timer which could trigger kernel > panic as a result of lease expiry and hang I/O. This webinar also talks > about best practice tuning, recent improvements to mitigate node expels and > RAS improvements for expel debug data collection. Recent critical defects > about node expel will also be discussed in this webinar. > > > > > Please note that our webinars are free of charge and will be held online > via WebEx. > > Agenda: > > ? Disk lease concept and mechanism > > ? Node expel concept, causes and use cases > > ? Node recover concept and explanation > > > ? Parameter explanation and tuning > > > ? Recent improvement and critical issues > > > ? Q&A > > NA/EU Session > Date: June 6, 2018 > Time: 10 AM ??? 11AM EDT (2 PM ??? 3PM GMT) > Registration: https://ibm.biz/BdZLgY > Audience: Spectrum Scale Administrators > > AP/JP Session > Date: June 6, 2018 > Time: 10 AM ??? 11 AM Beijing Time (11 AM ??? 12 AM Tokyo Time) > Registration: https://ibm.biz/BdZLgi > Audience: Spectrum Scale Administrators > > > If you have any questions, please contact IBM Spectrum Scale support. > > Regards, > > > > > > > IBM > Spectrum > Computing > > Bohai Zhang Critical > Senior Technical Leader, IBM Systems Situation > Tel: 1-905-316-2727 Resolver > Mobile: 1-416-897-7488 Expert Badge > Email: bzhang at ca.ibm.com > 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada > Live Chat at IBMStorageSuptMobile Apps > > > > Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM > | dWA > We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to > recommend IBM. > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F310241.gif Type: image/gif Size: 2665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F811734.gif Type: image/gif Size: 275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F210195.gif Type: image/gif Size: 305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 7F911712.gif Type: image/gif Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F859587.gif Type: image/gif Size: 3621 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F303375.gif Type: image/gif Size: 1243 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From hopii at interia.pl Fri May 18 19:53:57 2018 From: hopii at interia.pl (hopii at interia.pl) Date: Fri, 18 May 2018 20:53:57 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos authentication issue Message-ID: Hi there, I'm just learning, trying to configure Spectrum Scale: SMB File Authentication using LDAP (IPA) with kerberos, and been struggling with it for a couple of days, without success. Users on spectrum cluster and client machine are authenticated properly, so ldap should be fine. NFS mount with keberos works with no issues as well. But I ran out of ideas how to configure SMB using LDAP with kerberos. I could messed up with netbios names, as am not sure which one to use, from cluster node, from protocol node, exactly which one. But error message seems to point to keytab file, which is present on both, server and client nodes. I ran into simillar post, dated few days ago, so I'm not the only one. https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html Below is my configuration and error message, and I'd appreciate any hints or help. Thank you, d. Error message from /var/adm/ras/log.smbd [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) GENSEC backend 'ntlmssp_resume_ccache' registered [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) smb_gss_krb5_import_cred failed with [Unspecified GSS failure. 
Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR Cluster nodes spectrum1.example.com RedHat 7.4 spectrum2.example.com RedHat 7.4 spectrum3.example.com RedHat 7.4 Protocols nodes: labs1.example.com lasb2.example.com labs3.example.com ssipa.example.com Centos 7.5 spectrum scale server: [root at spectrum1 security]# klist -k Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 host/labs1.example.com at example.com 1 host/labs1.example.com at example.com 1 host/labs2.example.com at example.com 1 host/labs2.example.com at example.com 1 host/labs3.example.com at example.com 1 host/labs3.example.com at example.com 1 nfs/labs1.example.com at example.com 1 nfs/labs1.example.com at example.com 1 nfs/labs2.example.com at example.com 1 nfs/labs2.example.com at example.com 1 nfs/labs3.example.com at example.com 1 nfs/labs3.example.com at example.com 1 cifs/labs1.example.com at example.com 1 cifs/labs1.example.com at example.com 1 cifs/labs2.example.com at example.com 1 cifs/labs2.example.com at example.com 1 cifs/labs3.example.com at example.com 1 cifs/labs3.example.com at example.com [root at spectrum1 security]# net conf list [global] disable netbios = yes disable spoolss = yes printcap cache time = 0 fileid:algorithm = fsname fileid:fstype allow = gpfs syncops:onmeta = no preferred master = no client NTLMv2 auth = yes kernel oplocks = no level2 oplocks = yes debug hires timestamp = yes max log size = 100000 host msdfs = yes notify:inotify = yes wide links = no log writeable files on exit = yes ctdb locktime warn threshold = 5000 auth methods = guest sam winbind smbd:backgroundqueue = False read only = no use sendfile = no strict locking = auto posix locking = no large readwrite = yes aio read size = 1 aio write size = 1 force unknown acl user = yes store dos attributes = yes map readonly = yes map archive = yes map system = yes map hidden = yes ea support = yes groupdb:backend = tdb winbind:online check timeout = 30 winbind max domain connections = 5 winbind max clients = 10000 dmapi support = no unix extensions = no socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 strict allocate = yes tdbsam:map builtin = no aio_pthread:aio open = yes dfree cache time = 100 change notify = yes max open files = 20000 time_audit:timeout = 5000 gencache:stabilize_count = 10000 server min protocol = SMB2_02 server max protocol = SMB3_02 vfs objects = shadow_copy2 syncops gpfs fileid time_audit smbd profiling level = on log level = 1 logging = syslog at 0 file smbd exit on ip drop = yes durable handles = no ctdb:smbxsrv_open_global.tdb = false mangled names = illegal include system krb5 conf = no smbd:async search ask sharemode = yes gpfs:sharemodes = yes gpfs:leases = yes gpfs:dfreequota = yes gpfs:prealloc = yes gpfs:hsm = yes gpfs:winattr = yes gpfs:merge_writeappend = no fruit:metadata = stream fruit:nfs_aces = no fruit:veto_appledouble = no readdir_attr:aapl_max_access = false shadow:snapdir = .snapshots shadow:fixinodes = yes shadow:snapdirseverywhere = yes shadow:sort = desc nfs4:mode = simple nfs4:chown = yes nfs4:acedup = merge add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport delete share command = 
/usr/lpp/mmfs/bin/mmcesmmcdelexport server string = IBM NAS client use spnego = yes kerberos method = system keytab ldap admin dn = cn=Directory Manager ldap ssl = start tls ldap suffix = dc=example,dc=com netbios name = spectrum1 passdb backend = ldapsam:"ldap://ssipa.example.com" realm = example.com security = ADS dedicated keytab file = /etc/krb5.keytab password server = ssipa.example.com idmap:cache = no idmap config * : read only = no idmap config * : backend = autorid idmap config * : range = 10000000-299999999 idmap config * : rangesize = 1000000 workgroup = labs1 ntlm auth = yes [share1] path = /ibm/gpfs1/labs1 guest ok = no browseable = yes comment = jas share smb encrypt = disabled [root at spectrum1 ~]# mmsmb export list export path browseable guest ok smb encrypt share1 /ibm/gpfs1/labs1 yes no disabled userauth command: mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com root at spectrum1 ~]# mmuserauth service list FILE access configuration : LDAP PARAMETERS VALUES ------------------------------------------------- ENABLE_SERVER_TLS true ENABLE_KERBEROS true USER_NAME cn=Directory Manager SERVERS ssipa.example.com NETBIOS_NAME spectrum1 BASE_DN dc=example,dc=com USER_DN none GROUP_DN none NETGROUP_DN none USER_OBJECTCLASS posixAccount GROUP_OBJECTCLASS posixGroup USER_NAME_ATTRIB cn USER_ID_ATTRIB uid KERBEROS_SERVER ssipa.example.com KERBEROS_REALM example.com OBJECT access not configured PARAMETERS VALUES ------------------------------------------------- net ads keytab list -> does not show any keys LDAP user information was updated with Samba attributes according to the documentation: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm [root at spectrum1 ~]# pdbedit -L -v Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 No builtin backend found, trying to load plugin Module 'ldapsam' loaded db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] StartTLS issued: using a TLS connection smbldap_open_connection: connection opened ldap_connect_system: successful connection to the LDAP server smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] smbldap_search_paged: search was successful init_sam_from_ldap: Entry found for user: jas --------------- Unix username: jas NT username: jas Account Flags: [U ] User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 Forcing Primary Group to 'Domain Users' for jas Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 Full Name: jas jas Home Directory: \\spectrum1\jas HomeDir Drive: Logon Script: Profile Path: \\spectrum1\jas\profile Domain: SPECTRUM1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Thu, 17 May 2018 14:08:01 EDT Password can change: Thu, 17 May 2018 14:08:01 EDT Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF Client keytab file: [root at test ~]# klist -k 
Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 host/test.example.com at example.com 1 host/test.example.com at example.com From christof.schmitt at us.ibm.com Sat May 19 00:05:56 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Fri, 18 May 2018 23:05:56 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos authentication issue In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From spectrumscale at kiranghag.com Sat May 19 05:00:04 2018 From: spectrumscale at kiranghag.com (KG) Date: Sat, 19 May 2018 09:30:04 +0530 Subject: [gpfsug-discuss] NFS on system Z Message-ID: Hi The SS FAQ says following for system Z - Cluster Export Service (CES) is not supported. (Monitoring capabilities, Object, CIFS, User space implementation of NFS) - Kernel NFS (v3 and v4) is supported. Clustered NFS is not supported. Does this mean we can only configure OS based non-redundant NFS exports from scale nodes without CNFS/CES? Kiran Ghag -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Sat May 19 07:58:41 2018 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Sat, 19 May 2018 08:58:41 +0200 Subject: [gpfsug-discuss] NFS on system Z In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Sun May 20 19:42:32 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sun, 20 May 2018 18:42:32 +0000 Subject: [gpfsug-discuss] NFS on system Z In-Reply-To: Message-ID: Kieran, You can also add x86 nodes to run CES and Ganesha NFS. Either in the same cluster or perhaps neater in a separate multi-cluster Mount. Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 19 May 2018, at 07:58, Olaf Weiser wrote: > > HI, > yes.. CES comes along with lots of monitors about status, health checks and a special NFS (ganesha) code.. which is optimized / available only for a limited choice of OS/platforms > so CES is not available for e.g. AIX and in your case... not available for systemZ ... > > but - of course you can setup your own NFS server .. > > > > > From: KG > To: gpfsug main discussion list > Date: 05/19/2018 06:00 AM > Subject: [gpfsug-discuss] NFS on system Z > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi > > The SS FAQ says following for system Z > Cluster Export Service (CES) is not supported. (Monitoring capabilities, Object, CIFS, User space implementation of NFS) > Kernel NFS (v3 and v4) is supported. Clustered NFS is not supported. > > Does this mean we can only configure OS based non-redundant NFS exports from scale nodes without CNFS/CES? > > Kiran Ghag > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Sun May 20 22:39:41 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 20 May 2018 21:39:41 +0000 Subject: [gpfsug-discuss] Presentations for Spectrum Scale USA - May 16th-17th Message-ID: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> I?ve uploaded what I have received so far to the spectrumscale.org website, and they are located here: https://www.spectrumscaleug.org/presentations/2018/ Still working on the other authors for their content. Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.s.knister at nasa.gov Mon May 21 02:41:08 2018 From: aaron.s.knister at nasa.gov (Aaron Knister) Date: Sun, 20 May 2018 21:41:08 -0400 (EDT) Subject: [gpfsug-discuss] Presentations for Spectrum Scale USA - May 16th-17th In-Reply-To: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> References: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> Message-ID: I must admit, I got a chuckle out of this typo: Compostable Infrastructure for Technical Computing sadly, I'm sure we all have stories about what we would consider "compostable" infrastructure. -Aaron -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 On Sun, 20 May 2018, Oesterlin, Robert wrote: > > I?ve uploaded what I have received so far to the spectrumscale.org website, and they are located here: > > ? > > https://www.spectrumscaleug.org/presentations/2018/ > > ? > > Still working on the other authors for their content. > > ? > > ? > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > ? > > > From bbanister at jumptrading.com Mon May 21 21:51:54 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 21 May 2018 20:51:54 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> Message-ID: <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? 
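
As a concrete illustration of the two forms discussed in this thread (the file system fpi_test02, fileset root and user bbanister are taken from the output quoted below and reused here purely as examples):

# Documented form: resets this user's explicit entries back to the default
# quota across every file system and fileset
mmedquota -d -u bbanister

# Colon form described above (not in the man page at the time of writing):
# limits the reset to one user in one fileset of one file system
mmedquota -d -u fpi_test02:root:bbanister

The colon-separated arguments mirror the Device:Fileset notation that mmedquota -h already prints for the -j option, which is presumably why the form works even though it is undocumented.
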
-Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
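
For completeness, a sketch of how the situation described above arises and how to spot it. The device, fileset and user names are the same example names used elsewhere in this thread, and the placeholders in the original mmsetquota command were stripped by the archive, so this reconstruction is an assumption:

# Setting a 0:0 block limit still records an *explicit* per-user entry,
# which then overrides any default quota later defined for the fileset
mmsetquota fpi_test02:root --user bbanister --block 0:0

# mmrepquota -v shows which rule governs each entry: explicit entries are
# tagged "e", while entries following the fileset default show "d_fset"
mmrepquota -v fpi_test02:root --block-size G

As the output quoted above shows, clearing the entry with mmedquota -d flips the entryType from "e" back to "d_fset".
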
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Tue May 22 09:01:21 2018 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Tue, 22 May 2018 16:01:21 +0800 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com><672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com><723293fee7214938ae20cdfdbaf99149@jumptrading.com><3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Message-ID: Hi Kuei-Yu, Should we update the document as the requested below ? Thanks. 
Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Bryan Banister To: gpfsug main discussion list Date: 05/22/2018 04:52 AM Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. 
The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Tue May 22 09:51:51 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Tue, 22 May 2018 08:51:51 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: Hi all, This has been resolved by (I presume what Jonathan was referring to in his posts) setting "dfree cache time" to 0. Many thanks for everyone's input on this! Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Sobey, Richard A Sent: 14 May 2018 12:54 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Thanks Jonathan. What I failed to mention in my OP was that MacOS clients DO report the correct size of each mounted folder. Not sure how that changes anything except to reinforce the idea that it's Windows at fault. Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 14 May 2018 11:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. 
I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From p.childs at qmul.ac.uk Tue May 22 10:23:58 2018 From: p.childs at qmul.ac.uk (Peter Childs) Date: Tue, 22 May 2018 09:23:58 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Message-ID: Its a little difficult that the different quota commands for Spectrum Scale are all different in there syntax and can only be used by the "right" people. As far as I can see mmedquota is the only quota command that uses this "full colon" syntax and it would be better if its syntax matched that for mmsetquota and mmlsquota. or that the reset to default quota was added to mmsetquota and mmedquota was left for editing quotas visually in an editor. Regards Peter Childs On Tue, 2018-05-22 at 16:01 +0800, IBM Spectrum Scale wrote: Hi Kuei-Yu, Should we update the document as the requested below ? Thanks. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. [Inactive hide details for Bryan Banister ---05/22/2018 04:52:15 AM---Quick update. Thanks to a colleague of mine, John Valdes,]Bryan Banister ---05/22/2018 04:52:15 AM---Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system From: Bryan Banister To: gpfsug main discussion list Date: 05/22/2018 04:52 AM Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ Unfortunately it doesn?t look like there is a way to target a specific quota. 
So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Peter Childs ITS Research Storage Queen Mary, University of London -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: 

From valleru at cbio.mskcc.org Tue May 22 16:42:43 2018
From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org)
Date: Tue, 22 May 2018 11:42:43 -0400
Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2
Message-ID: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark>

Hello All,

We upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5 (that is, we have not run the mmchconfig release=LATEST command). Right after the upgrade, we are seeing many "ps hangs" across the cluster. All the "ps hangs" happen when jobs run related to a Java process or many Java threads (example: GATK). The hangs are pretty random, and have no particular pattern except that we know they are related to Java or to jobs reading from directories with about 600000 files.

I raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, according to the ticket they seemed to feel that it might not be related to GPFS, although we are sure that these hangs started to appear only after we upgraded GPFS to 5.0.0.2 from 4.2.3.2.

One of the other reasons we are not able to prove that it is GPFS is that we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang once "ps hangs", and thus it is getting difficult to get any dumps from GPFS.

Also - according to the IBM ticket, they seem to have seen a "ps hang" issue for which we have to run the mmchconfig release=LATEST command, and that will resolve the issue. However, we are not comfortable making the permanent change to filesystem version 5, and since we don't see any near solution to these hangs we are thinking of downgrading to GPFS 4.2.3.2, or to the previous state in which we know the cluster was stable.

Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2: is it just a matter of reinstalling all RPMs at the previous version, or is there anything else I need to check with respect to the GPFS configuration? I think GPFS 5.0 might have updated internal default GPFS configuration parameters, and I am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2.

Our previous state:

2 Storage clusters - 4.2.3.2
1 Compute cluster - 4.2.3.2 (remote mounts the above 2 storage clusters)

Our current state:

2 Storage clusters - 5.0.0.2 (filesystem version - 4.2.2.2)
1 Compute cluster - 5.0.0.2

Do I need to downgrade all the clusters to go back to the previous state, or is it OK if we just downgrade the compute cluster to the previous version?

Any advice on the best steps forward would greatly help.

Thanks,

Lohit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From Dwayne.Hart at med.mun.ca Tue May 22 16:45:07 2018
From: Dwayne.Hart at med.mun.ca (Dwayne.Hart at med.mun.ca)
Date: Tue, 22 May 2018 15:45:07 +0000
Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2
In-Reply-To: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark>
References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark>
Message-ID: 

Hi Lohit,

What type of network are you using on the back end to transfer the GPFS traffic?
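
Regarding the question above about whether a downgrade brings back exactly the previous configuration state: one way to take the guesswork out, sketched here only as a suggestion (gpfs1 stands in for the real device name), is to record the current settings before touching anything so a before/after comparison is possible:

# Capture the cluster configuration and the values actually in effect
mmlsconfig > /tmp/mmlsconfig.before
mmdiag --config > /tmp/mmdiag-config.$(hostname).before

# Confirm the file system format is still at the 4.2.x level (the format
# only changes if mmchfs -V is run; mmchconfig release=LATEST raises the
# cluster release level separately)
mmlsfs gpfs1 -V

# Collect debug data for the open PMR while the cluster is in this state
gpfs.snap

mmdiag --config reports per-node values, so it needs to run on each node of interest. None of this addresses the hang itself; it only makes the rollback comparison concrete.
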
Best, Dwayne From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Tuesday, May 22, 2018 1:13 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 22 17:40:26 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 12:40:26 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> 10G Ethernet. Thanks, Lohit On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: > Hi Lohit, > > What type of network are you using on the back end to transfer the GPFS traffic? 
> > Best, > Dwayne > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > Sent: Tuesday, May 22, 2018 1:13 PM > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 > > Hello All, > > We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) > Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) > The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. > > I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. > However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. > Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. > > One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. > Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. > > Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. > However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. > > Can downgrading GPFS take us back to exactly the previous GPFS config state? > With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? > Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 > > Our previous state: > > 2 Storage clusters - 4.2.3.2 > 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) > > Our current state: > > 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) > 1 Compute cluster - 5.0.0.2 > > Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? > > Any advice on the best steps forward, would greatly help. > > Thanks, > > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dwayne.Hart at med.mun.ca Tue May 22 17:54:43 2018 From: Dwayne.Hart at med.mun.ca (Dwayne.Hart at med.mun.ca) Date: Tue, 22 May 2018 16:54:43 +0000 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. 
Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> , <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> Message-ID: We are having issues with ESS/Mellanox implementation and were curious as to what you were working with from a network perspective. Best, Dwayne ? Dwayne Hart | Systems Administrator IV CHIA, Faculty of Medicine Memorial University of Newfoundland 300 Prince Philip Drive St. John?s, Newfoundland | A1B 3V6 Craig L Dobbin Building | 4M409 T 709 864 6631 On May 22, 2018, at 2:10 PM, "valleru at cbio.mskcc.org" > wrote: 10G Ethernet. Thanks, Lohit On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: Hi Lohit, What type of network are you using on the back end to transfer the GPFS traffic? Best, Dwayne From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Tuesday, May 22, 2018 1:13 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? 
or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 22 19:16:28 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 14:16:28 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> Message-ID: <7cb337ab-7824-40a6-9bbf-b2cd62ec97cf@Spark> Thank Dwayne. I don?t think, we are facing anything else from network perspective as of now. We were seeing deadlocks initially when we upgraded to 5.0, but it might not be because of network. We also see deadlocks now, but they are mostly caused due to high waiters i believe. I have temporarily disabled deadlocks. Thanks, Lohit On May 22, 2018, 12:54 PM -0400, Dwayne.Hart at med.mun.ca, wrote: > We are having issues with ESS/Mellanox implementation and were curious as to what you were working with from a network perspective. > > Best, > Dwayne > ? > Dwayne Hart | Systems Administrator IV > > CHIA, Faculty of Medicine > Memorial University of Newfoundland > 300 Prince Philip Drive > St. John?s, Newfoundland | A1B 3V6 > Craig L Dobbin Building | 4M409 > T 709 864 6631 > > On May 22, 2018, at 2:10 PM, "valleru at cbio.mskcc.org" wrote: > > > 10G Ethernet. > > > > Thanks, > > Lohit > > > > On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: > > > Hi Lohit, > > > > > > What type of network are you using on the back end to transfer the GPFS traffic? > > > > > > Best, > > > Dwayne > > > > > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > > > Sent: Tuesday, May 22, 2018 1:13 PM > > > To: gpfsug main discussion list > > > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 > > > > > > Hello All, > > > > > > We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) > > > Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) > > > The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. > > > > > > I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. > > > However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. > > > Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. > > > > > > One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. 
> > > Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. > > > > > > Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. > > > However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. > > > > > > Can downgrading GPFS take us back to exactly the previous GPFS config state? > > > With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? > > > Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 > > > > > > Our previous state: > > > > > > 2 Storage clusters - 4.2.3.2 > > > 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) > > > > > > Our current state: > > > > > > 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) > > > 1 Compute cluster - 5.0.0.2 > > > > > > Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? > > > > > > Any advice on the best steps forward, would greatly help. > > > > > > Thanks, > > > > > > Lohit > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From hopii at interia.pl Tue May 22 20:43:52 2018 From: hopii at interia.pl (hopii at interia.pl) Date: Tue, 22 May 2018 21:43:52 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: References: Message-ID: Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. 
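For anyone else chasing the same symptom, a quick end-to-end check of the kerberized SMB path is a client-side test against the CES export. This is only a sketch; the test user (jas), protocol node (labs1.example.com) and share (share1) are the names from the configuration quoted below, so substitute your own:

# get a fresh ticket for a test user known to the IPA/LDAP server
kinit jas
# list the export using the Kerberos ticket (no password prompt should be needed)
smbclient -k //labs1.example.com/share1 -c 'ls'
# on the protocol node, confirm the cifs service principal is really in the keytab CES reads
klist -k /etc/krb5.keytab | grep -i cifs

If the ticket is obtained but the connection still fails, the gse_init_server keytab error in /var/adm/ras/log.smbd on the protocol node is usually the next place to look.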
Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. Re: Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (Christof Schmitt) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 18 May 2018 20:53:57 +0200 > From: hopii at interia.pl > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos > authentication issue > Message-ID: > Content-Type: text/plain; charset="UTF-8" > > Hi there, > > I'm just learning, trying to configure Spectrum Scale: SMB File Authentication using LDAP (IPA) with kerberos, and been struggling with it for a couple of days, without success. > > Users on spectrum cluster and client machine are authenticated properly, so ldap should be fine. > NFS mount with keberos works with no issues as well. > > But I ran out of ideas how to configure SMB using LDAP with kerberos. > > I could messed up with netbios names, as am not sure which one to use, from cluster node, from protocol node, exactly which one. > But error message seems to point to keytab file, which is present on both, server and client nodes. > > I ran into simillar post, dated few days ago, so I'm not the only one. > https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html > > > Below is my configuration and error message, and I'd appreciate any hints or help. > > Thank you, > d. > > > > Error message from /var/adm/ras/log.smbd > > [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) > GENSEC backend 'ntlmssp_resume_ccache' registered > [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) > smb_gss_krb5_import_cred failed with [Unspecified GSS failure. 
Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] > [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) > Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR > > > > Cluster nodes > spectrum1.example.com RedHat 7.4 > spectrum2.example.com RedHat 7.4 > spectrum3.example.com RedHat 7.4 > > Protocols nodes: > labs1.example.com > lasb2.example.com > labs3.example.com > > > ssipa.example.com Centos 7.5 > > > > spectrum scale server: > > [root at spectrum1 security]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/labs1.example.com at example.com > 1 host/labs1.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs3.example.com at example.com > 1 host/labs3.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs3.example.com at example.com > 1 cifs/labs3.example.com at example.com > > > > > [root at spectrum1 security]# net conf list > [global] > disable netbios = yes > disable spoolss = yes > printcap cache time = 0 > fileid:algorithm = fsname > fileid:fstype allow = gpfs > syncops:onmeta = no > preferred master = no > client NTLMv2 auth = yes > kernel oplocks = no > level2 oplocks = yes > debug hires timestamp = yes > max log size = 100000 > host msdfs = yes > notify:inotify = yes > wide links = no > log writeable files on exit = yes > ctdb locktime warn threshold = 5000 > auth methods = guest sam winbind > smbd:backgroundqueue = False > read only = no > use sendfile = no > strict locking = auto > posix locking = no > large readwrite = yes > aio read size = 1 > aio write size = 1 > force unknown acl user = yes > store dos attributes = yes > map readonly = yes > map archive = yes > map system = yes > map hidden = yes > ea support = yes > groupdb:backend = tdb > winbind:online check timeout = 30 > winbind max domain connections = 5 > winbind max clients = 10000 > dmapi support = no > unix extensions = no > socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 > strict allocate = yes > tdbsam:map builtin = no > aio_pthread:aio open = yes > dfree cache time = 100 > change notify = yes > max open files = 20000 > time_audit:timeout = 5000 > gencache:stabilize_count = 10000 > server min protocol = SMB2_02 > server max protocol = SMB3_02 > vfs objects = shadow_copy2 syncops gpfs fileid time_audit > smbd profiling level = on > log level = 1 > logging = syslog at 0 file > smbd exit on ip drop = yes > durable handles = no > ctdb:smbxsrv_open_global.tdb = false > mangled names = illegal > include system krb5 conf = no > smbd:async search ask sharemode = yes > gpfs:sharemodes = yes > gpfs:leases = yes > gpfs:dfreequota = yes > gpfs:prealloc = yes > gpfs:hsm = yes > gpfs:winattr = yes > gpfs:merge_writeappend = no > fruit:metadata = stream > fruit:nfs_aces = no > fruit:veto_appledouble = no > readdir_attr:aapl_max_access = false > shadow:snapdir = .snapshots > shadow:fixinodes = yes > shadow:snapdirseverywhere = 
yes > shadow:sort = desc > nfs4:mode = simple > nfs4:chown = yes > nfs4:acedup = merge > add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport > change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport > delete share command = /usr/lpp/mmfs/bin/mmcesmmcdelexport > server string = IBM NAS > client use spnego = yes > kerberos method = system keytab > ldap admin dn = cn=Directory Manager > ldap ssl = start tls > ldap suffix = dc=example,dc=com > netbios name = spectrum1 > passdb backend = ldapsam:"ldap://ssipa.example.com" > realm = example.com > security = ADS > dedicated keytab file = /etc/krb5.keytab > password server = ssipa.example.com > idmap:cache = no > idmap config * : read only = no > idmap config * : backend = autorid > idmap config * : range = 10000000-299999999 > idmap config * : rangesize = 1000000 > workgroup = labs1 > ntlm auth = yes > > [share1] > path = /ibm/gpfs1/labs1 > guest ok = no > browseable = yes > comment = jas share > smb encrypt = disabled > > > [root at spectrum1 ~]# mmsmb export list > export path browseable guest ok smb encrypt > share1 /ibm/gpfs1/labs1 yes no disabled > > > > userauth command: > mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com > > > root at spectrum1 ~]# mmuserauth service list > FILE access configuration : LDAP > PARAMETERS VALUES > ------------------------------------------------- > ENABLE_SERVER_TLS true > ENABLE_KERBEROS true > USER_NAME cn=Directory Manager > SERVERS ssipa.example.com > NETBIOS_NAME spectrum1 > BASE_DN dc=example,dc=com > USER_DN none > GROUP_DN none > NETGROUP_DN none > USER_OBJECTCLASS posixAccount > GROUP_OBJECTCLASS posixGroup > USER_NAME_ATTRIB cn > USER_ID_ATTRIB uid > KERBEROS_SERVER ssipa.example.com > KERBEROS_REALM example.com > > OBJECT access not configured > PARAMETERS VALUES > ------------------------------------------------- > > net ads keytab list -> does not show any keys > > > LDAP user information was updated with Samba attributes according to the documentation: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm > > > [root at spectrum1 ~]# pdbedit -L -v > Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 > Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 > No builtin backend found, trying to load plugin > Module 'ldapsam' loaded > db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b > db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 > smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] > StartTLS issued: using a TLS connection > smbldap_open_connection: connection opened > ldap_connect_system: successful connection to the LDAP server > smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] > smbldap_search_paged: search was successful > init_sam_from_ldap: Entry found for user: jas > --------------- > Unix username: jas > NT username: jas > Account Flags: [U ] > User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 > Forcing Primary Group to 'Domain Users' for jas > Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 > Full Name: jas jas > Home Directory: \\spectrum1\jas > HomeDir Drive: > Logon Script: > Profile 
Path: \\spectrum1\jas\profile > Domain: SPECTRUM1 > Account desc: > Workstations: > Munged dial: > Logon time: 0 > Logoff time: never > Kickoff time: never > Password last set: Thu, 17 May 2018 14:08:01 EDT > Password can change: Thu, 17 May 2018 14:08:01 EDT > Password must change: never > Last bad password : 0 > Bad password count : 0 > Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF > > > > Client keytab file: > [root at test ~]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/test.example.com at example.com > 1 host/test.example.com at example.com > > > > ------------------------------ > > Message: 2 > Date: Fri, 18 May 2018 23:05:56 +0000 > From: "Christof Schmitt" > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP > kerberos authentication issue > Message-ID: > > > Content-Type: text/plain; charset="us-ascii" > > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > From alvise.dorigo at psi.ch Wed May 23 08:41:50 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 23 May 2018 07:41:50 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: References: , Message-ID: <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> Hi Felix, yes please, configure jumbo frames for both ports. And yes, I'll check the cable (I used an old one, without any label 25G). thanks, A ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of hopii at interia.pl [hopii at interia.pl] Sent: Tuesday, May 22, 2018 9:43 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. 
Re: Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (Christof Schmitt)
> URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From alvise.dorigo at psi.ch Wed May 23 08:42:59 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 23 May 2018 07:42:59 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> References: , , <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> Message-ID: <83A6EEB0EC738F459A39439733AE804522F15CDF@MBX114.d.ethz.ch> ops sorry! wrong window! please remove it... sorry. Alvise Dorigo ________________________________________ From: Dorigo Alvise (PSI) Sent: Wednesday, May 23, 2018 9:41 AM To: gpfsug main discussion list Subject: RE: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Hi Felix, yes please, configure jumbo frames for both ports. And yes, I'll check the cable (I used an old one, without any label 25G). thanks, A ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of hopii at interia.pl [hopii at interia.pl] Sent: Tuesday, May 22, 2018 9:43 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. 
Re: Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (Christof Schmitt)
> URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From johnbent at gmail.com Wed May 23 10:39:08 2018 From: johnbent at gmail.com (John Bent) Date: Wed, 23 May 2018 03:39:08 -0600 Subject: [gpfsug-discuss] IO500 Call for Submissions Message-ID: IO500 Call for Submissions Deadline: 23 June 2018 AoE The IO500 is now accepting and encouraging submissions for the upcoming IO500 list revealed at ISC 2018 in Frankfurt, Germany. The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit and we look forward to seeing many of you at ISC 2018! Please note that submissions of all size are welcome; the site has customizable sorting so it is possible to submit on a small system and still get a very good per-client score for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below. Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been known within High Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking. The multi-fold goals of the benchmark suite are as follows: * Maximizing simplicity in running the benchmark suite * Encouraging complexity in tuning for performance * Allowing submitters to highlight their ?hero run? performance numbers * Forcing submitters to simultaneously report performance for challenging IO patterns. Specifically, the benchmark suite includes a hero-run of both IOR and mdtest configured however possible to maximize performance and establish an upper-bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower-bound. Finally, it includes a namespace search as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well-measured. Submitters are encouraged to share their tuning insights for publication. The goals of the community are also multi-fold: * Gather historical data for the sake of analysis and to aid predictions of storage futures * Collect tuning information to share valuable performance optimizations across the community * Encourage vendors and designers to optimize for workloads beyond ?hero runs? * Establish bounded expectations for users, procurers, and administrators Once again, we encourage you to submit (see http://io500.org/submission), to join our community, and to attend our BoF ?The IO-500 and the Virtual Institute of I/O? at ISC 2018 where we will announce the second ever IO500 list. The current list includes results from BeeGPFS, DataWarp, IME, Lustre, and Spectrum Scale. 
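For anyone who has not run the underlying benchmarks before, the two main components are ordinary IOR and mdtest runs. The following is only an illustrative hand-run sketch, not the official io500 harness; the process count, transfer and block sizes, and the /gpfs/fs0/io500 path are placeholders to adapt to your system:

# IOR "easy" style pass: file per process, large sequential transfers, write then read
mpirun -np 64 ior -w -r -C -t 1m -b 4g -F -o /gpfs/fs0/io500/ior_easy/testfile
# mdtest "easy" style pass: per-task unique directories, many small file creates/stats/removes
mpirun -np 64 mdtest -n 10000 -u -d /gpfs/fs0/io500/mdt_easy

The prescribed parameter sets and the rules for a valid submission are described with the submission instructions at http://io500.org/submission.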
We hope that the next list has even more! We look forward to answering any questions or concerns you might have. Thank you! IO500 Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From alvise.dorigo at psi.ch Thu May 24 09:45:00 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 24 May 2018 08:45:00 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system Message-ID: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Dear members, at PSI I'm trying to integrate the CES service with our AD authentication system. My understanding, after talking to expert people here, is that I should use the RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The problem is that our ID schema is slightly different than that one described in RFC2307. In the RFC the relevant user identification fields are named "uidNumber" and "gidNumber". But in our AD database schema we have: # egrep 'uid_number|gid_number' /etc/sssd/sssd.conf ldap_user_uid_number = msSFU30UidNumber ldap_user_gid_number = msSFU30GidNumber ldap_group_gid_number = msSFU30GidNumber My question is: is it possible to configure CES to look for the custom field labels (those ones listed above) instead the default ones officially described in rfc2307 ? many thanks. Regards, Alvise Dorigo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ivano.Talamo at psi.ch Thu May 24 14:51:56 2018 From: Ivano.Talamo at psi.ch (Ivano Talamo) Date: Thu, 24 May 2018 15:51:56 +0200 Subject: [gpfsug-discuss] Inter-clusters issue with change of the subnet IP Message-ID: <432c8c12-4d36-d8a7-3c79-61b94aa409bf@psi.ch> Hi all, We currently have an issue with our GPFS clusters. Shortly when we removed/added a node to a cluster we changed IP address for the IPoIB subnet and this broke GPFS. The primary IP didn't change. In details our setup is quite standard: one GPFS cluster with CPU nodes only accessing (via remote cluster mount) different storage clusters. Clusters are on an Infiniband fabric plus IPoIB for communication via the subnet parameter. Yesterday it happened that some nodes were added to the CPU cluster with the correct primary IP addresses but incorrect IPoIB ones. Incorrect in the sense that the IPoIB addresses were already in use by some other nodes in the same CPU cluster. This made all the clusters (not only the CPU one) suffer for a lot of errors, gpfs restarting, file systems being unmounted. Removing the wrong nodes brought the clusters to a stable state. But the real strange thing came when one of these node was reinstalled, configured with the correct IPoIB address and added again to the cluster. At this point (when the node tried to mount the remote filesystems) the issue happened again. In the log files we have lines like: 2018-05-24_10:32:45.520+0200: [I] Accepted and connected to 192.168.x.y Where the IP number 192.168.x.y is the old/incorrect one. And looking at mmdiag --network there are a bunch of lines like the following: 192.168.x.z broken 233 -1 0 0 L With the wrong/old IPs. And this appears on all cluster (CPU and storage ones). Is it possible that the other nodes in the clusters use this outdated information when the reinstalled node is brought back into the cluster? Is there any kind of timeout, so that after sometimes this information is purged? Or is there any procedure that we could use to now introduce the nodes? 
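In case it helps to reproduce the picture, the checks being run on each cluster are roughly the following (a sketch only; ib0 is assumed to be the IPoIB interface and <nodename> is a placeholder):

# what the cluster currently believes about its members
mmlscluster
# connections the daemon still holds, including the broken ones
mmdiag --network | grep -i broken
# on the node being re-added, the addresses it will advertise
getent hosts <nodename>
ip addr show ib0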
Otherwise we see no other option but to restart GPFS on all the nodes of all clusters one by one to make sure that the incorrect information goes away. Thanks, Ivano From skylar2 at uw.edu Thu May 24 15:16:32 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Thu, 24 May 2018 14:16:32 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Message-ID: <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> I haven't needed to change the LDAP attributes that CES uses, but I do see --user-id-attrib in the mmuserauth documentation. Unfortunately, I don't see an equivalent one for gidNumber. On Thu, May 24, 2018 at 08:45:00AM +0000, Dorigo Alvise (PSI) wrote: > Dear members, > at PSI I'm trying to integrate the CES service with our AD authentication system. > > My understanding, after talking to expert people here, is that I should use the RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The problem is that our ID schema is slightly different than that one described in RFC2307. In the RFC the relevant user identification fields are named "uidNumber" and "gidNumber". But in our AD database schema we have: > > # egrep 'uid_number|gid_number' /etc/sssd/sssd.conf > ldap_user_uid_number = msSFU30UidNumber > ldap_user_gid_number = msSFU30GidNumber > ldap_group_gid_number = msSFU30GidNumber > > My question is: is it possible to configure CES to look for the custom field labels (those ones listed above) instead the default ones officially described in rfc2307 ? > > many thanks. > Regards, > > Alvise Dorigo > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From jonathan.buzzard at strath.ac.uk Thu May 24 15:46:32 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 24 May 2018 15:46:32 +0100 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> Message-ID: <1527173192.28106.18.camel@strath.ac.uk> On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > I haven't needed to change the LDAP attributes that CES uses, but I > do see --user-id-attrib in the mmuserauth documentation. > Unfortunately, I don't see an equivalent one for gidNumber. > Is it not doing the "Samba thing" where your GID is the GID of your primary Active Directory group? This is usually "Domain Users" but not always. Basically Samba ignores the separate GID field in RFC2307bis, so one imagines the options for changing the LDAP attributes are none existent. I know back in the day this had me stumped for a while because unless you assign a GID number to the users primary group then Winbind does not return anything, aka a "getent passwd" on the user fails. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From skylar2 at uw.edu Thu May 24 15:51:09 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Thu, 24 May 2018 14:51:09 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <1527173192.28106.18.camel@strath.ac.uk> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> <1527173192.28106.18.camel@strath.ac.uk> Message-ID: <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote: > On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > > I haven't needed to change the LDAP attributes that CES uses, but I > > do see --user-id-attrib in the mmuserauth documentation. > > Unfortunately, I don't see an equivalent one for gidNumber. > > > > Is it not doing the "Samba thing" where your GID is the GID of your > primary Active Directory group? This is usually "Domain Users" but not > always. > > Basically Samba ignores the separate GID field in RFC2307bis, so one > imagines the options for changing the LDAP attributes are none > existent. > > I know back in the day this had me stumped for a while because unless > you assign a GID number to the users primary group then Winbind does > not return anything, aka a "getent passwd" on the user fails. At least for us, it seems to be using the gidNumber attribute of our users. On the back-end, of course, it is Samba, but I don't know that there are mm* commands available for all of the tunables one can set in smb.conf. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From S.J.Thompson at bham.ac.uk Thu May 24 17:46:14 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Thu, 24 May 2018 16:46:14 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> <1527173192.28106.18.camel@strath.ac.uk>, <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> Message-ID: You can change them using the normal SMB commands, from the appropriate bin directory, whether this is supported is another matter. We have one parameter set this way but I forgot which. Simkn ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Skylar Thompson [skylar2 at uw.edu] Sent: 24 May 2018 15:51 To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Question concerning integration of CES with AD authentication system On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote: > On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > > I haven't needed to change the LDAP attributes that CES uses, but I > > do see --user-id-attrib in the mmuserauth documentation. > > Unfortunately, I don't see an equivalent one for gidNumber. > > > > Is it not doing the "Samba thing" where your GID is the GID of your > primary Active Directory group? This is usually "Domain Users" but not > always. > > Basically Samba ignores the separate GID field in RFC2307bis, so one > imagines the options for changing the LDAP attributes are none > existent. 
> > I know back in the day this had me stumped for a while because unless > you assign a GID number to the users primary group then Winbind does > not return anything, aka a "getent passwd" on the user fails. At least for us, it seems to be using the gidNumber attribute of our users. On the back-end, of course, it is Samba, but I don't know that there are mm* commands available for all of the tunables one can set in smb.conf. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From christof.schmitt at us.ibm.com Thu May 24 18:07:02 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 24 May 2018 17:07:02 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <1527173192.28106.18.camel@strath.ac.uk> References: <1527173192.28106.18.camel@strath.ac.uk>, <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch><20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> Message-ID: An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Thu May 24 18:14:28 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 24 May 2018 17:14:28 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Message-ID: An HTML attachment was scrubbed... URL: From scale at us.ibm.com Fri May 25 08:01:43 2018 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 25 May 2018 15:01:43 +0800 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: If you didn't run mmchconfig release=LATEST and didn't change the fs version, then you can downgrade either or both of them. Thanks. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 05/22/2018 11:54 PM Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. 
( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Fri May 25 13:24:31 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 25 May 2018 12:24:31 +0000 Subject: [gpfsug-discuss] IPv6 not supported still? Message-ID: Is the FAQ woefully outdated with respect to this when it says IPv6 is not supported for virtually any scenario (GUI, NFS, CES, TCT amongst others). Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knop at us.ibm.com Fri May 25 14:24:11 2018 From: knop at us.ibm.com (Felipe Knop) Date: Fri, 25 May 2018 09:24:11 -0400 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Message-ID: All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Fri May 25 15:29:16 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 25 May 2018 14:29:16 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Fri May 25 21:01:56 2018 From: knop at us.ibm.com (Felipe Knop) Date: Fri, 25 May 2018 16:01:56 -0400 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: Richard, As far as I could determine: Protocol servers for Scale can be at RHEL 7.4 today Protocol servers for Scale will be able to be at RHEL 7.5 once the mid-June PTFs are released On ESS, RHEL 7.3 is still the highest level, with support for higher RHEL 7.x levels still being implemented/validated Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Sobey, Richard A" To: gpfsug main discussion list Date: 05/25/2018 10:29 AM Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . 
The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Fri May 25 21:06:10 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Fri, 25 May 2018 20:06:10 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: , Message-ID: Hi Richard, Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed it, so when support comes along, do it with care! Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sobey, Richard A [r.sobey at imperial.ac.uk] Sent: 25 May 2018 15:29 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From jonathan.buzzard at strath.ac.uk Fri May 25 21:37:05 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 25 May 2018 21:37:05 +0100 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote: > Hi Richard, > > Ours run on 7.4 without issue. We had one upgrade to 7.5 packages > (didn't reboot into new kernel) and it broke, reverting it back to a > 7.4 release fixed it, so when support comes along, do it with care! > I will at this point chime in that DSS is on 7.4 at the moment, so I am not surprised ESS is just fine too. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From S.J.Thompson at bham.ac.uk Fri May 25 21:42:49 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Fri, 25 May 2018 20:42:49 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> References: , <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> Message-ID: I was talking about protocols. But yes, DSS is also supported and runs fine on 7.4. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jonathan Buzzard [jonathan.buzzard at strath.ac.uk] Sent: 25 May 2018 21:37 To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote: > Hi Richard, > > Ours run on 7.4 without issue. We had one upgrade to 7.5 packages > (didn't reboot into new kernel) and it broke, reverting it back to a > 7.4 release fixed it, so when support comes along, do it with care! > I will at this point chime in that DSS is on 7.4 at the moment, so I am not surprised ESS is just fine too. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From jonathan at buzzard.me.uk Fri May 25 22:08:54 2018 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 25 May 2018 22:08:54 +0100 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> Message-ID: <4d3aaaad-898d-d27d-04bc-729f01cef868@buzzard.me.uk> On 25/05/18 21:42, Simon Thompson (IT Research Support) wrote: > I was talking about protocols. > > But yes, DSS is also supported and runs fine on 7.4. Sure but I believe protocols will run fine on 7.4. On the downside DSS is still 4.2.x, grrrrrrrr as we have just implemented it double grrrr. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From r.sobey at imperial.ac.uk Sat May 26 08:32:05 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Sat, 26 May 2018 07:32:05 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: , , Message-ID: Thanks All! The faq still seems to imply that 7.3 is the latest supported release. Section A2.5 specifically. Other areas of the FAQ which I've now seen do indeed say 7.4. Have a great weekend. Get Outlook for Android ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Simon Thompson (IT Research Support) Sent: Friday, May 25, 2018 9:06:10 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Richard, Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed it, so when support comes along, do it with care! 
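In the meantime, one way to guard against an unplanned move to the 7.5 packages is to hold them back in yum until the efix is available; a rough sketch (the package specs below are examples and will vary by site):

# In /etc/yum.conf, stop a routine "yum update" from pulling in the
# RHEL 7.5 kernel and release marker before the GPFS efix is in place
exclude=kernel* redhat-release*

# Or, with the versionlock plugin, pin the running 7.4 kernel
# (3.10.0-693 is the 7.4 kernel series; adjust to what is installed)
yum install yum-plugin-versionlock
yum versionlock add kernel-3.10.0-693*

Either hold can be lifted again once the mid-June PTFs or the efix are applied.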
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sobey, Richard A [r.sobey at imperial.ac.uk] Sent: 25 May 2018 15:29 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Mon May 28 08:59:03 2018 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Mon, 28 May 2018 09:59:03 +0200 Subject: [gpfsug-discuss] User Group Meeting at ISC2018 Frankfurt Message-ID: Greetings: IBM is happy to announce the agenda for the joint "IBM Spectrum Scale and IBM Spectrum LSF User Group Meeting" at ISC in Frankfurt, Germany. We will finish on time to attend the opening reception. As with other user group meetings, the agenda includes user stories, updates on IBM Spectrum Scale & IBM Spectrum LSF, and access to IBM experts and your peers. Please join us! To attend please register here so that we can have an accurate count of attendees: https://www-01.ibm.com/events/wwe/grp/grp308.nsf/Registration.xsp?openform&seminar=AA4A99ES We are still looking for two customers to talk about their experience with Spectrum Scale and/or Spectrum LSF. Please send me a personal mail, if you are interested to talk. Monday June 25th, 2018 - 14:00-17:30 - Conference Room Applaus 14:00-14:15 Welcome Gabor Samu (IBM) / Ulf Troppens (IBM) 14:15-14:45 What is new in Spectrum Scale? Mathias Dietz (IBM) 14:45-15:00 News from Lenovo Storage Michael Hennicke (Lenovo) 15:00-15:15 What is new in ESS? Christopher Maestas (IBM) 15:15-15:35 Customer talk 1 TBD 15:35-15:55 Customer talk 2 TBD 15:55-16:25 What is new in Spectrum Computing? Bill McMillan (IBM) 16:25-16:55 Field Update Olaf Weiser (IBM) 16:55-17:25 Spectrum Scale enhancements for CORAL Sven Oehme (IBM) 17:25-17:30 Wrap-up Gabor Samu (IBM) / Ulf Troppens (IBM) Looking forward to see some of you there. Best, Ulf -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janfrode at tanso.net Mon May 28 09:23:00 2018 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 28 May 2018 10:23:00 +0200 Subject: [gpfsug-discuss] mmapplypolicy --choice-algorithm fast Message-ID: Just found the Spectrum Scale policy "best practices" presentation from the latest UG: http://files.gpfsug.org/presentations/2018/USA/SpectrumScalePolicyBP.pdf which mentions: "mmapplypolicy ? --choice-algorithm fast && ... WEIGHT(0) ? (avoids final sort of all selected files by weight)" and looking at the man-page I see that "fast" "Works together with the parallelized ?g /shared?tmp ?N node?list selection method." I do a daily listing of all files, and avoiding unneccessary sorting would be great. So, what is really needed to avoid sorting for a file-list policy? Just "--choice-algorithm fast"? Also WEIGHT(0) in policy required? Also a ?g /shared?tmp ? -jf -------------- next part -------------- An HTML attachment was scrubbed... URL: From janusz.malka at desy.de Tue May 29 14:30:35 2018 From: janusz.malka at desy.de (Janusz Malka) Date: Tue, 29 May 2018 15:30:35 +0200 (CEST) Subject: [gpfsug-discuss] AFM relation on the fs level Message-ID: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> Dear all, Is it possible to build the AFM relation on the file system level ? I mean root file set of one file system as AFM cache and mount point of second as AFM home. Best regards, Janusz -- ------------------------------------------------------------------------- Janusz Tomasz Malka IT-Scientific Computing Deutsches Elektronen-Synchrotron Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 22607 Hamburg Germany phone: +49 40 8998 3818 e-mail: janusz.malka at desy.de ------------------------------------------------------------------------- From vpuvvada at in.ibm.com Wed May 30 04:23:28 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 30 May 2018 08:53:28 +0530 Subject: [gpfsug-discuss] AFM relation on the fs level In-Reply-To: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> References: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> Message-ID: AFM cannot be enabled at root fileset level today. ~Venkat (vpuvvada at in.ibm.com) From: Janusz Malka To: gpfsug main discussion list Date: 05/29/2018 07:06 PM Subject: [gpfsug-discuss] AFM relation on the fs level Sent by: gpfsug-discuss-bounces at spectrumscale.org Dear all, Is it possible to build the AFM relation on the file system level ? I mean root file set of one file system as AFM cache and mount point of second as AFM home. Best regards, Janusz -- ------------------------------------------------------------------------- Janusz Tomasz Malka IT-Scientific Computing Deutsches Elektronen-Synchrotron Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 22607 Hamburg Germany phone: +49 40 8998 3818 e-mail: janusz.malka at desy.de ------------------------------------------------------------------------- _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 12:52:33 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 11:52:33 +0000 Subject: [gpfsug-discuss] AFM negative file caching Message-ID: Hi All, We have a file-set which is an AFM fileset and contains installed software. 
We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. /gpfs/apps/somesoftware/v1.2/lib Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 12:57:27 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 11:57:27 +0000 Subject: [gpfsug-discuss] AFM negative file caching Message-ID: <2686836B-9BD3-4B9C-A5D9-7C3EF6E6D69B@bham.ac.uk> p.s. I wasn?t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval would be the right thing if it?s a file/directory that doesn?t exist? Simon From: on behalf of "Simon Thompson (IT Research Support)" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Wednesday, 30 May 2018 at 12:52 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] AFM negative file caching Hi All, We have a file-set which is an AFM fileset and contains installed software. We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. /gpfs/apps/somesoftware/v1.2/lib Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From peserocka at gmail.com Wed May 30 13:26:46 2018 From: peserocka at gmail.com (Peter Serocka) Date: Wed, 30 May 2018 14:26:46 +0200 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? (Not to get started on using LD_LIBRARY_PATH in the first place?) ? Peter > On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: > > Hi All, > > We have a file-set which is an AFM fileset and contains installed software. > > We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. > > /gpfs/apps/somesoftware/v1.2/lib > > Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. 
We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. > > Thanks > > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From david_johnson at brown.edu Wed May 30 13:43:33 2018 From: david_johnson at brown.edu (david_johnson at brown.edu) Date: Wed, 30 May 2018 08:43:33 -0400 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From vpuvvada at in.ibm.com Wed May 30 15:29:55 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 30 May 2018 19:59:55 +0530 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> References: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Message-ID: >I wasn?t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval would be the right thing if it?s a file/directory that doesn?t exist? These refresh intervals applies to all the lookups and not just for negative lookups. For working around in AFM itself, you could try setting these refresh intervals to higher value if cache does not need to validate with home often. 
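For reference, both of these are per-fileset AFM attributes, so they can be raised on just the software fileset rather than cluster-wide. A possible way to set them to one day, assuming a file system gpfs01 and a cache fileset called apps (both names are examples only; depending on the release the fileset may need to be unlinked and relinked for the change to take effect, so check the mmchfileset documentation first):

# Raise lookup revalidation to one day (86400 seconds) on the cache fileset
mmchfileset gpfs01 apps -p afmDirLookupRefreshInterval=86400 \
                        -p afmFileLookupRefreshInterval=86400

# Verify the AFM attributes afterwards
mmlsfileset gpfs01 apps --afm -L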
~Venkat (vpuvvada at in.ibm.com) From: david_johnson at brown.edu To: gpfsug main discussion list Date: 05/30/2018 06:14 PM Subject: Re: [gpfsug-discuss] AFM negative file caching Sent by: gpfsug-discuss-bounces at spectrumscale.org Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 15:30:40 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 14:30:40 +0000 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: So we use easybuild to build software and dependency stacks (and modules to do all this), yeah I did wonder about putting it first, but my worry is that other "stuff" installed locally that dumps in there might then break the dependency stack. I was thinking maybe we can create something local with select symlinks and add that to the path ... but I was hoping we could do some sort of negative caching. Simon ?On 30/05/2018, 13:26, "gpfsug-discuss-bounces at spectrumscale.org on behalf of peserocka at gmail.com" wrote: As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? (Not to get started on using LD_LIBRARY_PATH in the first place?) ? Peter > On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: > > Hi All, > > We have a file-set which is an AFM fileset and contains installed software. > > We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. 
> > /gpfs/apps/somesoftware/v1.2/lib > > Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. > > Thanks > > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Sandra.McLaughlin at astrazeneca.com Wed May 30 16:03:32 2018 From: Sandra.McLaughlin at astrazeneca.com (McLaughlin, Sandra M) Date: Wed, 30 May 2018 15:03:32 +0000 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Message-ID: If it?s any help, Simon, I had a very similar problem, and I set afmDirLookupRefreshIntervaland afmFileLookupRefreshInterval to one day on an AFM cache fileset which only had software on it. It did make a difference to the users. And if you are really desperate to push an application upgrade to the cache fileset, there are other ways to do it. Sandra From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Venkateswara R Puvvada Sent: 30 May 2018 15:30 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM negative file caching >I wasn?t sure if afmDirLookupRefreshIntervaland afmFileLookupRefreshIntervalwould be the right thing if it?s a file/directory that doesn?t exist? These refresh intervals applies to all the lookups and not just for negative lookups. For working around in AFM itself, you could try setting these refresh intervals to higher value if cache does not need to validate with home often. ~Venkat (vpuvvada at in.ibm.com) From: david_johnson at brown.edu To: gpfsug main discussion list > Date: 05/30/2018 06:14 PM Subject: Re: [gpfsug-discuss] AFM negative file caching Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka > wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) > wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. 
We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ AstraZeneca UK Limited is a company incorporated in England and Wales with registered number:03674842 and its registered office at 1 Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge, CB2 0AA. This e-mail and its attachments are intended for the above named recipient only and may contain confidential and privileged information. If they have come to you in error, you must not copy or show them to anyone; instead, please reply to this e-mail, highlighting the error to the sender and then immediately delete the message. For information about how AstraZeneca UK Limited and its affiliates may process information, personal data and monitor communications, please see our privacy notice at www.astrazeneca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_johnson at brown.edu Thu May 31 19:21:42 2018 From: david_johnson at brown.edu (David Johnson) Date: Thu, 31 May 2018 14:21:42 -0400 Subject: [gpfsug-discuss] recommendations for gpfs 5.x GUI and perf/health monitoring collector nodes Message-ID: We are planning to bring up the new ZIMon tools on our 450+ node cluster, and need to purchase new nodes to run the collector federation and GUI function on. What would you choose as a platform for this? ? memory size? ? local disk space ? SSD? shared? ? net attach ? 10Gig? 25Gig? IB? ? CPU horse power ? single or dual socket? I think I remember somebody in Cambridge UG meeting saying 150 nodes per collector as a rule of thumb, so we?re guessing a federation of 4 nodes would do it. Does this include the GUI host(s) or are those separate? Finally, we?re still using client/server based licensing model, do these nodes count as clients? Thanks, ? ddj Dave Johnson Brown University From valleru at cbio.mskcc.org Tue May 1 15:34:39 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 1 May 2018 10:34:39 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> Message-ID: <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. 
> > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.smith at framestore.com Wed May 2 11:06:20 2018 From: peter.smith at framestore.com (Peter Smith) Date: Wed, 2 May 2018 11:06:20 +0100 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: "how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand)" +1. Pointers appreciated! :-) On 10 April 2018 at 17:22, Aaron Knister wrote: > I wonder if this is an artifact of pagepool exhaustion which makes me ask > the question-- how do I see how much of the pagepool is in use and by what? > I've looked at mmfsadm dump and mmdiag --memory and neither has provided me > the information I'm looking for (or at least not in a format I understand). > > -Aaron > > On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] > wrote: > >> I hate admitting this but I?ve found something that?s got me stumped. >> >> We have a user running an MPI job on the system. Each rank opens up >> several output files to which it writes ASCII debug information. The net >> result across several hundred ranks is an absolute smattering of teeny tiny >> I/o requests to te underlying disks which they don?t appreciate. >> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >> don?t understand is why these write requests aren?t getting batched up into >> larger write requests to the underlying disks. >> >> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >> requests before they hit the NSD. >> >> As best I can tell the application isn?t doing any fsync?s and isn?t >> doing direct io to these files. 
>> >> Can anyone explain why seemingly very similar io workloads appear to >> result in well formed NSD I/O in one case and awful I/o in another? >> >> Thanks! >> >> -Stumped >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> > -- > Aaron Knister > NASA Center for Climate Simulation (Code 606.2) > Goddard Space Flight Center > (301) 286-2776 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- [image: Framestore] Peter Smith ? Senior Systems Engineer London ? New York ? Los Angeles ? Chicago ? Montr?al T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 <+44%20%280%297816%20123009> 28 Chancery Lane, London WC2A 1LB Twitter ? Facebook ? framestore.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Wed May 2 13:09:21 2018 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Wed, 2 May 2018 14:09:21 +0200 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: mmfsadm dump pgalloc might get you one step further ... Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Thomas Wolter, Sven Schoo? Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: Peter Smith To: gpfsug main discussion list Date: 02/05/2018 12:10 Subject: Re: [gpfsug-discuss] Confusing I/O Behavior Sent by: gpfsug-discuss-bounces at spectrumscale.org "how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand)" +1. Pointers appreciated! :-) On 10 April 2018 at 17:22, Aaron Knister wrote: I wonder if this is an artifact of pagepool exhaustion which makes me ask the question-- how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand). -Aaron On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] wrote: I hate admitting this but I?ve found something that?s got me stumped. We have a user running an MPI job on the system. Each rank opens up several output files to which it writes ASCII debug information. The net result across several hundred ranks is an absolute smattering of teeny tiny I/o requests to te underlying disks which they don?t appreciate. Performance plummets. The I/o requests are 30 to 80 bytes in size. What I don?t understand is why these write requests aren?t getting batched up into larger write requests to the underlying disks. If I do something like ?df if=/dev/zero of=foo bs=8k? 
on a node I see that the nasty unaligned 8k io requests are batched up into nice 1M I/o requests before they hit the NSD. As best I can tell the application isn?t doing any fsync?s and isn?t doing direct io to these files. Can anyone explain why seemingly very similar io workloads appear to result in well formed NSD I/O in one case and awful I/o in another? Thanks! -Stumped _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Peter Smith ? Senior Systems Engineer London ? New York ? Los Angeles ? Chicago ? Montr?al T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 28 Chancery Lane, London WC2A 1LB Twitter ? Facebook ? framestore.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Wed May 2 13:25:42 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 2 May 2018 12:25:42 +0000 Subject: [gpfsug-discuss] AFM with clones Message-ID: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> Hi, We are looking at providing an AFM cache of a home which has a number of cloned files. From the docs: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm ? We can see that ?The mmclone command is not supported on AFM cache and AFM DR primary filesets. Clones created at home for AFM filesets are treated as separate files in the cache.? So it?s no surprise that when we pre-cache the files, they space consumed is different. What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the copy-on-write clone, or do we accidentally end up shipping the whole file back? (note we are using IW mode) Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Wed May 2 13:31:37 2018 From: oehmes at gmail.com (Sven Oehme) Date: Wed, 02 May 2018 12:31:37 +0000 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: GPFS doesn't do flush on close by default unless explicit asked by the application itself, but you can configure that . mmchconfig flushOnClose=yes if you use O_SYNC or O_DIRECT then each write ends up on the media before we return. sven On Wed, Apr 11, 2018 at 7:06 AM Peter Serocka wrote: > Let?s keep in mind that line buffering is a concept > within the standard C library; > if every log line triggers one write(2) system call, > and it?s not direct io, then multiple write still get > coalesced into few larger disk writes (as with the dd example). > > A logging application might choose to close(2) > a log file after each write(2) ? that produces > a different scenario, where the file system might > guarantee that the data has been written to disk > when close(2) return a success. > > (Local Linux file systems do not do this with default mounts, > but networked filesystems usually do.) 
> > Aaron, can you trace your application to see > what is going on in terms of system calls? > > ? Peter > > > > On 2018 Apr 10 Tue, at 18:28, Marc A Kaplan wrote: > > > > Debug messages are typically unbuffered or "line buffered". If that is > truly causing a performance problem AND you still want to collect the > messages -- you'll need to find a better way to channel and collect those > messages. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Wed May 2 13:34:56 2018 From: oehmes at gmail.com (Sven Oehme) Date: Wed, 02 May 2018 12:34:56 +0000 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: a few more weeks and we have a better answer than dump pgalloc ;-) On Wed, May 2, 2018 at 6:07 AM Peter Smith wrote: > "how do I see how much of the pagepool is in use and by what? I've looked > at mmfsadm dump and mmdiag --memory and neither has provided me the > information I'm looking for (or at least not in a format I understand)" > > +1. Pointers appreciated! :-) > > On 10 April 2018 at 17:22, Aaron Knister wrote: > >> I wonder if this is an artifact of pagepool exhaustion which makes me ask >> the question-- how do I see how much of the pagepool is in use and by what? >> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me >> the information I'm looking for (or at least not in a format I understand). >> >> -Aaron >> >> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE >> CORP] wrote: >> >>> I hate admitting this but I?ve found something that?s got me stumped. >>> >>> We have a user running an MPI job on the system. Each rank opens up >>> several output files to which it writes ASCII debug information. The net >>> result across several hundred ranks is an absolute smattering of teeny tiny >>> I/o requests to te underlying disks which they don?t appreciate. >>> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >>> don?t understand is why these write requests aren?t getting batched up into >>> larger write requests to the underlying disks. >>> >>> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >>> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >>> requests before they hit the NSD. >>> >>> As best I can tell the application isn?t doing any fsync?s and isn?t >>> doing direct io to these files. >>> >>> Can anyone explain why seemingly very similar io workloads appear to >>> result in well formed NSD I/O in one case and awful I/o in another? >>> >>> Thanks! >>> >>> -Stumped >>> >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> >> -- >> Aaron Knister >> NASA Center for Climate Simulation (Code 606.2) >> Goddard Space Flight Center >> (301) 286-2776 >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > > > > -- > [image: Framestore] Peter Smith ? 
Senior Systems Engineer > London ? New York ? Los Angeles ? Chicago ? Montr?al > T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 > <+44%20%280%297816%20123009> > 28 Chancery Lane, London WC2A 1LB > > Twitter ? Facebook > ? framestore.com > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alevin at gmail.com Wed May 2 17:10:48 2018 From: alevin at gmail.com (Alex Levin) Date: Wed, 2 May 2018 12:10:48 -0400 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: Aaron, Peter, I'm monitoring the pagepool usage as: buffers=`/usr/lpp/mmfs/bin/mmfsadm dump buffers | grep bufLen | awk '{ SUM += $7} END { print SUM }'` result in bytes If your pagepool is huge - the execution could take some time ( ~5 sec on 100Gb pagepool ) --Alex On Wed, May 2, 2018 at 6:06 AM, Peter Smith wrote: > "how do I see how much of the pagepool is in use and by what? I've looked > at mmfsadm dump and mmdiag --memory and neither has provided me the > information I'm looking for (or at least not in a format I understand)" > > +1. Pointers appreciated! :-) > > On 10 April 2018 at 17:22, Aaron Knister wrote: > >> I wonder if this is an artifact of pagepool exhaustion which makes me ask >> the question-- how do I see how much of the pagepool is in use and by what? >> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me >> the information I'm looking for (or at least not in a format I understand). >> >> -Aaron >> >> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE >> CORP] wrote: >> >>> I hate admitting this but I?ve found something that?s got me stumped. >>> >>> We have a user running an MPI job on the system. Each rank opens up >>> several output files to which it writes ASCII debug information. The net >>> result across several hundred ranks is an absolute smattering of teeny tiny >>> I/o requests to te underlying disks which they don?t appreciate. >>> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >>> don?t understand is why these write requests aren?t getting batched up into >>> larger write requests to the underlying disks. >>> >>> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >>> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >>> requests before they hit the NSD. >>> >>> As best I can tell the application isn?t doing any fsync?s and isn?t >>> doing direct io to these files. >>> >>> Can anyone explain why seemingly very similar io workloads appear to >>> result in well formed NSD I/O in one case and awful I/o in another? >>> >>> Thanks! >>> >>> -Stumped >>> >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> >> -- >> Aaron Knister >> NASA Center for Climate Simulation (Code 606.2) >> Goddard Space Flight Center >> (301) 286-2776 >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > > > > -- > [image: Framestore] Peter Smith ? Senior Systems Engineer > London ? New York ? Los Angeles ? Chicago ? Montr?al > T +44 (0)20 7208 2600 ? 
M +44 (0)7816 123009 > <+44%20%280%297816%20123009> > 28 Chancery Lane, London WC2A 1LB > > Twitter ? Facebook > ? framestore.com > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Wed May 2 18:48:01 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 2 May 2018 23:18:01 +0530 Subject: [gpfsug-discuss] AFM with clones In-Reply-To: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> References: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> Message-ID: >What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the >copy-on-write clone, or do we accidentally end up shipping the whole file back? IW mode revalidation detects that file is changed at home, all data blocks are cleared (punches the hole) and the next read pulls whole file from the home. ~Venkat (vpuvvada at in.ibm.com) From: "Simon Thompson (IT Research Support)" To: "gpfsug-discuss at spectrumscale.org" Date: 05/02/2018 05:55 PM Subject: [gpfsug-discuss] AFM with clones Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We are looking at providing an AFM cache of a home which has a number of cloned files. From the docs: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm ? We can see that ?The mmclone command is not supported on AFM cache and AFM DR primary filesets. Clones created at home for AFM filesets are treated as separate files in the cache.? So it?s no surprise that when we pre-cache the files, they space consumed is different. What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the copy-on-write clone, or do we accidentally end up shipping the whole file back? (note we are using IW mode) Thanks Simon_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=92LOlNh2yLzrrGTDA7HnfF8LFr55zGxghLZtvZcZD7A&m=yLFsan-7rzFW2Nw9k8A-SHKQfNQonl9v_hk9hpXLYjQ&s=7w_-SsCLeUNBZoFD3zUF5ika7PTUIQkKuOhuz-5pr1I&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Thu May 3 10:43:31 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Thu, 3 May 2018 09:43:31 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used Message-ID: Hi all, I'd be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you've employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... 
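A rough way to watch this from the cache side is to compare a file's apparent size with the blocks actually allocated in the cache fileset; once revalidation punches the data blocks out, the file shows up as sparse until it is read again. The paths below are examples only:

# Apparent size versus blocks allocated for a single cached file
stat -c '%n size=%s allocated_blocks=%b' /gpfs/cache/apps/somefile

# The same comparison for a whole directory tree
du -sh --apparent-size /gpfs/cache/apps
du -sh /gpfs/cache/apps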
URL: From MDIETZ at de.ibm.com Thu May 3 12:41:28 2018 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Thu, 3 May 2018 13:41:28 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? 
Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Thu May 3 14:03:09 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Thu, 3 May 2018 09:03:09 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen > On May 3, 2018, at 5:43 AM, Sobey, Richard A wrote: > > Hi all, > > I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. > > On-list or off is fine with me. > > Thanks > Richard > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Thu May 3 15:25:03 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 3 May 2018 14:25:03 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: Hi Lohit, Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz Sent: Thursday, May 03, 2018 6:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). 
However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says "You can configure one storage cluster and up to five protocol clusters (current limit)." 
Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu May 3 15:37:11 2018 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 3 May 2018 16:37:11 +0200 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: Since I'm pretty proud of my awk one-liner, and maybe it's useful for this kind of charging, here's how to sum up how much data each user has in the filesystem (without regards to if the data blocks are offline, online, replicated or compressed): # cat full-file-list.policy RULE EXTERNAL LIST 'files' EXEC '' RULE LIST 'files' SHOW( VARCHAR(USER_ID) || ' ' || VARCHAR(GROUP_ID) || ' ' || VARCHAR(FILESET_NAME) || ' ' || VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED) ) # mmapplypolicy gpfs0 -P /gpfs/gpfsmgt/etc/full-file-list.policy -I defer -f /tmp/full-file-list # awk '{a[$4] += $7} END{ print "# UID\t Bytes" ; for (i in a) print i, "\t", a[i]}' /tmp/full-file-list.list.files Takes ~15 minutes to run on a 60 million file filesystem. -jf On Thu, May 3, 2018 at 11:43 AM, Sobey, Richard A wrote: > Hi all, > > > > I?d be interested to talk to anyone that is using HSM to move data to > tape, (and stubbing the file(s)) specifically any strategies you?ve > employed to figure out how to charge your customers (where you do charge > anyway) based on usage. > > > > On-list or off is fine with me. > > > > Thanks > > Richard > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 15:41:16 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 10:41:16 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? 
For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: > Hi Lohit, > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > Mit freundlichen Gr??en / Kind regards > > Mathias Dietz > > Spectrum Scale Development - Release Lead Architect (4.2.x) > Spectrum Scale RAS Architect > --------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49 70342744105 > Mobile: +49-15152801035 > E-Mail: mdietz at de.ibm.com > ----------------------------------------------------------------------------- > IBM Deutschland Research & Development GmbH > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > From: ? ? ? ?valleru at cbio.mskcc.org > To: ? ? ? ?gpfsug main discussion list > Date: ? ? ? ?01/05/2018 16:34 > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Simon. > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > Regards, > Lohit > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? 
> > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 15:46:09 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 10:46:09 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Thanks Brian, May i know, if you could explain a bit more on the metadata updates issue? I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? Please do correct me if i am wrong. As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. Thanks, Lohit On May 3, 2018, 10:25 AM -0400, Bryan Banister , wrote: > Hi Lohit, > > Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. > > Cheers, > -Bryan > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz > Sent: Thursday, May 03, 2018 6:41 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Note: External Email > Hi Lohit, > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
> > > Mit freundlichen Gr??en / Kind regards > > Mathias Dietz > > Spectrum Scale Development - Release Lead Architect (4.2.x) > Spectrum Scale RAS Architect > --------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49 70342744105 > Mobile: +49-15152801035 > E-Mail: mdietz at de.ibm.com > ----------------------------------------------------------------------------- > IBM Deutschland Research & Development GmbH > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > From: ? ? ? ?valleru at cbio.mskcc.org > To: ? ? ? ?gpfsug main discussion list > Date: ? ? ? ?01/05/2018 16:34 > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Simon. > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > Regards, > Lohit > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Thu May 3 16:02:51 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Thu, 3 May 2018 15:02:51 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> Message-ID: Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
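On the question above of knowing that, say, 80% of a fileset's data is on tape: a rough sketch in the same spirit as Jan-Frode's defer-list one-liner earlier in this archive is below. It is untested and full of assumptions to adapt, namely the file system name gpfs0, the temporary paths, the awk field positions (which presume the usual inode/generation/snapid prefix in the defer list), and the fact that KB_ALLOCATED being well below FILE_SIZE is only a heuristic for migrated files, since sparse files can look the same:

# cat /tmp/migrated-share.policy
RULE EXTERNAL LIST 'hsm' EXEC ''
RULE 'bytes' LIST 'hsm'
     SHOW( VARCHAR(FILESET_NAME) || ' ' || VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED) )

# mmapplypolicy gpfs0 -P /tmp/migrated-share.policy -I defer -f /tmp/migrated-share

# awk '{ total[$4] += $5; resident[$4] += $6 * 1024 }
     END { print "# fileset total_bytes resident_bytes pct_not_resident";
           for (f in total)
             printf "%s %d %d %.1f\n", f, total[f], resident[f],
                    total[f] ? 100 * (total[f] - resident[f]) / total[f] : 0 }' \
    /tmp/migrated-share.list.hsm

The last column approximates the share of each fileset's logical data that is not occupying disk, which is one number a reduced per-TB rate could be based on.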
URL: From MDIETZ at de.ibm.com Thu May 3 16:14:20 2018 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Thu, 3 May 2018 17:14:20 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark><8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Message-ID: yes, deleting all NFS exports which point to a given file system would allow you to unmount it without bringing down the other file systems. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 03/05/2018 16:41 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. 
I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Thu May 3 16:15:24 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 3 May 2018 15:15:24 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Message-ID: Hi Lohit, Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April: http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf Multicluster setups with shared file access have a high probability of ?MetaNode Flapping? ? ?MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more ?client? clusters via a MultiCluster relationship.? 
Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Thursday, May 03, 2018 9:46 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Thanks Brian, May i know, if you could explain a bit more on the metadata updates issue? I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? Please do correct me if i am wrong. As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. Thanks, Lohit On May 3, 2018, 10:25 AM -0400, Bryan Banister >, wrote: Hi Lohit, Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz Sent: Thursday, May 03, 2018 6:41 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. 
Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From khanhn at us.ibm.com Thu May 3 16:29:57 2018
From: khanhn at us.ibm.com (Khanh V Ngo)
Date: Thu, 3 May 2018 15:29:57 +0000
Subject: [gpfsug-discuss] Recharging where HSM is used
In-Reply-To: 
References: 
Message-ID: 

An HTML attachment was scrubbed...
URL: 

From jonathan.buzzard at strath.ac.uk Thu May 3 16:52:44 2018
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Thu, 03 May 2018 16:52:44 +0100
Subject: [gpfsug-discuss] Recharging where HSM is used
In-Reply-To: 
References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org>
Message-ID: <1525362764.27337.140.camel@strath.ac.uk>

On Thu, 2018-05-03 at 15:02 +0000, Sobey, Richard A wrote:
> Stephen, Bryan,
> 
> Thanks for the input, it's greatly appreciated.
> 
> For us we're trying, as many people are, to drive down the usage of
> under-the-desk NAS appliances and USB HDDs. We offer space on disk,
> but you can't charge for 3TB of storage the same as you would down PC
> World and many customers don't understand the difference between what
> we do, and what a USB disk offers.
> 
> So, offering tape as a medium to store cold data, but not archive
> data, is one offering we're just getting round to discussing. The
> solution is in place. To answer the specific question: for our
> customers that adopt HSM, how much less should/could/can we charge
> them per TB. We know how much a tape costs, but we don't necessarily
> have the means (or knowledge?) to say that for a given fileset, 80%
> of the data is on tape. Then you get into 80% of 1TB is not the same
> as 80% of 10TB.
> 

The test that I have used in the past for whether a file is migrated, with a high degree of accuracy, is: if the space allocated on the file system is less than the file size, and equal to the stub size, then presume the file is migrated. There is a small chance it could be sparse instead. However this is really rather remote, as sparse files are not common in the first place and it is even less likely that the amount of allocated data in a sparse file exactly matches the stub size. It is an easy step to write a policy to list all the UID and FILE_SIZE where KB_ALLOCATED < FILE_SIZE
References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org>
Message-ID: <6009EFF3-27EF-4E35-9FA1-1730C9ECF1A8@bham.ac.uk>

Our charging model for disk storage assumes that a percentage of it is really HSM'd, though in practice we aren't heavily doing this. My (personal) view on tape really is that anything on tape is FoC, that way people can play games to recall/keep it hot if they want, but it eats their FoC or paid disk allocations, whereas if they leave it on tape, they benefit in having more total capacity. We currently use the pre-migrate/SOBAR for our DR piece, so we'd already be pre-migrating to tape anyway, so it doesn't really cost us anything extra to give FoC HSM'd storage. So my suggestion is pitch HSM (or even TCT maybe, if only we could do both) as your DR proposal, and then you can give it to users for free ?
Simon From: on behalf of "Sobey, Richard A" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Thursday, 3 May 2018 at 16:03 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Recharging where HSM is used Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Thu May 3 18:30:32 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Thu, 3 May 2018 17:30:32 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Message-ID: <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> Yes we do this when we really really need to take a remote FS offline, which we try at all costs to avoid unless we have a maintenance window. 
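Relating to the CES and remote file system mounts thread: for the planned-downtime case of dropping the exports that point at one remote file system so that it can be unmounted without taking the rest of CES down, the order of operations might look roughly like the sketch below. The file system name rfs1 and the export names/paths are invented for illustration, and the exact export syntax should be checked against mmnfs/mmsmb on your release:

# see which NFS and SMB exports live on the file system that is going away
mmnfs export list
mmsmb export list

# remove the exports that point at that file system
mmnfs export remove /gpfs/rfs1/projects
mmsmb export remove projects

# with no NFS exports left on it, unmount the remote file system cluster-wide
mmumount rfs1 -a

# after the storage cluster maintenance: remount, then re-create the exports
# with their original options (mmnfs export add / mmsmb export add)
mmmount rfs1 -a

Doing it in this order avoids the situation Mathias describes, where an NFS export on an unavailable remote file system brings down the whole CES cluster.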
Note if you only export via SMB, then you don?t have the same effect (unless something has changed recently) Simon From: on behalf of "valleru at cbio.mskcc.org" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Thursday, 3 May 2018 at 15:41 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. 
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 19:46:42 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 14:46:42 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Message-ID: <1f7af581-300d-4526-8c9c-7bde344fbf22@Spark> Thanks Bryan. Yes i do understand it now, with respect to multi clusters reading the same file and metanode flapping. Will make sure the workload design will prevent metanode flapping. Regards, Lohit On May 3, 2018, 11:15 AM -0400, Bryan Banister , wrote: > Hi Lohit, > > Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April:? http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf > > Multicluster setups with shared file access have a high probability of ?MetaNode Flapping? > ? ?MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more ?client? clusters via a MultiCluster relationship.? > > Cheers, > -Bryan > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > Sent: Thursday, May 03, 2018 9:46 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Note: External Email > Thanks Brian, > May i know, if you could explain a bit more on the metadata updates issue? > I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. > I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? > Please do correct me if i am wrong. > As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. 
> > Thanks, > Lohit > > On May 3, 2018, 10:25 AM -0400, Bryan Banister , wrote: > > > Hi Lohit, > > > > Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. > > > > Cheers, > > -Bryan > > > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz > > Sent: Thursday, May 03, 2018 6:41 AM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Note: External Email > > Hi Lohit, > > > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > > > > Mit freundlichen Gr??en / Kind regards > > > > Mathias Dietz > > > > Spectrum Scale Development - Release Lead Architect (4.2.x) > > Spectrum Scale RAS Architect > > --------------------------------------------------------------------------- > > IBM Deutschland > > Am Weiher 24 > > 65451 Kelsterbach > > Phone: +49 70342744105 > > Mobile: +49-15152801035 > > E-Mail: mdietz at de.ibm.com > > ----------------------------------------------------------------------------- > > IBM Deutschland Research & Development GmbH > > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > > > > > From: ? ? ? ?valleru at cbio.mskcc.org > > To: ? ? ? ?gpfsug main discussion list > > Date: ? ? ? ?01/05/2018 16:34 > > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > Thanks Simon. > > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > > > Regards, > > Lohit > > > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > > You have been able to do this for some time, though I think it's only just supported. > > > > We've been exporting remote mounts since CES was added. > > > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... 
> > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > > Sent: 30 April 2018 22:11 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Hello All, > > > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > > > Because according to the limitations as mentioned in the below link: > > > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > > > > Regards, > > Lohit > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 19:52:23 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 14:52:23 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> Message-ID: <44e9d877-36b9-43c1-8ee8-ac8437987265@Spark> Thanks Simon. Currently, we are thinking of using the same remote filesystem for both NFS/SMB exports. I do have a related question with respect to SMB and AD integration on user-defined authentication. I have seen a past discussion from you on the usergroup regarding a similar integration, but i am trying a different setup. Will send an email with the related subject. Thanks, Lohit On May 3, 2018, 1:30 PM -0400, Simon Thompson (IT Research Support) , wrote: > Yes we do this when we really really need to take a remote FS offline, which we try at all costs to avoid unless we have a maintenance window. > > Note if you only export via SMB, then you don?t have the same effect (unless something has changed recently) > > Simon > > From: on behalf of "valleru at cbio.mskcc.org" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Thursday, 3 May 2018 at 15:41 > To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Thanks Mathiaz, > Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. > > However, i suppose we could bring down one of the filesystems before a planned downtime? > For example, by unexporting the filesystems on NFS/SMB before the downtime? > > I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. > > Regards, > Lohit > > On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: > > > Hi Lohit, > > > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
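(A rough sketch of the planned-downtime sequence being discussed here, i.e. dropping the NFS exports that live on one remote file system so it can be unmounted without disturbing the others. The export path and file system name below are examples only, not taken from this thread:)

# list current exports and note which ones sit on the file system to be taken down
mmnfs export list
# remove just those exports (path is an example)
mmnfs export remove /gpfs/remotefs1
# with no exports left on it, the remote file system can be unmounted cluster-wide
mmumount remotefs1 -a
# after the maintenance window: remount and re-create the export
mmmount remotefs1 -a
mmnfs export add /gpfs/remotefs1 --client "*(Access_Type=RW,Squash=ROOT_SQUASH)"
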
> > > > > > Mit freundlichen Gr??en / Kind regards > > > > Mathias Dietz > > > > Spectrum Scale Development - Release Lead Architect (4.2.x) > > Spectrum Scale RAS Architect > > --------------------------------------------------------------------------- > > IBM Deutschland > > Am Weiher 24 > > 65451 Kelsterbach > > Phone: +49 70342744105 > > Mobile: +49-15152801035 > > E-Mail: mdietz at de.ibm.com > > ----------------------------------------------------------------------------- > > IBM Deutschland Research & Development GmbH > > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > > > > > From: ? ? ? ?valleru at cbio.mskcc.org > > To: ? ? ? ?gpfsug main discussion list > > Date: ? ? ? ?01/05/2018 16:34 > > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > Thanks Simon. > > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > > > Regards, > > Lohit > > > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > > You have been able to do this for some time, though I think it's only just supported. > > > > We've been exporting remote mounts since CES was added. > > > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > > Sent: 30 April 2018 22:11 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Hello All, > > > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > > > Because according to the limitations as mentioned in the below link: > > > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? 
> > > > > > Regards, > > Lohit > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From JRLang at uwyo.edu Thu May 3 16:38:32 2018 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Thu, 3 May 2018 15:38:32 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: Khanh Could you tell us what the policy file name is or where to get it? Thanks Jeff From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Khanh V Ngo Sent: Thursday, May 3, 2018 10:30 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Recharging where HSM is used Specifically with IBM Spectrum Archive EE, there is a script (mmapplypolicy with list rules and python since it outputs many different tables) to provide the total size of user files by file states. This way you can charge more for files that remain on disk and charge less for files migrated to tape. I have seen various prices for the chargeback so it's probably better to calculate based on your environment. The script can easily be changed to output based on GID, filesets, etc. Here's a snippet of the output (in human-readable units): +-------+-----------+-------------+-------------+-----------+ | User | Migrated | Premigrated | Resident | TOTAL | +-------+-----------+-------------+-------------+-----------+ | 0 | 1.563 KB | 50.240 GB | 6.000 bytes | 50.240 GB | | 27338 | 9.338 TB | 1.566 TB | 63.555 GB | 10.965 TB | | 27887 | 58.341 GB | 191.653 KB | | 58.341 GB | | 27922 | 2.111 MB | | | 2.111 MB | | 24089 | 4.657 TB | 22.921 TB | 433.660 GB | 28.002 TB | | 29657 | 29.219 TB | 32.049 TB | | 61.268 TB | | 29210 | 3.057 PB | 399.908 TB | 47.448 TB | 3.494 PB | | 23326 | 7.793 GB | 257.005 MB | 166.364 MB | 8.207 GB | | TOTAL | 3.099 PB | 456.492 TB | 47.933 TB | 3.592 PB | +-------+-----------+-------------+-------------+-----------+ Thanks, Khanh Khanh Ngo, Tape Storage Test Architect Senior Technical Staff Member and Master Inventor Tie-Line 8-321-4802 External Phone: (520)799-4802 9042/1/1467 Tucson, AZ khanhn at us.ibm.com (internet) It's okay to not understand something. It's NOT okay to test something you do NOT understand. 
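(On the question of where to get the policy file: treat the following as a minimal, hand-rolled sketch of the same idea Khanh describes -- a list rule that tags every file as migrated/premigrated/resident and shows the owner and size, which you can then total up per user. The pool, file and device names are examples.)

liststate.pol:

define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))

RULE EXTERNAL LIST 'filestate' EXEC ''
RULE 'tagall' LIST 'filestate'
     SHOW(VARCHAR(USER_ID) || ' ' || VARCHAR(FILE_SIZE) || ' ' ||
          CASE WHEN is_migrated THEN 'migrated'
               WHEN is_premigrated THEN 'premigrated'
               ELSE 'resident' END)

Run it deferred so it only writes the list rather than acting on anything:

mmapplypolicy gpfsdata -P liststate.pol -I defer -f /tmp/filestate

The per-file records land in /tmp/filestate.list.filestate with the SHOW() text in each line, so a few lines of awk or python give per-user totals by state; swapping USER_ID for FILESET_NAME breaks it down by fileset instead, which maps more directly onto per-group chargeback.
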
----- Original message ----- From: gpfsug-discuss-request at spectrumscale.org Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: gpfsug-discuss Digest, Vol 76, Issue 7 Date: Thu, May 3, 2018 8:19 AM Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Recharging where HSM is used (Sobey, Richard A) 2. Re: Spectrum Scale CES and remote file system mounts (Mathias Dietz) ---------------------------------------------------------------------- Message: 1 Date: Thu, 3 May 2018 15:02:51 +0000 From: "Sobey, Richard A" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Recharging where HSM is used Message-ID: > Content-Type: text/plain; charset="utf-8" Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. 
Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 3 May 2018 17:14:20 +0200 From: "Mathias Dietz" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Message-ID: > Content-Type: text/plain; charset="iso-8859-1" yes, deleting all NFS exports which point to a given file system would allow you to unmount it without bringing down the other file systems. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 03/05/2018 16:41 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz >, wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? 
Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= End of gpfsug-discuss Digest, Vol 76, Issue 7 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 20:14:57 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 15:14:57 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA and AD keytab integration with userdefined authentication Message-ID: <03e2a5c6-3538-4e20-84b8-563b0aedfbe6@Spark> Hello All, I am trying to export a single remote filesystem over NFS/SMB using GPFS CES. ( GPFS 5.0.0.2 and CentOS 7 ). We need NFS exports to be accessible on client nodes, that use public key authentication and ldap authorization. I already have this working with a previous CES setup on user-defined authentication, where users can just login to the client nodes, and access NFS mounts. However, i will also need SAMBA exports for the same GPFS filesystem with AD/kerberos authentication. Previously, we used to have a working SAMBA export for a local filesystem with SSSD and AD integration with SAMBA as mentioned in the below solution from redhat. https://access.redhat.com/solutions/2221561 We find the above as cleaner solution with respect to AD and Samba integration compared to centrify or winbind. I understand that GPFS does offer AD authentication, however i believe i cannot use the same since NFS will need user-defined authentication and SAMBA will need AD authentication. 
I have thus been trying to use user-defined authentication. I tried to edit smb.cnf from GPFS ( with a bit of help from this blog, written by Simon.?https://www.roamingzebra.co.uk/2015/07/smb-protocol-support-with-spectrum.html) /usr/lpp/mmfs/bin/net conf list realm = xxxx workgroup = xxxx security = ads kerberos method = secrets and key tab idmap config * : backend = tdb template homedir = /home/%U dedicated keytab file = /etc/krb5.keytab I had joined the node to AD with realmd and i do get relevant AD info when i try: /usr/lpp/mmfs/bin/net ads info However, when i try to display keytab or add principals to keytab. It just does not work. /usr/lpp/mmfs/bin/net ads keytab list ?-> does not show the keys present in /etc/krb5.keytab. /usr/lpp/mmfs/bin/net ads keytab add cifs -> does not add the keys to the /etc/krb5.keytab As per the samba documentation, these two parameters should help samba automatically find the keytab file. kerberos method = secrets and key tab dedicated keytab file = /etc/krb5.keytab I have not yet tried to see, if a SAMBA export is working with AD authentication but i am afraid it might not work. Have anyone tried the AD integration with SSSD/SAMBA for GPFS, and any suggestions on how to debug the above would be really helpful. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Thu May 3 20:16:03 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Thu, 03 May 2018 15:16:03 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <1525362764.27337.140.camel@strath.ac.uk> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> <1525362764.27337.140.camel@strath.ac.uk> Message-ID: <75615.1525374963@turing-police.cc.vt.edu> On Thu, 03 May 2018 16:52:44 +0100, Jonathan Buzzard said: > The test that I have used in the past for if a file is migrated with a > high degree of accuracy is > > if the space allocated on the file system is less than the > file size, and equal to the stub size then presume the file > is migrated. At least for LTFS/EE, we use something like this: define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) RULE 'MIGRATED' LIST 'ltfsee_files' FROM POOL 'system' SHOW('migrated ' || xattr('dmapi.IBMTPS') || ' ' || all_attrs) WHERE is_migrated AND (xattr('dmapi.IBMTPS') LIKE '%:%' ) Not sure if the V and M misc_attributes are the same for other tape backends... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From Robert.Oesterlin at nuance.com Thu May 3 21:13:14 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 3 May 2018 20:13:14 +0000 Subject: [gpfsug-discuss] FYI - SC18 - Hotels are now open for reservations! Message-ID: <1CE10F03-B49C-44DF-A772-B674D059457F@nuance.com> FYI, Hotels for SC18 are now open, and if it?s like any other year, they fill up FAST. Reserve one early since it?s no charge to hold it until 1 month before the conference. https://sc18.supercomputing.org/experience/housing/ Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zacekm at img.cas.cz Fri May 4 06:53:23 2018 From: zacekm at img.cas.cz (Michal Zacek) Date: Fri, 4 May 2018 07:53:23 +0200 Subject: [gpfsug-discuss] Temporary office files Message-ID: Hello, I have problem with "~$somename.xlsx" files in Samba shares at GPFS Samba cluster. These lock files are supposed to be removed by Samba with "delete on close" function. This function is working? at standard Samba server in Centos but not with Samba cluster at GPFS. Is this function disabled on purpose or is ti an error? I'm not sure if this problem was in older versions, but now with version 5.0.0.0 it's easy to reproduce. Just open and close any excel file, and "~$xxxx.xlsx" file will remain at share. You have to uncheck "hide protected operating system files" on Windows to see them. Any help would be appreciated. Regards, Michal -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3776 bytes Desc: Elektronicky podpis S/MIME URL: From r.sobey at imperial.ac.uk Fri May 4 09:10:33 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 4 May 2018 08:10:33 +0000 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: Hi Michal, We occasionally get a request to close a lock file for an Office document but I wouldn't necessarily say we could easily reproduce it. We're still running 4.2.3.7 though so YMMV. I'm building out my test cluster at the moment to do some experiments and as soon as 5.0.1 is released I'll be upgrading it to check it out. Thanks Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Michal Zacek Sent: 04 May 2018 06:53 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Temporary office files Hello, I have problem with "~$somename.xlsx" files in Samba shares at GPFS Samba cluster. These lock files are supposed to be removed by Samba with "delete on close" function. This function is working? at standard Samba server in Centos but not with Samba cluster at GPFS. Is this function disabled on purpose or is ti an error? I'm not sure if this problem was in older versions, but now with version 5.0.0.0 it's easy to reproduce. Just open and close any excel file, and "~$xxxx.xlsx" file will remain at share. You have to uncheck "hide protected operating system files" on Windows to see them. Any help would be appreciated. Regards, Michal From Achim.Rehor at de.ibm.com Fri May 4 09:17:52 2018 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Fri, 4 May 2018 10:17:52 +0200 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 7182 bytes Desc: not available URL: From zacekm at img.cas.cz Fri May 4 10:40:50 2018 From: zacekm at img.cas.cz (Michal Zacek) Date: Fri, 4 May 2018 11:40:50 +0200 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: Hi Achim Set "gpfs:sharemodes=no" did the trick and I will upgrade to 5.0.0.2 next week. Thank you very much. Regards, Michal Dne 4.5.2018 v 10:17 Achim Rehor napsal(a): > Hi Michal, > > there was an open defect on this, which had been fixed in level > 4.2.3.7 (APAR _IJ03182 _ > ) > gpfs.smb 4.5.15_gpfs_31-1 > should be in gpfs.smb 4.6.11_gpfs_31-1 ?package for the 5.0.0 PTF1 level. 
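(In case it helps anyone searching later: one way the gpfs:sharemodes option can be flipped on a CES/gpfs.smb export is via mmsmb -- the export name below is made up, and with share modes off they are no longer enforced across protocols and nodes, so treat this as a workaround for the defect rather than a fix:)

# show the current exports and their settings
/usr/lpp/mmfs/bin/mmsmb export list
# disable GPFS share modes on one export (export name "projects" is an example)
/usr/lpp/mmfs/bin/mmsmb export change projects --option "gpfs:sharemodes=no"
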
> > > > > Mit freundlichen Gr??en / Kind regards > > *Achim Rehor* > > ------------------------------------------------------------------------ > Software Technical Support Specialist AIX/ Emea HPC Support > IBM Certified Advanced Technical Expert - Power Systems with AIX > TSCC Software Service, Dept. 7922 > Global Technology Services > ------------------------------------------------------------------------ > Phone: +49-7034-274-7862 ?IBM Deutschland > E-Mail: Achim.Rehor at de.ibm.com ?Am Weiher 24 > ?65451 Kelsterbach > ?Germany > > ------------------------------------------------------------------------ > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, > Stefan Lutz, Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht > Stuttgart, HRB 14562 WEEE-Reg.-Nr. DE 99369940 > > > > > > > From: Michal Zacek > To: gpfsug-discuss at spectrumscale.org > Date: 04/05/2018 08:03 > Subject: [gpfsug-discuss] Temporary office files > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > ------------------------------------------------------------------------ > > > > Hello, > > I have problem with "~$somename.xlsx" files in Samba shares at GPFS > Samba cluster. These lock files are supposed to be removed by Samba with > "delete on close" function. This function is working? at standard Samba > server in Centos but not with Samba cluster at GPFS. Is this function > disabled on purpose or is ti an error? I'm not sure if this problem was > in older versions, but now with version 5.0.0.0 it's easy to reproduce. > Just open and close any excel file, and "~$xxxx.xlsx" file will remain > at share. You have to uncheck "hide protected operating system files" on > Windows to see them. > Any help would be appreciated. > > Regards, > Michal > > [attachment "smime.p7s" deleted by Achim Rehor/Germany/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nfhdombajgidkknc.png Type: image/png Size: 7182 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3776 bytes Desc: Elektronicky podpis S/MIME URL: From makaplan at us.ibm.com Fri May 4 15:03:37 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 4 May 2018 10:03:37 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <75615.1525374963@turing-police.cc.vt.edu> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org><1525362764.27337.140.camel@strath.ac.uk> <75615.1525374963@turing-police.cc.vt.edu> Message-ID: "Not sure if the V and M misc_attributes are the same for other tape backends..." define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) There are good, valid and fairly efficient tests for any files Spectrum Scale system that has a DMAPI based HSM system installed with it. 
(TSM/HSM, HPSS, LTFS/EE, ...) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From makaplan at us.ibm.com Fri May 4 16:16:26 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 4 May 2018 11:16:26 -0400 Subject: [gpfsug-discuss] Determining which files are migrated or premigated wrt HSM In-Reply-To: References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org><1525362764.27337.140.camel@strath.ac.uk><75615.1525374963@turing-police.cc.vt.edu> Message-ID: define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) THESE are good, valid and fairly efficient tests for any files Spectrum Scale system that has a DMAPI based HSM system installed with it. (TSM/HSM, HPSS, LTFS/EE, ...) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 4 16:38:57 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 4 May 2018 15:38:57 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? Message-ID: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anobre at br.ibm.com Fri May 4 16:52:27 2018 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Fri, 4 May 2018 15:52:27 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From skylar2 at uw.edu Fri May 4 16:49:12 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Fri, 4 May 2018 15:49:12 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <20180504154912.vabqnigzvyacfex4@utumno.gs.washington.edu> Our experience is that CES (at least NFS/ganesha) can easily consume all of the CPU resources on a system. If you're running it on the same hardware as your NSD services, then you risk delaying native GPFS I/O requests as well. We haven't found a great way to limit the amount of resources that NFS/ganesha can use, though maybe in the future it could be put in a cgroup since it's all user-space? On Fri, May 04, 2018 at 03:38:57PM +0000, Buterbaugh, Kevin L wrote: > Hi All, > > In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ??? but I???ve not found any detailed explanation of why not. > > I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ??? 
say, late model boxes with 2 x 8 core CPU???s, 256 GB RAM, 10 GbE networking ??? is there any reason why I still should not combine the two? > > To answer the question of why I would want to ??? simple, server licenses. > > Thanks??? > > Kevin > > ??? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and Education > Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 4 16:56:44 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 4 May 2018 15:56:44 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu> Hi Anderson, Thanks for the response ? however, the scenario you describe below wouldn?t impact us. We have 8 NSD servers and they can easily provide the needed performance to native GPFS clients. We could also take a downtime if we ever did need to expand in the manner described below. In fact, one of the things that?s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime. Let?s just say that I know for a fact that sernet-samba can be done rolling / live. Kevin On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre > wrote: Hi Kevin, I think one of the reasons is if you need to add or remove nodes from cluster you will start to face the constrains of this kind of solution. Let's say you have a cluster with two nodes and share the same set of LUNs through SAN. And for some reason you need to add more two nodes that are NSD Servers and Protocol nodes. For the new nodes become NSD Servers, you will have to redistribute the NSD disks among four nodes. But for you do that you will have to umount the filesystems. And for you umount the filesystems you would need to stop protocol services. At the end you will realize that a simple task like that is disrruptive. You won't be able to do online. Abra?os / Regards / Saludos, Anderson Nobre AIX & Power Consultant Master Certified IT Specialist IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone: 55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Buterbaugh, Kevin L" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [gpfsug-discuss] Not recommended, but why not? Date: Fri, May 4, 2018 12:39 PM Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? 
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C2b0fc12c4dc24aa1f7fb08d5b1d70c9e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610459542553835&sdata=8aArQLzU5q%2BySqHcoQ3SI420XzP08ICph7F18G7C4pw%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Fri May 4 17:26:54 2018 From: oehmes at gmail.com (Sven Oehme) Date: Fri, 04 May 2018 16:26:54 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L < Kevin.Buterbaugh at vanderbilt.edu> wrote: > Hi All, > > In doing some research, I have come across numerous places (IBM docs, > DeveloperWorks posts, etc.) where it is stated that it is not recommended > to run CES on NSD servers ? but I?ve not found any detailed explanation of > why not. > > I understand that CES, especially if you enable SMB, can be a resource > hog. But if I size the servers appropriately ? say, late model boxes with > 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I > still should not combine the two? > > To answer the question of why I would want to ? simple, server licenses. > > Thanks? > > Kevin > > ? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and > Education > Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 <(615)%20875-9633> > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Fri May 4 18:30:05 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 4 May 2018 17:30:05 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> You also have to be careful with network utilization? we have some very hungry NFS clients in our environment and the NFS traffic can actually DOS other services that need to use the network links. If you configure GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then this could lead to GPFS node evictions if disk leases cannot get renewed. 
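On the CPU side of that, a minimal sketch of the cgroup "jail" Sven mentions, done as a systemd drop-in -- the unit name and the quota are assumptions, so check what the Ganesha service is actually called on your CES nodes with systemctl before copying this:

# /etc/systemd/system/nfs-ganesha.service.d/cpu.conf   (unit name is an assumption)
[Service]
CPUAccounting=yes
# cap Ganesha at roughly 4 cores on a 16-core protocol node; pick a number that
# leaves mmfsd enough headroom to renew disk leases under load
CPUQuota=400%

Then systemctl daemon-reload and bounce the service through CES (mmces service stop/start NFS -N <node>) rather than restarting it behind Scale's back.
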
You could limit the amount that SMV/NFS use on the network with something like the tc facility if you?re sharing the network interfaces for GPFS and CES services. HTH, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Sven Oehme Sent: Friday, May 04, 2018 11:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Not recommended, but why not? Note: External Email ________________________________ there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L > wrote: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Fri May 4 23:08:39 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Fri, 4 May 2018 22:08:39 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu> References: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu>, Message-ID: An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Sat May 5 09:57:11 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Sat, 5 May 2018 09:57:11 +0100 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> References: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> Message-ID: <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> On 04/05/18 18:30, Bryan Banister wrote: > You also have to be careful with network utilization? we have some very > hungry NFS clients in our environment and the NFS traffic can actually > DOS other services that need to use the network links.? If you configure > GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then > this could lead to GPFS node evictions if disk leases cannot get > renewed.? You could limit the amount that SMV/NFS use on the network > with something like the tc facility if you?re sharing the network > interfaces for GPFS and CES services. > The right answer to that IMHO is a separate VLAN for the GPFS command/control traffic that is prioritized above all other VLAN's. Do something like mark it as a voice VLAN. Basically don't rely on some OS layer to do the right thing at layer three, enforce it at layer two in the switches. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jagga13 at gmail.com Mon May 7 02:35:19 2018 From: jagga13 at gmail.com (Jagga Soorma) Date: Sun, 6 May 2018 18:35:19 -0700 Subject: [gpfsug-discuss] CES NFS export Message-ID: Hi Guys, We are new to gpfs and have a few client that will be mounting gpfs via nfs. We have configured the exports but all user/group permissions are showing up as nobody. The gateway/protocol nodes can query the uid/gid's via centrify without any issues as well as the clients and the perms look good on a client that natively accesses the gpfs filesystem. Is there some specific config that we might be missing? 
-- # mmnfs export list --nfsdefs /gpfs/datafs1 Path Delegations Clients Access_Type Protocols Transports Squash Anonymous_uid Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids NFS_Commit ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE TRUE FALSE /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP NO_ROOT_SQUASH -2 -2 SYS FALSE NONE TRUE FALSE /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP ROOT_SQUASH -2 -2 SYS FALSE NONE TRUE FALSE -- On the nfs clients I see this though: -- # ls -l total 0 drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 -- Here is our mmnfs config: -- # mmnfs config list NFS Ganesha Configuration: ========================== NFS_PROTOCOLS: 3,4 NFS_PORT: 2049 MNT_PORT: 0 NLM_PORT: 0 RQUOTA_PORT: 0 NB_WORKER: 256 LEASE_LIFETIME: 60 DOMAINNAME: VIRTUAL1.COM DELEGATIONS: Disabled ========================== STATD Configuration ========================== STATD_PORT: 0 ========================== CacheInode Configuration ========================== ENTRIES_HWMARK: 1500000 ========================== Export Defaults ========================== ACCESS_TYPE: NONE PROTOCOLS: 3,4 TRANSPORTS: TCP ANONYMOUS_UID: -2 ANONYMOUS_GID: -2 SECTYPE: SYS PRIVILEGEDPORT: FALSE MANAGE_GIDS: TRUE SQUASH: ROOT_SQUASH NFS_COMMIT: FALSE ========================== Log Configuration ========================== LOG_LEVEL: EVENT ========================== Idmapd Configuration ========================== LOCAL-REALMS: LOCALDOMAIN DOMAIN: LOCALDOMAIN ========================== -- Thanks! From jagga13 at gmail.com Mon May 7 04:05:01 2018 From: jagga13 at gmail.com (Jagga Soorma) Date: Sun, 6 May 2018 20:05:01 -0700 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! From YARD at il.ibm.com Mon May 7 06:16:15 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Mon, 7 May 2018 08:16:15 +0300 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Hi If you want to use NFSv3 , define only NFSv3 on the export. In case you work with NFSv4 - you should have "DOMAIN\user" all the way - so this way you will not get any user mismatch errors, and see permissions like nobody. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jagga Soorma To: gpfsug-discuss at spectrumscale.org Date: 05/07/2018 06:05 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From chetkulk at in.ibm.com Mon May 7 09:08:33 2018 From: chetkulk at in.ibm.com (Chetan R Kulkarni) Date: Mon, 7 May 2018 13:38:33 +0530 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Make sure NFSv4 ID Mapping value matches on client and server. On server side (i.e. CES nodes); you can set as below: $ mmnfs config change IDMAPD_DOMAIN=test.com On client side (e.g. 
RHEL NFS client); one can set it using Domain attribute in /etc/idmapd.conf file. $ egrep ^Domain /etc/idmapd.conf Domain = test.com [root at rh73node2 2018_05_07-13:31:11 ~]$ $ service nfs-idmap restart Please refer following link for the details: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/b1ladm_authconsidfornfsv4access.htm Thanks, Chetan. From: "Yaron Daniel" To: gpfsug main discussion list Date: 05/07/2018 10:46 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi If you want to use NFSv3 , define only NFSv3 on the export. In case you work with NFSv4 - you should have "DOMAIN\user" all the way - so this way you will not get any user mismatch errors, and see permissions like nobody. Regards Yaron 94 Em Daniel Ha'Moshavot Rd Storage Petach Tiqva, Architect 49527 IBM Israel Global Markets, Systems HW Sales Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel IBM Storage Strategy and Solutions v1IBM Storage Management and Data Protection v1 Related image From: Jagga Soorma To: gpfsug-discuss at spectrumscale.org Date: 05/07/2018 06:05 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=uic-29lyJ5TCiTRi0FyznYhKJx5I7Vzu80WyYuZ4_iM&m=3k9qWcL7UfySpNVW2J8S1XsIekUHTHBBYQhN7cPVg3Q&s=844KFrfpsN6nT-DKV6HdfS8EEejdwHuQxbNR8cX2cyc&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15633834.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15657152.gif Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15750750.gif Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15967392.gif Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From Kevin.Buterbaugh at Vanderbilt.Edu Mon May 7 16:05:36 2018
From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L)
Date: Mon, 7 May 2018 15:05:36 +0000
Subject: [gpfsug-discuss] Not recommended, but why not?
In-Reply-To: References: Message-ID: <4E0D4232-14FC-4229-BFBC-B61242473456@vanderbilt.edu>

Hi All,

I want to thank all of you who took the time to respond to this question - your thoughts / suggestions are much appreciated. What I'm taking away from all of this is that it is OK to run CES on NSD servers as long as you are very careful in how you set things up. This would include:

1. Making sure you have enough CPU horsepower and using cgroups to limit how much CPU SMB and NFS can utilize (one possible approach is sketched below).
2. Making sure you have enough RAM - 256 GB sounds like it should be "enough" when using SMB.
3. Making sure you have your network config properly set up. We would be able to provide three separate, dedicated 10 GbE links for GPFS daemon communication, the GPFS multi-cluster link to our HPC cluster, and SMB / NFS communication.
4. Making sure you have good monitoring of all of the above in place.

Have I missed anything or does anyone have any additional thoughts? Thanks...

Kevin

On May 4, 2018, at 11:26 AM, Sven Oehme wrote:
there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem that's the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven

On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L wrote:
Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers - but I've not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately - say, late model boxes with 2 x 8 core CPUs, 256 GB RAM, 10 GbE networking - is there any reason why I still should not combine the two? To answer the question of why I would want to - simple, server licenses. Thanks... Kevin
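Sven's suggestion above of 'jailing' the CES daemons into a cgroup can be done with plain systemd resource controls on the protocol nodes. The sketch below is only one way to do it; the unit names (smbd.service, nfs-ganesha.service) and the 400% quota are assumptions - check systemctl list-units on your own CES nodes for the real names and size the quota so mmfsd always has headroom:

--
# Cap the SMB and NFS daemons at four cores' worth of CPU (400%) each,
# leaving the remaining cores for mmfsd / NSD serving.
# Unit names vary by release; verify with: systemctl list-units | grep -iE 'smb|ganesha'
$ systemctl set-property smbd.service CPUQuota=400%
$ systemctl set-property nfs-ganesha.service CPUQuota=400%

# systemd persists these as drop-in files; confirm they took effect with:
$ systemctl show smbd.service | grep -i cpu
--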
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From bbanister at jumptrading.com Mon May 7 17:53:19 2018
From: bbanister at jumptrading.com (Bryan Banister)
Date: Mon, 7 May 2018 16:53:19 +0000
Subject: [gpfsug-discuss] Not recommended, but why not?
In-Reply-To: <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk>
References: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk>
Message-ID: <9b83806da68c4afe85a048ac736e0d5c@jumptrading.com>

Sure, many ways to solve the same problem, just depends on where you want to have the controls. Having a separate VLAN doesn't give you as fine-grained control over each network workload you are using, such as metrics collection, monitoring, GPFS, SSH, NFS vs SMB vs Object, etc. But it doesn't matter how it's done as long as you ensure GPFS has enough bandwidth to function, cheers,
-Bryan

-----Original Message-----
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Jonathan Buzzard
Sent: Saturday, May 05, 2018 3:57 AM
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Not recommended, but why not?

On 04/05/18 18:30, Bryan Banister wrote:
> You also have to be careful with network utilization... we have some very
> hungry NFS clients in our environment and the NFS traffic can actually
> DOS other services that need to use the network links. If you configure
> GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then
> this could lead to GPFS node evictions if disk leases cannot get
> renewed. You could limit the amount that SMB/NFS use on the network
> with something like the tc facility if you're sharing the network
> interfaces for GPFS and CES services.
>
The right answer to that IMHO is a separate VLAN for the GPFS command/control traffic that is prioritized above all other VLAN's. Do something like mark it as a voice VLAN. Basically don't rely on some OS layer to do the right thing at layer three, enforce it at layer two in the switches.

JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
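For completeness, the tc-based throttling Bryan mentions above can be sketched roughly as below. This is only an illustration: the interface name (ens2f0) and the 8 Gbit/s ceiling are made-up values, and a real deployment would want per-class filters and testing rather than a single catch-all class:

--
# Shape all egress traffic on the CES-facing interface to ~8 Gbit/s so NFS/SMB
# clients can never completely saturate the link (interface and rates are examples).
$ tc qdisc add dev ens2f0 root handle 1: htb default 10
$ tc class add dev ens2f0 parent 1: classid 1:10 htb rate 8gbit ceil 8gbit

# Inspect counters to confirm traffic is landing in the class:
$ tc -s class show dev ens2f0
--

The VLAN/QoS approach Jonathan describes pushes the same policy down to layer two in the switches instead, which avoids depending on every node's OS configuration being correct.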
From jfosburg at mdanderson.org Tue May 8 14:32:54 2018
From: jfosburg at mdanderson.org (Fosburgh,Jonathan)
Date: Tue, 8 May 2018 13:32:54 +0000
Subject: [gpfsug-discuss] Snapshots for backups
Message-ID: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org>

We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following:

Replicate to a remote filesystem (I assume this is best done via AFM).
Take periodic (probably daily) snapshots at the remote site.

The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of?

From LloydDean at us.ibm.com Tue May 8 15:59:37 2018
From: LloydDean at us.ibm.com (Lloyd Dean)
Date: Tue, 8 May 2018 14:59:37 +0000
Subject: [gpfsug-discuss] Snapshots for backups
In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org>
Message-ID:

Jonathan,
First it must be understood that the snapshot is taken at the file system or fileset level and, more importantly, is not an application-level backup. This is a huge difference compared to, say, Protect's many application integrations (Exchange, databases, etc.). With that understood, the approach is similar to what others are doing. Just understand the restrictions.

Lloyd Dean
IBM Software Storage Architect/Specialist
Communication & CSI Heartland
Email: LloydDean at us.ibm.com
Phone: (720) 395-1246

> On May 8, 2018, at 8:44 AM, Fosburgh,Jonathan wrote:
> > We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following:
> > Replicate to a remote filesystem (I assume this is best done via AFM).
> Take periodic (probably daily) snapshots at the remote site. > > The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? > The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Tue May 8 18:20:49 2018 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Tue, 8 May 2018 19:20:49 +0200 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: One thought: file A is created and synched out. it is changed bit later (say a few days). You have the original version in one snapshot, and the modified in the eternal fs (unless changed again). At some day you will need to delete the snapshot with the initial version since you can keep only a finite number. The initial version is gone then forever. Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Thomas Wolter, Sven Schoo? Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: "Fosburgh,Jonathan" To: gpfsug main discussion list Date: 08/05/2018 15:44 Subject: [gpfsug-discuss] Snapshots for backups Sent by: gpfsug-discuss-bounces at spectrumscale.org We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? 
I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From valdis.kletnieks at vt.edu Tue May 8 18:24:37 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Tue, 08 May 2018 13:24:37 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: Message-ID: <29277.1525800277@turing-police.cc.vt.edu> On Tue, 08 May 2018 14:59:37 -0000, "Lloyd Dean" said: > First it must be understood the snap is either at the filesystems or fileset, > and more importantly is not an application level backup. This is a huge > difference to say Protects many application integrations like exchange, > databases, etc. And remember that a GPFS snapshot will only capture the disk as GPFS knows about it - any memory-cached data held by databases etc will *not* be captured (leading to the possibility of an inconsistent version being snapped). You'll need to do some sort of handshaking with any databases to get them to do a "flush everything to disk" to ensure on-disk consistency. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Tue May 8 19:23:35 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Tue, 8 May 2018 18:23:35 +0000 Subject: [gpfsug-discuss] Node list error Message-ID: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 8 21:51:02 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 8 May 2018 20:51:02 +0000 Subject: [gpfsug-discuss] Node list error In-Reply-To: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> Message-ID: <342034e96e1f409b889b0e9aa4036098@jumptrading.com> What does `mmlsnodeclass -N ` give you? -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Node list error Note: External Email ________________________________ Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 
2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeattie at au1.ibm.com Tue May 8 22:38:09 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 8 May 2018 21:38:09 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed May 9 13:16:03 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 12:16:03 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? (obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. 
The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed May 9 13:50:20 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 9 May 2018 12:50:20 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 9 14:13:04 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2018 14:13:04 +0100 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: <1525871584.27337.200.camel@strath.ac.uk> On Wed, 2018-05-09 at 12:50 +0000, Andrew Beattie wrote: > ? > From my perspective the difference / benefits of using something like > Protect and using backup policies over snapshot policies - even if > its disk based rather than tape based,? is that with a backup you get > far better control over your Disaster Recovery process. The policy > integration with Scale and Protect is very comprehensive.? 
If the > issue is Tape time for recovery - simply change from tape medium to a > Disk storage pool as your repository for Protect, you get all the > benefits of Spectrum Protect and the restore speeds of disk, (you > might even - subject to type of data start to see some benefits of > duplication and compression for your backups as you will be able to > take advantage of Protect's dedupe and compression for the disk based > storage pool, something that's not available on your tape > environment. The way I see it is that snapshots are not backup. They are handy for quick recovery from file deletion mistakes. They are utterly useless when your disaster recovery is needed because for example all your NSD descriptors have been overwritten (not my mistake I hasten to add). AT that point your snapshots are for jack. > ? > If your looking for a way to further reduce your disk costs then > potentially the benefits of Object Storage erasure coding might be > worth looking at although for a 1 or 2 site scenario the overheads > are pretty much the same if you use some variant of distributed raid > or if you use erasure coding. > ? At scale tape is a lot cheaper than disk. Also sorry your data is going to take a couple of weeks to recover goes down a lot better than sorry your data is gone for ever. Finally it's also hard for a hacker or disgruntled admin to wipe your tapes in a short period of time. The robot don't go that fast. Your disks/file systems on the other hand effectively be gone in seconds. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jfosburg at mdanderson.org Wed May 9 14:29:23 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 13:29:23 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: I agree with your points. The thought here, is that if we had a complete loss of the primary site, we could bring up the secondary in relatively short order (hours or days instead of weeks or months). Maybe this is true, and maybe this isn?t, though I do see (and have advocated for) a DR setup much like that. My concern is that the use of snapshots as a substitute for traditional backups for a Scale environment is that that is an inappropriate use of the technology, particularly when we have a tool designed for that and that works. Let me take a moment to reiterate something that may be getting lost. The snapshots will be taken against the remote copy and recovered from there. We will not be relying on the primary site for this function. We were starting to look at ESS as a destination for these backups. I have also considered that a multisite ICOS implementation might work to satisfy some of our general backup requirements. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, May 9, 2018 at 7:51 AM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups From my perspective the difference / benefits of using something like Protect and using backup policies over snapshot policies - even if its disk based rather than tape based, is that with a backup you get far better control over your Disaster Recovery process. The policy integration with Scale and Protect is very comprehensive. 
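A daily rotation of those snapshots on the remote file system is typically just a small cron-driven script of the kind mentioned earlier in this thread. The following is a rough sketch only - the file system name (remotefs), fileset name (projects) and 30-day retention are made-up values, and, as noted earlier about databases, anything that needs application-consistent data should be flushed or quiesced before the snapshot is taken:

--
#!/bin/bash
# Daily fileset snapshot rotation on the remote (DR) cluster - sketch only.
FS=remotefs            # file system device name (assumption)
FILESET=projects       # independent fileset to snapshot (assumption)
TODAY=$(date +%Y%m%d)
EXPIRED=$(date -d '30 days ago' +%Y%m%d)

# Create today's snapshot of the fileset.
/usr/lpp/mmfs/bin/mmcrsnapshot "$FS" "daily-$TODAY" -j "$FILESET"

# Remove the snapshot that has aged out, if it still exists.
if /usr/lpp/mmfs/bin/mmlssnapshot "$FS" | grep -q "daily-$EXPIRED"; then
    /usr/lpp/mmfs/bin/mmdelsnapshot "$FS" "daily-$EXPIRED" -j "$FILESET"
fi
--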
If the issue is Tape time for recovery - simply change from tape medium to a Disk storage pool as your repository for Protect, you get all the benefits of Spectrum Protect and the restore speeds of disk, (you might even - subject to type of data start to see some benefits of duplication and compression for your backups as you will be able to take advantage of Protect's dedupe and compression for the disk based storage pool, something that's not available on your tape environment. If your looking for a way to further reduce your disk costs then potentially the benefits of Object Storage erasure coding might be worth looking at although for a 1 or 2 site scenario the overheads are pretty much the same if you use some variant of distributed raid or if you use erasure coding. Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: Re: [gpfsug-discuss] Snapshots for backups Date: Wed, May 9, 2018 10:28 PM Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? 
(obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. 
If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed May 9 14:31:36 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 13:31:36 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: <81738C1C-FAFC-416A-9937-B99E86809EE4@mdanderson.org> That is the use case for snapshots, taken at the remote site. Recovery from accidental deletion. ?On 5/9/18, 8:13 AM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Jonathan Buzzard" wrote: The way I see it is that snapshots are not backup. They are handy for quick recovery from file deletion mistakes. They are utterly useless when your disaster recovery is needed because for example all your NSD descriptors have been overwritten (not my mistake I hasten to add). AT that point your snapshots are for jack. The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. From MKEIGO at jp.ibm.com Wed May 9 14:36:37 2018 From: MKEIGO at jp.ibm.com (Keigo Matsubara) Date: Wed, 9 May 2018 22:36:37 +0900 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: Not sure if the topic is appropriate, but I know an installation case which employs IBM Spectrum Scale's snapshot function along with IBM Spectrum Protect to save the backup date onto LTO7 tape media. Both software components running on Linux on Power (RHEL 7.3 BE) if that matters. Of course, snapshots are taken per independent fileset. --- Keigo Matsubara, Storage Solutions Client Technical Specialist, IBM Japan TEL: +81-50-3150-0595, T/L: 6205-0595 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Wed May 9 14:37:43 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Wed, 9 May 2018 13:37:43 +0000 Subject: [gpfsug-discuss] mmlsnsd -m or -M Message-ID: <6f1760ea2d1244959d25763442ba96c0@SMXRF105.msg.hukrf.de> Hallo All, we experience some difficults in using mmlsnsd -m on 4.2.3.8 and 5.0.0.2. Are there any known bugs or changes happening here, that these function don?t does what it wants. The outputs are now for these suboption -m or -M the same!!??. Regards Renar Renar Grunenberg Abteilung Informatik ? 
Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 9 15:23:59 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 9 May 2018 14:23:59 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: <08326DC0-30CF-4A63-A111-1EDBDC19E3F0@bham.ac.uk> For DR, what about making your secondary site mostly an object store, use TCT to pre-migrate the data out and then use SOBAR to dump the catalogue. You then restore the SOBAR dump to the DR site and have pretty much instant most of your data available. You could do the DR with tape/pre-migration as well, it?s just slower. OFC with SOBAR, you are just restoring the data that is being accessed or you target to migrate back in. Equally Protect can also backup/migrate to an object pool (note you can?t currently migrate in the Protect sense from a TSM object pool to a TSM disk/tape pool). And put snapshots in at home for the instant ?need to restore a file?. If this is appropriate depends on what you agree your RPO to be. Scale/Protect for us allows us to recover data N months after the user deleted the file and didn?t notice. Simon From: on behalf of "jfosburg at mdanderson.org" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Wednesday, 9 May 2018 at 14:30 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups I agree with your points. The thought here, is that if we had a complete loss of the primary site, we could bring up the secondary in relatively short order (hours or days instead of weeks or months). Maybe this is true, and maybe this isn?t, though I do see (and have advocated for) a DR setup much like that. My concern is that the use of snapshots as a substitute for traditional backups for a Scale environment is that that is an inappropriate use of the technology, particularly when we have a tool designed for that and that works. Let me take a moment to reiterate something that may be getting lost. The snapshots will be taken against the remote copy and recovered from there. 
We will not be relying on the primary site for this function. We were starting to look at ESS as a destination for these backups. I have also considered that a multisite ICOS implementation might work to satisfy some of our general backup requirements. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, May 9, 2018 at 7:51 AM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups From my perspective the difference / benefits of using something like Protect and using backup policies over snapshot policies - even if its disk based rather than tape based, is that with a backup you get far better control over your Disaster Recovery process. The policy integration with Scale and Protect is very comprehensive. If the issue is Tape time for recovery - simply change from tape medium to a Disk storage pool as your repository for Protect, you get all the benefits of Spectrum Protect and the restore speeds of disk, (you might even - subject to type of data start to see some benefits of duplication and compression for your backups as you will be able to take advantage of Protect's dedupe and compression for the disk based storage pool, something that's not available on your tape environment. If your looking for a way to further reduce your disk costs then potentially the benefits of Object Storage erasure coding might be worth looking at although for a 1 or 2 site scenario the overheads are pretty much the same if you use some variant of distributed raid or if you use erasure coding. Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: Re: [gpfsug-discuss] Snapshots for backups Date: Wed, May 9, 2018 10:28 PM Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 
3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? (obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkr at lbl.gov Wed May 9 17:01:30 2018 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Wed, 9 May 2018 09:01:30 -0700 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: +1 for benefits of tape and also power consumption/heat production (may help a case to management) is obviously better with things that don?t have to be spinning all the time. > > At scale tape is a lot cheaper than disk. Also sorry your data is going > to take a couple of weeks to recover goes down a lot better than sorry > your data is gone for ever. > > Finally it's also hard for a hacker or disgruntled admin to wipe your > tapes in a short period of time. The robot don't go that fast. Your > disks/file systems on the other hand effectively be gone in seconds. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Wed May 9 20:01:55 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 9 May 2018 15:01:55 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org><1525871584.27337.200.camel@strath.ac.uk> Message-ID: I see there are also low-power / zero-power disk archive/arrays available. Any experience with those? From: Kristy Kallback-Rose To: gpfsug main discussion list Date: 05/09/2018 12:20 PM Subject: Re: [gpfsug-discuss] Snapshots for backups Sent by: gpfsug-discuss-bounces at spectrumscale.org +1 for benefits of tape and also power consumption/heat production (may help a case to management) is obviously better with things that don?t have to be spinning all the time. > > At scale tape is a lot cheaper than disk. Also sorry your data is going > to take a couple of weeks to recover goes down a lot better than sorry > your data is gone for ever. > > Finally it's also hard for a hacker or disgruntled admin to wipe your > tapes in a short period of time. The robot don't go that fast. Your > disks/file systems on the other hand effectively be gone in seconds. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Wed May 9 21:33:26 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Wed, 09 May 2018 16:33:26 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org><1525871584.27337.200.camel@strath.ac.uk> Message-ID: <31428.1525898006@turing-police.cc.vt.edu> On Wed, 09 May 2018 15:01:55 -0400, "Marc A Kaplan" said: > I see there are also low-power / zero-power disk archive/arrays available. > Any experience with those? The last time I looked at those (which was a few years ago) they were competitive with tape for power consumption, but not on cost per terabyte - it takes a lot less cable and hardware to hook up a dozen tape drives and a robot arm that can reach 10,000 volumes than it does to wire up 10,000 disks of which only 500 are actually spinning at any given time... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From skylar2 at uw.edu Wed May 9 21:46:45 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Wed, 9 May 2018 20:46:45 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <31428.1525898006@turing-police.cc.vt.edu> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> <31428.1525898006@turing-police.cc.vt.edu> Message-ID: <20180509204645.fy5js7kjxslihjjr@utumno.gs.washington.edu> On Wed, May 09, 2018 at 04:33:26PM -0400, valdis.kletnieks at vt.edu wrote: > On Wed, 09 May 2018 15:01:55 -0400, "Marc A Kaplan" said: > > > I see there are also low-power / zero-power disk archive/arrays available. > > Any experience with those? > > The last time I looked at those (which was a few years ago) they were competitive > with tape for power consumption, but not on cost per terabyte - it takes a lot less > cable and hardware to hook up a dozen tape drives and a robot arm that can > reach 10,000 volumes than it does to wire up 10,000 disks of which only 500 are > actually spinning at any given time... I also wonder what the lifespan of cold-storage hard drives are relative to tape. With BaFe universal for LTO now, our failure rate for tapes has gone way down (not that it was very high relative to HDDs anyways). FWIW, the operating+capital costs we recharge our grants for tape storage is ~50% of what we recharge them for bulk disk storage. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From daniel.kidger at uk.ibm.com Thu May 10 11:19:49 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Thu, 10 May 2018 10:19:49 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <4E0D4232-14FC-4229-BFBC-B61242473456@vanderbilt.edu> Message-ID: One additional point to consider is what happens on a hardware failure. eg. 
If you have two NSD servers that are both CES servers and one fails, then there is a double-failure at exactly the same point in time. Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 7 May 2018, at 16:39, Buterbaugh, Kevin L wrote: > > Hi All, > > I want to thank all of you who took the time to respond to this question ? your thoughts / suggestions are much appreciated. > > What I?m taking away from all of this is that it is OK to run CES on NSD servers as long as you are very careful in how you set things up. This would include: > > 1. Making sure you have enough CPU horsepower and using cgroups to limit how much CPU SMB and NFS can utilize. > 2. Making sure you have enough RAM ? 256 GB sounds like it should be ?enough? when using SMB. > 3. Making sure you have your network config properly set up. We would be able to provide three separate, dedicated 10 GbE links for GPFS daemon communication, GPFS multi-cluster link to our HPC cluster, and SMB / NFS communication. > 4. Making sure you have good monitoring of all of the above in place. > > Have I missed anything or does anyone have any additional thoughts? Thanks? > > Kevin > >> On May 4, 2018, at 11:26 AM, Sven Oehme wrote: >> >> there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. >> the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. >> >> sven >> >>> On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L wrote: >>> Hi All, >>> >>> In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. >>> >>> I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? >>> >>> To answer the question of why I would want to ? simple, server licenses. >>> >>> Thanks? >>> >>> Kevin >>> >>> ? 
>>> Kevin Buterbaugh - Senior System Administrator >>> Vanderbilt University - Advanced Computing Center for Research and Education >>> Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C6ec06d262ea84752b1d408d5b1dbe2cc%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610480314880560&sdata=J5%2F9X4dNeLrGKH%2BwmhIObVK%2BQ4oyoIa1vZ9F2yTU854%3D&reserved=0 > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Thu May 10 13:51:45 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Thu, 10 May 2018 15:51:45 +0300 Subject: [gpfsug-discuss] Node list error In-Reply-To: <342034e96e1f409b889b0e9aa4036098@jumptrading.com> References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> <342034e96e1f409b889b0e9aa4036098@jumptrading.com> Message-ID: Hi Just to verify - there is no Firewalld running or Selinux ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Bryan Banister To: gpfsug main discussion list Date: 05/08/2018 11:51 PM Subject: Re: [gpfsug-discuss] Node list error Sent by: gpfsug-discuss-bounces at spectrumscale.org What does `mmlsnodeclass -N ` give you? -B From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Node list error Note: External Email Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Thu May 10 14:37:05 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 10 May 2018 13:37:05 +0000 Subject: [gpfsug-discuss] Node list error In-Reply-To: References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> <342034e96e1f409b889b0e9aa4036098@jumptrading.com> Message-ID: Hi Yaron, Thanks for the response ? no firewalld nor SELinux. I went ahead and opened up a PMR and it turns out this is a known defect (at least in GPFS 5, I may have been the first to report it in GPFS 4.2.3.x) and IBM is working on a fix. Thanks? Kevin On May 10, 2018, at 7:51 AM, Yaron Daniel > wrote: Hi Just to verify - there is no Firewalld running or Selinux ? Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Bryan Banister > To: gpfsug main discussion list > Date: 05/08/2018 11:51 PM Subject: Re: [gpfsug-discuss] Node list error Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ What does `mmlsnodeclass -N ` give you? -B From:gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Node list error Note: External Email ________________________________ Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu- (615)875-9633 ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
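For anyone who hits the same message before a fix is released, the checks suggested in this thread boil down to a few quick commands (all are standard GPFS or RHEL commands; adapt the node list to your own cluster):

# Bryan's suggestion: confirm the node classes resolve for the nodes GPFS is complaining about
/usr/lpp/mmfs/bin/mmlsnodeclass -N all

# Yaron's suggestion: rule out firewalld and SELinux on the affected nodes
systemctl status firewalld
getenforce

# And the usual first look at overall cluster state
/usr/lpp/mmfs/bin/mmgetstate -a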
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C58826c68a116427f5c2d08d5b674e2b2%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636615535509439494&sdata=eB3wc4PtGINXs0pAA9GYowE6ERimMahPBWzejHuOexQ%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From JRLang at uwyo.edu Thu May 10 20:32:00 2018 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Thu, 10 May 2018 19:32:00 +0000 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? In-Reply-To: References: Message-ID: Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From luis.bolinches at fi.ibm.com Thu May 10 23:22:01 2018 From: luis.bolinches at fi.ibm.com (Luis Bolinches) Date: Fri, 11 May 2018 00:22:01 +0200 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? In-Reply-To: References: Message-ID: https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point Thanks -- Yst?v?llisin terveisin / Kind regards / Saludos cordiales / Salutations Luis Bolinches Consultant IT Specialist Mobile Phone: +358503112585 https://www.youracclaim.com/user/luis-bolinches "If you always give you will always have" -- Anonymous From: "Jeffrey R. Lang" To: gpfsug main discussion list Date: 11/05/2018 00:05 Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? 
Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Ellei edell? ole toisin mainittu: / Unless stated otherwise above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Fri May 11 04:32:42 2018 From: knop at us.ibm.com (Felipe Knop) Date: Thu, 10 May 2018 23:32:42 -0400 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x orabove? In-Reply-To: References: Message-ID: Luis, Correct. Jeff: The Spectrum Scale team has been actively working on the support for RHEL 7.5 . Since code changes will be required, the support will require upcoming 4.2.3 and 5.0 PTFs. The FAQ will be updated when support for 7.5 becomes available. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Luis Bolinches To: gpfsug main discussion list Date: 05/10/2018 06:22 PM Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point Thanks -- Yst?v?llisin terveisin / Kind regards / Saludos cordiales / Salutations Luis Bolinches Consultant IT Specialist Mobile Phone: +358503112585 https://www.youracclaim.com/user/luis-bolinches "If you always give you will always have" -- Anonymous From: "Jeffrey R. Lang" To: gpfsug main discussion list Date: 11/05/2018 00:05 Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? 
Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Ellei edell? ole toisin mainittu: / Unless stated otherwise above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From bbanister at jumptrading.com Fri May 11 17:25:06 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 11 May 2018 16:25:06 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out Message-ID: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> It's on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Fri May 11 18:11:12 2018 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Fri, 11 May 2018 17:11:12 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> Message-ID: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> I'd normally be excited by this, since we do aggressively apply GPFS upgrades. But it's worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you're also in the habit of aggressively upgrading RedHat then you're going to have to wait for 5.0.1-1 before you can resume that practice. From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It's on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 11 18:56:49 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 11 May 2018 17:56:49 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> Message-ID: On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network corruption of file data that the client reads from or writes to the NSD server. For more information, see the nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. Finally! Thanks, IBM (seriously)? Kevin On May 11, 2018, at 12:11 PM, Sanchez, Paul > wrote: I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. 
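On the kernel point above: until a PTF that supports the newer kernel ships, one way to keep aggressively patching everything except the kernel is sketched below. This is only an illustration; the kernel version shown is just an example of a late 7.4-series kernel, and appending to /etc/yum.conf assumes [main] is the only section in that file.

# Keep booting a kernel that Scale still builds against
grubby --set-default=/boot/vmlinuz-3.10.0-693.21.1.el7.x86_64   # example 7.4-series kernel

# Stop yum from pulling in a newer, not-yet-supported kernel for now
echo "exclude=kernel*" >> /etc/yum.conf

# Once the fixed PTF is out, drop the exclude (or run yum with --disableexcludes=main)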
From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It?s on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Fri May 11 19:34:30 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 11 May 2018 18:34:30 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum Message-ID: <30E7142C-3D77-4A97-834B-D54FFF06564B@nuance.com> Ah be careful! looking at the man page for mmchconfig ?nsdCksumTraditional: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adm_mmchconfig.htm * Enabling this feature can result in significant I/O performance degradation and a considerable increase in CPU usage. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: on behalf of "Buterbaugh, Kevin L" Reply-To: gpfsug main discussion list Date: Friday, May 11, 2018 at 1:29 PM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network corruption of file data that the client reads from or writes to the NSD server. For more information, see the nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. Finally! Thanks, IBM (seriously)? Kevin On May 11, 2018, at 12:11 PM, Sanchez, Paul > wrote: I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. 
From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It?s on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri May 11 20:02:30 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 11 May 2018 19:02:30 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum In-Reply-To: <30E7142C-3D77-4A97-834B-D54FFF06564B@nuance.com> Message-ID: >From some graphs I have seen the overhead varies a lot depending on the I/O size and if read or write and if random IO or not. So definitely YMMV. Remember too that ESS uses powerful processors in order to do the erasure coding and hence has performance to do checksums too. Traditionally ordinary NSD servers are merely ?routers? and as such are often using low spec cpus which may not be fast enough for the extra load? Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales + 44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 11 May 2018, at 19:34, Oesterlin, Robert wrote: > > Ah be careful! looking at the man page for mmchconfig ?nsdCksumTraditional: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adm_mmchconfig.htm > > Enabling this feature can result in significant I/O performance degradation and a considerable increase in CPU usage. > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > > > From: on behalf of "Buterbaugh, Kevin L" > Reply-To: gpfsug main discussion list > Date: Friday, May 11, 2018 at 1:29 PM > To: gpfsug main discussion list > Subject: [EXTERNAL] Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out > > On the other hand, we are very excited by this (from the README): > File systems: Traditional NSD nodes and servers can use checksums > > NSD clients and servers that are configured with IBM Spectrum Scale can use checksums > > to verify data integrity and detect network corruption of file data that the client > > reads from or writes to the NSD server. 
For more information, see the > > nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. > > Finally! Thanks, IBM (seriously)? > > Kevin > > > On May 11, 2018, at 12:11 PM, Sanchez, Paul wrote: > > I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. > > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Bryan Banister > Sent: Friday, May 11, 2018 12:25 PM > To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out > > It?s on fix central, https://www-945.ibm.com/support/fixcentral > > Cheers, > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Fri May 11 20:35:40 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Fri, 11 May 2018 15:35:40 -0400 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum In-Reply-To: References: Message-ID: <112843.1526067340@turing-police.cc.vt.edu> On Fri, 11 May 2018 19:02:30 -0000, "Daniel Kidger" said: > Remember too that ESS uses powerful processors in order to do the erasure > coding and hence has performance to do checksums too. Traditionally ordinary > NSD servers are merely ???routers??? and as such are often using low spec cpus > which may not be fast enough for the extra load? More to the point - if you're at all clever, you can do the erasure encoding in such a way that a perfectly usable checksum just drops out the bottom free of charge, so no additional performance is needed to checksum stuff.... -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From jonathan at buzzard.me.uk Fri May 11 21:38:03 2018 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 11 May 2018 21:38:03 +0100 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> Message-ID: <7a6eeed3-134f-620a-b49b-ed79ade90733@buzzard.me.uk> On 11/05/18 18:11, Sanchez, Paul wrote: > I?d normally be excited by this, since we do aggressively apply GPFS > upgrades.? But it?s worth noting that no released version of Scale works > with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re > also in the habit of aggressively upgrading RedHat then you?re going to > have to wait for 5.0.1-1 before you can resume that practice. > You can upgrade to RHEL 7.5 and then just boot the last of the 7.4 kernels. I have done that in the past with early RHEL 5. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From goncalves.erika at gene.com Fri May 11 22:55:42 2018 From: goncalves.erika at gene.com (Erika Goncalves) Date: Fri, 11 May 2018 14:55:42 -0700 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: I'm new on the Forum (hello to everyone!!) Quick question related to Chetan mail, How is the procedure when you have more than one domain? Make sure NFSv4 ID Mapping value matches on client and server. On server side (i.e. CES nodes); you can set as below: $ mmnfs config change IDMAPD_DOMAIN=test.com On client side (e.g. RHEL NFS client); one can set it using Domain attribute in /etc/idmapd.conf file. $ egrep ^Domain /etc/idmapd.conf Domain = test.com [root at rh73node2 2018_05_07-13:31:11 ~]$ $ service nfs-idmap restart It is possible to configure the IDMAPD_DOMAIN to support more than one? Thanks! -- *E**rika Goncalves* SSF Agile Operations Global IT Infrastructure & Solutions (GIS) Genentech - A member of the Roche Group +1 (650) 529 5458 goncalves.erika at gene.com *Confidentiality Note: *This message is intended only for the use of the named recipient(s) and may contain confidential and/or proprietary information. If you are not the intended recipient, please contact the sender and delete this message. Any unauthorized use of the information contained in this message is prohibited. On Mon, May 7, 2018 at 1:08 AM, Chetan R Kulkarni wrote: > Make sure NFSv4 ID Mapping value matches on client and server. > > On server side (i.e. CES nodes); you can set as below: > > $ mmnfs config change IDMAPD_DOMAIN=test.com > > On client side (e.g. RHEL NFS client); one can set it using Domain > attribute in /etc/idmapd.conf file. > > $ egrep ^Domain /etc/idmapd.conf > Domain = test.com > [root at rh73node2 2018_05_07-13:31:11 ~]$ > $ service nfs-idmap restart > > Please refer following link for the details: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0. > 0/com.ibm.spectrum.scale.v5r00.doc/b1ladm_authconsidfornfsv4access.htm > > Thanks, > Chetan. > > [image: Inactive hide details for "Yaron Daniel" ---05/07/2018 10:46:32 > AM---Hi If you want to use NFSv3 , define only NFSv3 on the exp]"Yaron > Daniel" ---05/07/2018 10:46:32 AM---Hi If you want to use NFSv3 , define > only NFSv3 on the export. 
> > From: "Yaron Daniel" > To: gpfsug main discussion list > Date: 05/07/2018 10:46 AM > > Subject: Re: [gpfsug-discuss] CES NFS export > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hi > > If you want to use NFSv3 , define only NFSv3 on the export. > In case you work with NFSv4 - you should have "DOMAIN\user" all the way - > so this way you will not get any user mismatch errors, and see permissions > like nobody. > > > > Regards > ------------------------------ > > *Yaron Daniel* 94 Em Ha'Moshavot Rd > *Storage Architect* Petach Tiqva, 49527 > *IBM Global Markets, Systems HW Sales* Israel > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > *IBM Israel* > > [image: IBM Storage Strategy and Solutions v1][image: IBM Storage > Management and Data Protection v1] [image: Related image] > > > > From: Jagga Soorma > To: gpfsug-discuss at spectrumscale.org > Date: 05/07/2018 06:05 AM > Subject: Re: [gpfsug-discuss] CES NFS export > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Looks like this is due to nfs v4 and idmapd domain not being > configured correctly. I am going to test further and reach out if > more assistance is needed. > > Thanks! > > On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > > Hi Guys, > > > > We are new to gpfs and have a few client that will be mounting gpfs > > via nfs. We have configured the exports but all user/group > > permissions are showing up as nobody. The gateway/protocol nodes can > > query the uid/gid's via centrify without any issues as well as the > > clients and the perms look good on a client that natively accesses the > > gpfs filesystem. Is there some specific config that we might be > > missing? 
> > > > -- > > # mmnfs export list --nfsdefs /gpfs/datafs1 > > Path Delegations Clients > > Access_Type Protocols Transports Squash Anonymous_uid > > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > > NFS_Commit > > ------------------------------------------------------------ > ------------------------------------------------------------ > ------------------------------------------------------------ > ----------------------- > > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > > ROOT_SQUASH -2 -2 SYS FALSE NONE > > TRUE FALSE > > /gpfs/datafs1 NONE {nodenames} RW 3,4 > > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > > NONE TRUE FALSE > > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > > ROOT_SQUASH -2 -2 SYS FALSE > > NONE TRUE FALSE > > -- > > > > On the nfs clients I see this though: > > > > -- > > # ls -l > > total 0 > > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > > -- > > > > Here is our mmnfs config: > > > > -- > > # mmnfs config list > > > > NFS Ganesha Configuration: > > ========================== > > NFS_PROTOCOLS: 3,4 > > NFS_PORT: 2049 > > MNT_PORT: 0 > > NLM_PORT: 0 > > RQUOTA_PORT: 0 > > NB_WORKER: 256 > > LEASE_LIFETIME: 60 > > DOMAINNAME: VIRTUAL1.COM > > DELEGATIONS: Disabled > > ========================== > > > > STATD Configuration > > ========================== > > STATD_PORT: 0 > > ========================== > > > > CacheInode Configuration > > ========================== > > ENTRIES_HWMARK: 1500000 > > ========================== > > > > Export Defaults > > ========================== > > ACCESS_TYPE: NONE > > PROTOCOLS: 3,4 > > TRANSPORTS: TCP > > ANONYMOUS_UID: -2 > > ANONYMOUS_GID: -2 > > SECTYPE: SYS > > PRIVILEGEDPORT: FALSE > > MANAGE_GIDS: TRUE > > SQUASH: ROOT_SQUASH > > NFS_COMMIT: FALSE > > ========================== > > > > Log Configuration > > ========================== > > LOG_LEVEL: EVENT > > ========================== > > > > Idmapd Configuration > > ========================== > > LOCAL-REALMS: LOCALDOMAIN > > DOMAIN: LOCALDOMAIN > > ========================== > > -- > > > > Thanks! > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss* > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug. > org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_ > iaSHvJObTbx-siA1ZOg&r=uic-29lyJ5TCiTRi0FyznYhKJx5I7Vzu80WyYuZ4_iM&m= > 3k9qWcL7UfySpNVW2J8S1XsIekUHTHBBYQhN7cPVg3Q&s=844KFrfpsN6nT- > DKV6HdfS8EEejdwHuQxbNR8cX2cyc&e= > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15633834.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15884206.jpg Type: image/jpeg Size: 11294 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 15750750.gif Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15967392.gif Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15858665.gif Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15657152.gif Type: image/gif Size: 4376 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Mon May 14 11:09:10 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Mon, 14 May 2018 10:09:10 +0000 Subject: [gpfsug-discuss] SMB quotas query Message-ID: Hi all, I want to run this past the group to see if I?m going mad or not. We do have an open PMR about the issue which is currently being escalated. We have 400 independent filesets all linked to a path in the filesystem. The root of that path is then exported via SMB, e.g.: Fileset1: /gpfs/rootsmb/fileset1 Fileset2: /gpfs/rootsmb/fileset2 The CES export is /gpfs/rootsmb and the name of the share is (for example) ?share?. All our filesets have block quotas applied to them with the hard and soft limit being the same. Customers then map drives to these filesets using the following path: \\ces-cluster\share\fileset1 \\ces-cluster\share\fileset2 ?fileset400 Some customers have one drive mapping only, others have two or more. For the customers that map two or more drives, the quota that Windows reports is identical for each fileset, and is usually for the last fileset that gets mapped. I do not believe this has always been the case: our customers have only recently (since the New Year at least) started complaining in the three+ years we?ve been running GPFS. In my test cluster I?ve tried rolling back to 4.2.3-2 which we were running last Summer and I can easily reproduce the problem. So a couple of questions: 1. Am I right to think that since GPFS is actually exposing the quota of a fileset over SMB then each fileset mapped as a drive in the manner above *should* each report the correct quota? 2. Does anyone else see the same behaviour? 3. There is suspicion this could be recent changes from a Microsoft Update and I?m not ruling that out just yet. Ok so that?s not a question ? I am worried that IBM may tell us we?re doing it wrong (humm) and to create individual exports for each fileset but this will quickly become tiresome! Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From z.han at imperial.ac.uk Mon May 14 11:33:07 2018 From: z.han at imperial.ac.uk (z.han at imperial.ac.uk) Date: Mon, 14 May 2018 11:33:07 +0100 (BST) Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Message-ID: Dear All, Any one has the same problem? /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? 
-ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); ^ ...... From jonathan.buzzard at strath.ac.uk Mon May 14 11:44:51 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 14 May 2018 11:44:51 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: Message-ID: <1526294691.17680.18.camel@strath.ac.uk> On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From spectrumscale at kiranghag.com Mon May 14 11:56:37 2018 From: spectrumscale at kiranghag.com (KG) Date: Mon, 14 May 2018 16:26:37 +0530 Subject: [gpfsug-discuss] pool-metadata_high_error Message-ID: Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rohwedder at de.ibm.com Mon May 14 12:18:55 2018 From: rohwedder at de.ibm.com (Markus Rohwedder) Date: Mon, 14 May 2018 13:18:55 +0200 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: Hello, the pool metadata high error reports issues with the free blocks in the metadataOnly and/or dataAndMetadata NSDs in the system pool. mmlspool and subsequently the GPFSPool sensor is the source of the information that is used be the threshold that reports this error. So please compare with mmlspool and mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1 Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " Mit freundlichen Gr??en / Kind regards Dr. Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1A908817.gif Type: image/gif Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stockf at us.ibm.com Mon May 14 12:28:58 2018 From: stockf at us.ibm.com (Frederick Stock) Date: Mon, 14 May 2018 07:28:58 -0400 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: The difference in your inode information is presumably because the fileset you reference is an independent fileset and it has its own inode space distinct from the indoe space used for the "root" fileset (file system). 
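For a side-by-side view of the two inode spaces described here (gpfs0 stands in for the real device name; mmlsfileset -i has to count used inodes, so it can take a while on a large file system):

  # per-fileset numbers; -L shows maximum and allocated inodes, -i adds used inodes
  mmlsfileset gpfs0 -L -i

  # file-system-wide inode numbers for comparison
  mmdf gpfs0 -F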
Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com From: "Markus Rohwedder" To: gpfsug main discussion list Date: 05/14/2018 07:19 AM Subject: Re: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, the pool metadata high error reports issues with the free blocks in the metadataOnly and/or dataAndMetadata NSDs in the system pool. mmlspool and subsequently the GPFSPool sensor is the source of the information that is used be the threshold that reports this error. So please compare with mmlspool and mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1 Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " Mit freundlichen Gr??en / Kind regards Dr. Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany KG ---14.05.2018 12:57:33---Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From arc at b4restore.com Mon May 14 12:10:18 2018 From: arc at b4restore.com (Andi Rhod Christiansen) Date: Mon, 14 May 2018 11:10:18 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: References: Message-ID: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Hi, Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 and latest support is 7.4. You have to revert back to 3.10.0-693 ? I just had the same issue Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. Best regards Andi R. Christiansen -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 12:33 Til: gpfsug main discussion list Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Dear All, Any one has the same problem? /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); ^ ...... From spectrumscale at kiranghag.com Mon May 14 12:35:47 2018 From: spectrumscale at kiranghag.com (KG) Date: Mon, 14 May 2018 17:05:47 +0530 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder wrote: > Once inodes are allocated I am not aware of a method to de-allocate them. > This is what the Knowledge Center says: > > *"Inodes are allocated when they are used. When a file is deleted, the > inode is reused, but inodes are never deallocated. When setting the maximum > number of inodes in a file system, there is the option to preallocate > inodes. However, in most cases there is no need to preallocate inodes > because, by default, inodes are allocated in sets as needed. 
If you do > decide to preallocate inodes, be careful not to preallocate more inodes > than will be used; otherwise, the allocated inodes will unnecessarily > consume metadata space that cannot be reclaimed. "* > > > I believe the Maximum number of inodes cannot be reduced but allocated number of inodes can be. Not sure why the GUI isnt allowing to reduce it. ? > > From: KG > To: gpfsug main discussion list > Date: 14.05.2018 12:57 > Subject: [gpfsug-discuss] pool-metadata_high_error > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hi Folks > > IHAC who is reporting pool-metadata_high_error on GUI. > > The inode utilisation on filesystem is as below > Used inodes - 92922895 > free inodes - 1684812529 > allocated - 1777735424 > max inodes - 1911363520 > > the inode utilization on one fileset (it is only one being used) is below > Used inodes - 93252664 > allocated - 1776624128 > max inodes 1876624064 > > is this because the difference in allocated and max inodes is very less? > > Customer tried reducing allocated inodes on fileset (between max and used > inode) and GUI complains that it is out of range. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 26124 bytes Desc: not available URL: From rohwedder at de.ibm.com Mon May 14 12:50:49 2018 From: rohwedder at de.ibm.com (Markus Rohwedder) Date: Mon, 14 May 2018 13:50:49 +0200 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: Hi, The GUI behavior is correct. You can reduce the maximum number of inodes of an inode space, but not below the allocated inodes level. See below: # Setting inode levels to 300000 max/ 200000 preallocated [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:200000 Set maxInodes for inode space 0 to 300000 Fileset root changed. # The actually allocated values may be sloightly different: [root at cache-11 ~]# mmlsfileset gpfs0 -L Filesets in file system 'gpfs0': Name Id RootInode ParentId Created InodeSpace MaxInodes AllocInodes Comment root 0 3 -- Mon Feb 26 11:34:06 2018 0 300000 200032 root fileset # Lowering the allocated values is not allowed [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:150000 The number of inodes to preallocate cannot be lower than the 200032 inodes already allocated. Input parameter value for inode limit out of range. mmchfileset: Command failed. Examine previous error messages to determine cause. # However, you can change the max inodes up to the allocated value [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 200032:200032 Set maxInodes for inode space 0 to 200032 Fileset root changed. [root at cache-11 ~]# mmlsfileset gpfs0 -L Filesets in file system 'gpfs0': Name Id RootInode ParentId Created InodeSpace MaxInodes AllocInodes Comment root 0 3 -- Mon Feb 26 11:34:06 2018 0 200032 200032 root fileset Mit freundlichen Gr??en / Kind regards Dr. 
Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany From: KG To: gpfsug main discussion list Date: 14.05.2018 13:37 Subject: Re: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder wrote: Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " I believe the Maximum number of inodes cannot be reduced but allocated number of inodes can be. Not sure why the GUI isnt allowing to reduce it. ? From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 18426749.gif Type: image/gif Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 18361734.gif Type: image/gif Size: 26124 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Mon May 14 12:54:17 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Mon, 14 May 2018 11:54:17 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526294691.17680.18.camel@strath.ac.uk> References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: Thanks Jonathan. What I failed to mention in my OP was that MacOS clients DO report the correct size of each mounted folder. 
Not sure how that changes anything except to reinforce the idea that it's Windows at fault. Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 14 May 2018 11:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From z.han at imperial.ac.uk Mon May 14 12:59:25 2018 From: z.han at imperial.ac.uk (z.han at imperial.ac.uk) Date: Mon, 14 May 2018 12:59:25 +0100 (BST) Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Message-ID: Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? 
> > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From arc at b4restore.com Mon May 14 13:13:21 2018 From: arc at b4restore.com (Andi Rhod Christiansen) Date: Mon, 14 May 2018 12:13:21 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Message-ID: <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" Best regards. -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 13:59 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... 
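In the meantime, one way to hold a node on the last supported kernel until the PTF is available is sketched below; the kernel version, menu title and grub paths are examples only (UEFI systems use /etc/grub2-efi.cfg), and versionlock needs the yum-plugin-versionlock package:

  # list installed kernels and make the 693-level entry the default boot entry
  awk -F\' '/^menuentry/ {print $2}' /etc/grub2.cfg
  grub2-set-default 'Red Hat Enterprise Linux Server (3.10.0-693.21.1.el7.x86_64) 7.4 (Maipo)'

  # stop yum from pulling the 862 kernel back in, either with
  #   exclude=kernel* kernel-devel* kernel-headers*
  # in /etc/yum.conf, or with the versionlock plugin:
  yum versionlock add kernel-3.10.0-693* kernel-devel-3.10.0-693* kernel-headers-3.10.0-693*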
On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af > z.han at imperial.ac.uk > Sendt: 14. maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From jonathan.buzzard at strath.ac.uk Mon May 14 13:19:43 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 14 May 2018 13:19:43 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: <1526300383.17680.20.camel@strath.ac.uk> On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. 
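For anyone wanting to try that, a minimal sketch follows; the file names are arbitrary and the numbers handed back simply mirror df, so the share keeps behaving normally while the log fills up:

  # smb.conf, on the share being tested:
  #   [share]
  #       dfree command = /usr/local/bin/dfree.sh

  #!/bin/sh
  # /usr/local/bin/dfree.sh: log what Samba passes in, then answer with df's numbers.
  echo "$(date '+%F %T') pid=$$ cwd=$(pwd) args=$*" >> /tmp/dfree.log
  # Samba expects "<total blocks> <free blocks> [blocksize]" on stdout
  df -P -k "$1" | awk 'NR==2 {print $2, $4, 1024}'

Comparing the log entries produced by a macOS client and a Windows client mapping two different filesets should show quickly whether the two are asking different questions or getting different answers.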
-- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From knop at us.ibm.com Mon May 14 14:30:41 2018 From: knop at us.ibm.com (Felipe Knop) Date: Mon, 14 May 2018 09:30:41 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: All, Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Andi Rhod Christiansen To: gpfsug main discussion list Date: 05/14/2018 08:15 AM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" Best regards. -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 13:59 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af > z.han at imperial.ac.uk > Sendt: 14. 
maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From bbanister at jumptrading.com Mon May 14 21:29:02 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 14 May 2018 20:29:02 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas Message-ID: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Hi all, I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? Can't find anything in man pages, thanks! -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
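The replies that follow point at mmedquota -d for exactly this case; a minimal invocation, with jdoe and gpfs0 as placeholders, looks like:

  # drop the explicit entry so the user falls back to the default quota
  mmedquota -d -u jdoe
  # then confirm what the user now gets
  mmlsquota -u jdoe gpfs0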
-------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Mon May 14 22:26:44 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Tue, 15 May 2018 00:26:44 +0300 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526300383.17680.20.camel@strath.ac.uk> References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi What is the output of mmlsfs - does you have --filesetdf enabled ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jonathan Buzzard To: gpfsug main discussion list Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
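For reference, checking and changing that flag is one command each (gpfs0 is a placeholder; whether it changes what SMB clients see is exactly what the rest of the thread goes on to debate, since the SMB side is handled by vfs_gpfs rather than by df):

  mmlsfs gpfs0 --filesetdf
  mmchfs gpfs0 --filesetdf yes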
Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From peserocka at gmail.com Mon May 14 22:51:36 2018 From: peserocka at gmail.com (Peter Serocka) Date: Mon, 14 May 2018 23:51:36 +0200 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Message-ID: <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From kywang at us.ibm.com Mon May 14 23:12:48 2018 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Mon, 14 May 2018 18:12:48 -0400 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Message-ID: Try disabling and re-enabling default quotas withthe -d option for that fileset. mmdefquotaon command Activates default quota limit usage. Synopsis mmdefquotaon [?u] [?g] [?j] [?v] [?d] {Device [Device... ] | ?a} or mmdefquotaon [?u] [?g] [?v] [?d] {Device:Fileset ... | ?a} ... ?d Assigns default quota limits to existing users, groups, or filesets when the mmdefedquota command is issued. When ??perfileset?quota is not in effect for the file system, this option will only affect existing users, groups, or filesets with no established quota limits. When ??perfileset?quota is in effect for the file system, this option will affect existing users, groups, or filesets with no established quota limits, and it will also change existing users or groups that refer to default quotas at the file system level into users or groups that refer to fileset?level default quota. For more information about default quota priorities, see the following IBM Spectrum Scale: Administration and Programming Reference topic: Default quotas. 
If this option is not chosen, existing quota entries remain in effect and are not governed by the default quota rules. Kuei-Yu Wang-Knop IBM Scalable I/O development From: Bryan Banister To: "gpfsug main discussion list (gpfsug-discuss at spectrumscale.org)" Date: 05/14/2018 04:29 PM Subject: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? Can?t find anything in man pages, thanks! -Bryan Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From christof.schmitt at us.ibm.com Mon May 14 23:17:45 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Mon, 14 May 2018 22:17:45 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: , <1526294691.17680.18.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Tue May 15 06:59:38 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Tue, 15 May 2018 05:59:38 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> Message-ID: <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
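Before digging further, a couple of standard rpm queries confirm which pyOpenSSL build is actually installed, which package owns the module the RHN plugin imports (the SSL.py path appears in the traceback below), and what depends on it:

  rpm -q pyOpenSSL
  rpm -qf /usr/lib/python2.7/site-packages/OpenSSL/SSL.py
  rpm -q --whatrequires pyOpenSSL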
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Tue May 15 08:10:32 2018 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Tue, 15 May 2018 09:10:32 +0200 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Message-ID: An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Tue May 15 09:10:21 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Tue, 15 May 2018 08:10:21 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi Yaron It's currently set to no. Thanks Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Yaron Daniel Sent: 14 May 2018 22:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Hi What is the output of mmlsfs - does you have --filesetdfenabled ? Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:image001.gif at 01D3EC2C.8ACE5310] Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel [IBM Storage Strategy and Solutions v1][IBM Storage Management and Data Protection v1][cid:image004.gif at 01D3EC2C.8ACE5310][cid:image005.gif at 01D3EC2C.8ACE5310] [Related image] From: Jonathan Buzzard > To: gpfsug main discussion list > Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. 
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 1851 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 4376 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 5093 bytes Desc: image003.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.gif Type: image/gif Size: 4746 bytes Desc: image004.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.gif Type: image/gif Size: 4557 bytes Desc: image005.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 11294 bytes Desc: image006.jpg URL: From YARD at il.ibm.com Tue May 15 11:10:45 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Tue, 15 May 2018 13:10:45 +0300 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi So - u want to get quota report per fileset quota - right ? We use this param when we want to monitor the NFS exports with df , i think this should also affect the SMB filesets. Can u try to enable it and see if it works ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: "Sobey, Richard A" To: gpfsug main discussion list Date: 05/15/2018 11:11 AM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Yaron It?s currently set to no. Thanks Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Yaron Daniel Sent: 14 May 2018 22:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Hi What is the output of mmlsfs - does you have --filesetdfenabled ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jonathan Buzzard To: gpfsug main discussion list Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. 
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From jonathan.buzzard at strath.ac.uk  Tue May 15 11:23:49 2018
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 15 May 2018 11:23:49 +0100
Subject: [gpfsug-discuss] SMB quotas query
In-Reply-To: 
References: <1526294691.17680.18.camel@strath.ac.uk>
 <1526300383.17680.20.camel@strath.ac.uk>
Message-ID: <1526379829.17680.27.camel@strath.ac.uk>

On Tue, 2018-05-15 at 13:10 +0300, Yaron Daniel wrote:
> Hi
> 
> So - u want to get quota report per fileset quota - right ?
> We use this param when we want to monitor the NFS exports with df , i
> think this should also affect the SMB filesets.
> 
> Can u try to enable it and see if it works ?
> 

It is irrelevant to Samba, this is or should be handled in vfs_gpfs as
Christof said earlier.

JAB.

-- 
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
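
A minimal sketch of the dfree-logging idea suggested earlier in this thread, for anyone who wants to compare what macOS and Windows clients actually request. The share name, paths and log file are placeholders, and it assumes Samba's documented contract for the "dfree command" option (the helper prints total and available space in 1K blocks for the directory being queried):

[gpfsshare]
    path = /gpfs/fs0/share
    vfs objects = gpfs
    dfree command = /usr/local/bin/dfree_debug.sh

# /usr/local/bin/dfree_debug.sh
#!/bin/sh
# Log whatever Samba hands us (arguments and working directory), then
# answer with "total available" in 1K blocks so clients still get a df result.
echo "$(date '+%F %T') cwd=$(pwd) args=$*" >> /var/tmp/dfree_debug.log
df -kP . | awk 'NR==2 { print $2, $4 }'

Differing log entries for macOS and Windows clients would point at the client side; identical entries would point back at vfs_gpfs, where (in builds that support it) the gpfs:dfreequota = yes share option is the intended way to have quota limits reported as the disk size, so the script above is only a debugging aid.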
From jonathan.buzzard at strath.ac.uk  Tue May 15 11:28:00 2018
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 15 May 2018 11:28:00 +0100
Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7
In-Reply-To: 
References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com>
 <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com>
Message-ID: <1526380080.17680.29.camel@strath.ac.uk>

On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote:
> All,
> 
> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is
> planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are
> needed in Scale to support this kernel level, upgrading to one of
> those upcoming PTFs will be required in order to run with that
> kernel.
> 

One wonders what the mmfs26/mmfslinux does that you can't achieve with
fuse these days? Sure I understand back in the day fuse didn't exist
and it could be a significant rewrite of code to use fuse instead. On
the plus side though it would make all these sorts of security issues,
can't upgrade your distro etc. disappear.

JAB.

-- 
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From valdis.kletnieks at vt.edu  Tue May 15 13:51:07 2018
From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu)
Date: Tue, 15 May 2018 08:51:07 -0400
Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7
In-Reply-To: <1526380080.17680.29.camel@strath.ac.uk>
References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com>
 <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com>
 <1526380080.17680.29.camel@strath.ac.uk>
Message-ID: <201401.1526388667@turing-police.cc.vt.edu>

On Tue, 15 May 2018 11:28:00 +0100, Jonathan Buzzard said:
> One wonders what the mmfs26/mmfslinux does that you can't achieve with
> fuse these days?

Handling each disk I/O request without several transitions to/from
userspace comes to mind...

From ulmer at ulmer.org  Tue May 15 16:09:01 2018
From: ulmer at ulmer.org (Stephen Ulmer)
Date: Tue, 15 May 2018 10:09:01 -0500
Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7
In-Reply-To: <1526380080.17680.29.camel@strath.ac.uk>
References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com>
 <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com>
 <1526380080.17680.29.camel@strath.ac.uk>
Message-ID: <26DF1F4F-BC66-40C8-89F1-3A64E94CE5B4@ulmer.org>

> On May 15, 2018, at 5:28 AM, Jonathan Buzzard wrote:
> 
> On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote:
>> All,
>> 
>> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is
>> planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are
>> needed in Scale to support this kernel level, upgrading to one of
>> those upcoming PTFs will be required in order to run with that
>> kernel.
>> 
> 
> One wonders what the mmfs26/mmfslinux does that you can't achieve with
> fuse these days? Sure I understand back in the day fuse didn't exist
> and it could be a significant rewrite of code to use fuse instead.
On > the plus side though it would make all these sorts of security issues, > can't upgrade your distro etc. disappear. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > More lines of code. More code is bad. :) Liberty, -- Stephen From bbanister at jumptrading.com Tue May 15 16:35:51 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 15:35:51 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> Message-ID: <723293fee7214938ae20cdfdbaf99149@jumptrading.com> That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From bbanister at jumptrading.com  Tue May 15 16:59:56 2018
From: bbanister at jumptrading.com (Bryan Banister)
Date: Tue, 15 May 2018 15:59:56 +0000
Subject: [gpfsug-discuss] How to clear explicitly set quotas
In-Reply-To: <723293fee7214938ae20cdfdbaf99149@jumptrading.com>
References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com>
 <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com>
 <723293fee7214938ae20cdfdbaf99149@jumptrading.com>
Message-ID: <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com>

Unfortunately it doesn't look like there is a way to target a specific
quota. So for a cluster with many file systems and/or many filesets in
each file system, clearing the quota entries affects all quotas in all
file systems and all filesets. This means that you have to clear them
all and then reapply the explicit quotas that you need to keep.

# mmedquota -h
Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... }

Maybe RFE time, or am I missing some other existing solution?
-Bryan
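
Until a single quota entry can be cleared on its own, one way to limit the blast radius of the clear-everything behaviour described above is to record the explicit entries first and replay the ones that should survive. This is only a rough sketch - the fileset and user names are placeholders, and the column layout of the -Y output should be checked on your own release before relying on it:

# 1. Capture the current per-user quota entries in machine-readable form
mmrepquota -u -Y fpi_test02 > /var/tmp/fpi_test02_user_quotas.$(date +%Y%m%d)

# 2. Clear the explicitly set entries so the fileset default applies again
#    (as noted above, this acts on the user across all file systems and filesets)
mmedquota -d -u someuser

# 3. Re-apply the explicit limits that were meant to stay, e.g.
mmsetquota fpi_test02:some_fileset --user someuser --block 10T:12T --files 1M:1M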
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Tue May 15 16:13:15 2018 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Tue, 15 May 2018 15:13:15 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> I know these dates can move, but any vague idea of a timeframe target for release (this quarter, next quarter, etc.)? Thanks! -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' > On May 14, 2018, at 9:30 AM, Felipe Knop wrote: > > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that > > From: Andi Rhod Christiansen > To: gpfsug main discussion list > Date: 05/14/2018 08:15 AM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > You are welcome. 
> > I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. > > they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" > > Best regards. > > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 13:59 > Til: gpfsug main discussion list > Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh > > > https://access.redhat.com/errata/RHSA-2018:1318 > > Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) > > Kernel: error in exception handling leads to DoS (CVE-2018-8897) > Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) > > kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) > > ... > > > On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > > Date: Mon, 14 May 2018 11:10:18 +0000 > > From: Andi Rhod Christiansen > > Reply-To: gpfsug main discussion list > > > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Hi, > > > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > > > I just had the same issue > > > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > > > > Best regards > > Andi R. Christiansen > > > > -----Oprindelig meddelelse----- > > Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af > > z.han at imperial.ac.uk > > Sendt: 14. maj 2018 12:33 > > Til: gpfsug main discussion list > > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Dear All, > > > > Any one has the same problem? > > > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > > exit 1;\ > > fi > > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? 
has no member named ?i_wb_list? > > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > > ^ ...... > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: Message signed with OpenPGP URL: From bbanister at jumptrading.com Tue May 15 19:04:40 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 18:04:40 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Message-ID: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> I am now trying to get our system automation to play with the new Spectrum Scale Protocols 5.0.1-0 release and have found that the nfs-ganesha.service can no longer start: # systemctl status nfs-ganesha ? nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2018-05-15 12:43:23 CDT; 8s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 8398 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=203/EXEC) May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[8398]: Failed at step EXEC spawning /usr/bin/ganesha.nfsd: No such file or directory May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service: control process exited, code=exited status=203 May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Failed to start NFS-Ganesha file server. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Unit nfs-ganesha.service entered failed state. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service failed. 
Sure enough, it?s not there anymore: # ls /usr/bin/*ganesha* /usr/bin/ganesha_conf /usr/bin/ganesha_mgr /usr/bin/ganesha_stats /usr/bin/gpfs.ganesha.nfsd /usr/bin/sm_notify.ganesha So I wondered what does provide it: # yum whatprovides /usr/bin/ganesha.nfsd Loaded plugins: etckeeper, priorities 2490 packages excluded due to repository priority protections [snip] nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 : NFS-Ganesha is a NFS Server running in user space Repo : @rhel7-universal-linux-production Matched from: Filename : /usr/bin/ganesha.nfsd Confirmed again just for sanity sake: # rpm -ql nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" /usr/bin/ganesha.nfsd But it?s not in the latest release: # rpm -ql nfs-ganesha-2.5.3-ibm020.00.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" # I also looked in every RPM package that was provided in the Spectrum Scale 5.0.1-0 download. So should it be provided? Or should the service really try to start `/usr/bin/gpfs.ganesha.nfsd`?? Or should there be a symlink between the two??? Is this something the magical Spectrum Scale Install Toolkit would do under the covers???? Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 15 19:08:08 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 18:08:08 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> Message-ID: <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> BTW, I just tried the symlink option and it seems to work: # ln -s gpfs.ganesha.nfsd ganesha.nfsd # ls -ld ganesha.nfsd lrwxrwxrwx 1 root root 17 May 15 13:05 ganesha.nfsd -> gpfs.ganesha.nfsd # # systemctl restart nfs-ganesha.service # systemctl status nfs-ganesha.service ? 
nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-05-15 13:06:10 CDT; 5s ago
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
  Process: 62888 ExecStop=/bin/dbus-send --system --dest=org.ganesha.nfsd --type=method_call /org/ganesha/nfsd/admin org.ganesha.nfsd.admin.shutdown (code=exited, status=0/SUCCESS)
  Process: 63091 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS)
  Process: 63089 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 63090 (ganesha.nfsd)
   Memory: 6.1M
   CGroup: /system.slice/nfs-ganesha.service
           └─63090 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT

May 15 13:06:10 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server...
May 15 13:06:10 fpia-gpfs-testing-cnfs01 systemd[1]: Started NFS-Ganesha file server.
[root at fpia-gpfs-testing-cnfs01 bin]#

Cheers,
-Bryan
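
If the packaged unit file is what is broken here, a systemd drop-in may be a slightly more durable workaround than the symlink, since it survives the binary being replaced again while leaving the vendor unit file untouched. A sketch only, assuming the daemon really is shipped as /usr/bin/gpfs.ganesha.nfsd as shown above, and noting that on CES protocol nodes Ganesha is normally managed by the Scale tooling, so a local override like this may be overwritten or unsupported:

# /etc/systemd/system/nfs-ganesha.service.d/exec.conf
[Service]
ExecStart=
ExecStart=/usr/bin/gpfs.ganesha.nfsd $OPTIONS

# pick up the override and restart
systemctl daemon-reload
systemctl restart nfs-ganesha

The empty ExecStart= line clears the ExecStart inherited from the packaged unit before the replacement command is set.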
From jonathan.buzzard at strath.ac.uk  Tue May 15 19:31:13 2018
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 15 May 2018 19:31:13 +0100
Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0??
In-Reply-To: <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com>
References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com>
 <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com>
Message-ID: <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk>

On 15/05/18 19:08, Bryan Banister wrote:
> BTW, I just tried the symlink option and it seems to work:
> 
> # ln -s gpfs.ganesha.nfsd ganesha.nfsd
> 
> # ls -ld ganesha.nfsd
> 

Looks more like to me that the systemd service file needs updating so
that it exec's a file that exists. One wonders how this got through QA
mind you.

JAB.

-- 
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From christof.schmitt at us.ibm.com  Tue May 15 19:49:44 2018
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Tue, 15 May 2018 18:49:44 +0000
Subject: [gpfsug-discuss] SMB quotas query
In-Reply-To: <1526379829.17680.27.camel@strath.ac.uk>
References: <1526379829.17680.27.camel@strath.ac.uk>,
 <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk>
Message-ID: 

An HTML attachment was scrubbed...
URL: From knop at us.ibm.com Tue May 15 20:02:53 2018 From: knop at us.ibm.com (Felipe Knop) Date: Tue, 15 May 2018 15:02:53 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: All, Validation of RHEL 7.5 on Scale is currently under way, and we are currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which will include the corresponding fix. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Ryan Novosielski To: gpfsug main discussion list Date: 05/15/2018 12:56 PM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org I know these dates can move, but any vague idea of a timeframe target for release (this quarter, next quarter, etc.)? Thanks! -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' > On May 14, 2018, at 9:30 AM, Felipe Knop wrote: > > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that > > From: Andi Rhod Christiansen > To: gpfsug main discussion list > Date: 05/14/2018 08:15 AM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > You are welcome. > > I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. > > they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" > > Best regards. > > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 13:59 > Til: gpfsug main discussion list > Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... 
sigh > > > https://access.redhat.com/errata/RHSA-2018:1318 > > Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) > > Kernel: error in exception handling leads to DoS (CVE-2018-8897) > Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) > > kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) > > ... > > > On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > > Date: Mon, 14 May 2018 11:10:18 +0000 > > From: Andi Rhod Christiansen > > Reply-To: gpfsug main discussion list > > > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Hi, > > > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > > > I just had the same issue > > > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > > > > Best regards > > Andi R. Christiansen > > > > -----Oprindelig meddelelse----- > > Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af > > z.han at imperial.ac.uk > > Sendt: 14. maj 2018 12:33 > > Til: gpfsug main discussion list > > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Dear All, > > > > Any one has the same problem? > > > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > > exit 1;\ > > fi > > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > > ^ ...... 
> > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stijn.deweirdt at ugent.be Tue May 15 20:25:31 2018 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Tue, 15 May 2018 21:25:31 +0200 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > To: gpfsug main discussion list > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. 
>> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen >> To: gpfsug main discussion list >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. >> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen >>> Reply-To: gpfsug main discussion list >>> >>> To: gpfsug main discussion list >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. maj 2018 12:33 >>> Til: gpfsug main discussion list >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? 
-ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From abeattie at au1.ibm.com Tue May 15 22:45:47 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 15 May 2018 21:45:47 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: , <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com><4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 15 23:00:48 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 18:00:48 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks Message-ID: Hello All, Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? I understand that i will not need a redundant SMB server configuration. I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. 
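
For sites that have to sit on the last supported kernel until the PTF lands, one way to stop a routine yum update from pulling in 3.10.0-862 is to pin the kernel packages. A sketch only, assuming the yum versionlock plugin is available; adjust the 693 build number to whatever is installed, and keep kernel-devel and kernel-headers at the same level as advised earlier in the thread:

yum install yum-plugin-versionlock
yum versionlock add 'kernel-3.10.0-693*' 'kernel-devel-3.10.0-693*' 'kernel-headers-3.10.0-693*'

# or, more bluntly, exclude kernel updates in /etc/yum.conf until the PTF ships:
# exclude=kernel*

After the next reboot, uname -r is a quick check that the running kernel is still the one the GPFS portability layer was built against.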
Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Tue May 15 22:57:12 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Tue, 15 May 2018 21:57:12 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: All, I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? Discuss. Thanks! Kevin On May 15, 2018, at 4:45 PM, Andrew Beattie > wrote: this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux that they "just can't move off" Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: Stijn De Weirdt > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Date: Wed, May 16, 2018 5:35 AM so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > > To: gpfsug main discussion list > > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop > wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. >> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen > >> To: gpfsug main discussion list > >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. >> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen > >>> Reply-To: gpfsug main discussion list >>> > >>> To: gpfsug main discussion list > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> > P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. 
maj 2018 12:33 >>> Til: gpfsug main discussion list > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? -ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From leslie.james.elliott at gmail.com Tue May 15 23:18:45 2018 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Wed, 16 May 2018 08:18:45 +1000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: you might want to read the license details of gpfs before you try do this :) pretty sure you need a server license to re-export the files from a GPFS mount On 16 May 2018 at 08:00, wrote: > Hello All, > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on > GPFS client? Is it supported and does it lead to any issues? > I understand that i will not need a redundant SMB server configuration. > > I could use CES, but CES does not support follow-symlinks outside > respective SMB export. Follow-symlinks is a however a hard-requirement for > to follow links outside GPFS filesystems. > > Thanks, > Lohit > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Tue May 15 23:32:02 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 15 May 2018 22:32:02 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 15 23:46:18 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 15 May 2018 18:46:18 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com><4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: Kevin, that seems to be a good point. IF you have dedicated hardware to acting only as a storage and/or file server, THEN neither meltdown nor spectre should not be a worry. BECAUSE meltdown and spectre are just about an adversarial process spying on another process or kernel memory. IF we're not letting any potential adversary run her code on our file server, what's the exposure? NOW, let the security experts tell us where the flaw is in this argument... From: "Buterbaugh, Kevin L" To: gpfsug main discussion list Date: 05/15/2018 06:12 PM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org All, I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? Discuss. Thanks! 
Kevin On May 15, 2018, at 4:45 PM, Andrew Beattie wrote: this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux that they "just can't move off" Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: Stijn De Weirdt Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Date: Wed, May 16, 2018 5:35 AM so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > To: gpfsug main discussion list > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. >> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen >> To: gpfsug main discussion list >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. 
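For sites that take the "stay on the supported level until the PTF ships" route, one way to keep a routine yum run from pulling in the 7.5 kernel is to exclude or version-lock the kernel packages. This is only a rough sketch, assuming RHEL 7 with stock yum; the 3.10.0-693 level is the 7.4 kernel mentioned earlier in the thread, so substitute whatever errata level the node is actually on:

    # option 1: add an exclude under [main] in /etc/yum.conf
    #   exclude=kernel*
    # then apply everything else as usual
    yum update

    # option 2: pin the exact level with the versionlock plugin
    yum install -y yum-plugin-versionlock
    yum versionlock add 'kernel-3.10.0-693*' 'kernel-devel-3.10.0-693*' 'kernel-headers-3.10.0-693*'
    yum versionlock list

Either way the rest of the errata still flow, and the lock is easy to drop once the Scale PTF with 862 support is out.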
>> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen >>> Reply-To: gpfsug main discussion list >>> >>> To: gpfsug main discussion list >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. maj 2018 12:33 >>> Til: gpfsug main discussion list >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? -ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... 
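The build failure quoted above is the GPL portability layer refusing to compile against the 3.10.0-862 headers (the inode structure changed in that kernel, hence the missing i_wb_list member). As a minimal sketch of the recovery path once a node is back on a kernel level the Scale FAQ lists as supported: confirm the devel/headers packages match the running kernel, rebuild the portability layer, and restart GPFS. mmbuildgpl ships with the 4.2.x and 5.0.x levels discussed here, so it should be available:

    # running kernel and matching build packages
    uname -r
    rpm -q kernel-devel-$(uname -r) kernel-headers-$(uname -r)

    # rebuild the GPFS portability layer against that kernel
    /usr/lpp/mmfs/bin/mmbuildgpl

    # bring the local daemon back and confirm its state
    /usr/lpp/mmfs/bin/mmstartup
    /usr/lpp/mmfs/bin/mmgetstate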
>>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 00:48:40 2018 From: valleru at cbio.mskcc.org (Lohit Valleru) Date: Tue, 15 May 2018 19:48:40 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: <7aef4353-058f-4741-9760-319bcd037213@Spark> Thanks Christof. The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. Now we are migrating most of the data to GPFS keeping the symlinks as they are. Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? Regards, Lohit On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. 
You can always open a RFE and ask that we support this option in a future release. > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > Regards, > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > ----- Original message ----- > > From: valleru at cbio.mskcc.org > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > To: gpfsug main discussion list > > Cc: > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > Date: Tue, May 15, 2018 3:04 PM > > > > Hello All, > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > I understand that i will not need a redundant SMB server configuration. > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > Thanks, > > Lohit > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.s.knister at nasa.gov Wed May 16 02:03:36 2018 From: aaron.s.knister at nasa.gov (Aaron Knister) Date: Tue, 15 May 2018 21:03:36 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: The one thing that comes to mind is if you're able to affect some unprivileged process on the NSD servers. Let's say there's a daemon that listens on a port but runs as an unprivileged user in which a vulnerability appears (lets say a 0-day remote code execution bug). One might be tempted to ignore that vulnerability for one reason or another but you couple that with something like meltdown/spectre and in *theory* you could do something like sniff ssh key material and get yourself on the box. In principle I agree with your argument but I've find that when one accepts and justifies a particular risk it can become easy to remember which vulnerability risks you've accepted and end up more exposed than one may realize. Still, the above scenario is low risk (but potentially very high impact), though :) -Aaron On 5/15/18 6:46 PM, Marc A Kaplan wrote: > Kevin, that seems to be a good point. > > IF you have dedicated hardware to acting only as a storage and/or file > server, THEN neither meltdown nor spectre should not be a worry. > > BECAUSE meltdown and spectre are just about an adversarial process > spying on another process or kernel memory. ?IF we're not letting any > potential adversary run her code on our file server, what's the exposure? > > NOW, let the security experts tell us where the flaw is in this argument... 
> > > > From: "Buterbaugh, Kevin L" > To: gpfsug main discussion list > Date: 05/15/2018 06:12 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working > ?withkernel ? ? ? ?3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------------------------------------------------ > > > > All, > > I have to kind of agree with Andrew ? it seems that there is a broad > range of takes on kernel upgrades ? everything from ?install the latest > kernel the day it comes out? to ?stick with this kernel, we know it works.? > > Related to that, let me throw out this question ? what about those who > haven?t upgraded their kernel in a while at least because they?re > concerned with the negative performance impacts of the meltdown / > spectre patches??? ?So let?s just say a customer has upgraded the > non-GPFS servers in their cluster, but they?ve left their NSD servers > unpatched (I?m talking about the kernel only here; all other updates are > applied) due to the aforementioned performance concerns ? as long as > they restrict access (i.e. who can log in) and use appropriate > host-based firewall rules, is their some risk that they should be aware of? > > Discuss. ?Thanks! > > Kevin > > On May 15, 2018, at 4:45 PM, Andrew Beattie <_abeattie at au1.ibm.com_ > > wrote: > > this thread is mildly amusing, given we regularly get customers asking > why we are dropping support for versions of linux > that they "just can't move off" > > > *Andrew Beattie* > *Software Defined Storage ?- IT Specialist* > *Phone: *614-2133-7927 > *E-mail: *_abeattie at au1.ibm.com_ > > > ----- Original message ----- > From: Stijn De Weirdt <_stijn.deweirdt at ugent.be_ > > > Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > To: _gpfsug-discuss at spectrumscale.org_ > > Cc: > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Date: Wed, May 16, 2018 5:35 AM > > so this means running out-of-date kernels for at least another month? oh > boy... > > i hope this is not some new trend in gpfs support. othwerwise all RHEL > based sites will have to start adding EUS as default cost to run gpfs > with basic security compliance. > > stijn > > > On 05/15/2018 09:02 PM, Felipe Knop wrote: > > All, > > > > Validation of RHEL 7.5 on Scale is currently under way, and we are > > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > > will include the corresponding fix. > > > > Regards, > > > > ? Felipe > > > > ---- > > Felipe Knop _knop at us.ibm.com_ > > GPFS Development and Security > > IBM Systems > > IBM Building 008 > > 2455 South Rd, Poughkeepsie, NY 12601 > > (845) 433-9314 ?T/L 293-9314 > > > > > > > > > > > > From: Ryan Novosielski <_novosirj at rutgers.edu_ > > > > To: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > > Date: 05/15/2018 12:56 PM > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > > ? ? ? ? ? ? 3.10.0-862.2.3.el7 > > Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > > > > > > > > I know these dates can move, but any vague idea of a timeframe target for > > release (this quarter, next quarter, etc.)? > > > > Thanks! > > > > -- > > ____ > > || \\UTGERS, > > |---------------------------*O*--------------------------- > > ||_// the State ?| ? ? ? ? Ryan Novosielski - _novosirj at rutgers.edu_ > > > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS > Campus > > || ?\\ ? 
?of NJ ?| Office of Advanced Research Computing - MSB > > C630, Newark > > ? ? ?`' > > > >> On May 14, 2018, at 9:30 AM, Felipe Knop <_knop at us.ibm.com_ > > wrote: > >> > >> All, > >> > >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > > in Scale to support this kernel level, upgrading to one of those upcoming > > PTFs will be required in order to run with that kernel. > >> > >> Regards, > >> > >> Felipe > >> > >> ---- > >> Felipe Knop _knop at us.ibm.com_ > >> GPFS Development and Security > >> IBM Systems > >> IBM Building 008 > >> 2455 South Rd, Poughkeepsie, NY 12601 > >> (845) 433-9314 T/L 293-9314 > >> > >> > >> > >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > > welcome. I see your concern but as long as IBM has not released spectrum > > scale for 7.5 that > >> > >> From: ?Andi Rhod Christiansen <_arc at b4restore.com_ > > > >> To: ?gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >> Date: ?05/14/2018 08:15 AM > >> Subject: ?Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > >> > >> > >> > >> > >> You are welcome. > >> > >> I see your concern but as long as IBM has not released spectrum > scale for > > 7.5 that is their only solution, in regards to them caring about > security I > > would say yes they do care, but from their point of view either they tell > > the customer to upgrade as soon as red hat releases new versions and > > forcing the customer to be down until they have a new release or they > tell > > them to stay on supported level to a new release is ready. > >> > >> they should release a version supporting the new kernel soon, IBM > told me > > when I asked that they are "currently testing and have a support date > soon" > >> > >> Best regards. > >> > >> > >> -----Oprindelig meddelelse----- > >> Fra: _gpfsug-discuss-bounces at spectrumscale.org_ > > > <_gpfsug-discuss-bounces at spectrumscale.org_ > > P? vegne af > _z.han at imperial.ac.uk_ > >> Sendt: 14. maj 2018 13:59 > >> Til: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> > >> Thanks. Does IBM care about security, one would ask? In this case I'd > > choose to use the new kernel for my virtualization over gpfs ... sigh > >> > >> > >> _https://access.redhat.com/errata/RHSA-2018:1318_ > > >> > >> Kernel: KVM: error in exception handling leads to wrong debug stack > value > > (CVE-2018-1087) > >> > >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) > >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > > escalation (CVE-2017-16939) > >> > >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > > netfilter/ebtables.c (CVE-2018-1068) > >> > >> ... > >> > >> > >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > >>> Date: Mon, 14 May 2018 11:10:18 +0000 > >>> From: Andi Rhod Christiansen <_arc at b4restore.com_ > > > >>> Reply-To: gpfsug main discussion list > >>> <_gpfsug-discuss at spectrumscale.org_ > > > >>> To: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> ? ? 
3.10.0-862.2.3.el7 > >>> > >>> Hi, > >>> > >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? > >>> > >>> I just had the same issue > >>> > >>> Revert to previous working kernel at redhat 7.4 release which is > > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > > level. > >>> > >>> > >>> Best regards > >>> Andi R. Christiansen > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: _gpfsug-discuss-bounces at spectrumscale.org_ > > >>> <_gpfsug-discuss-bounces at spectrumscale.org_ > > P? vegne af > >>> _z.han at imperial.ac.uk_ > >>> Sendt: 14. maj 2018 12:33 > >>> Til: gpfsug main discussion list > <_gpfsug-discuss at spectrumscale.org_ > > > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Dear All, > >>> > >>> Any one has the same problem? > >>> > >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ?; \ if > > [ $? -ne 0 ]; then \ > >>> exit 1;\ > >>> fi > >>> make[2]: Entering directory > > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > >>> ? LD ? ? ?/usr/lpp/mmfs/src/gpl-linux/built-in.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/tracelin.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/relaytrc.o > >>> ? LD [M] ?/usr/lpp/mmfs/src/gpl-linux/tracedev.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > >>> ? LD [M] ?/usr/lpp/mmfs/src/gpl-linux/mmfs26.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > >>> ? ? ? ? ? ? ? ? ?from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > >>> ? ? ? ? ? ? ? ? ?from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > > no member named ?i_wb_list? > >>> ? ? ?_TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > >>> ? ? ? ? ? ? ? ? ?^ ...... 
> >>> _______________________________________________ > >>> gpfsug-discuss mailing list > >>> gpfsug-discuss at _spectrumscale.org_ > >>> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at _spectrumscale.org_ > >> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at _spectrumscale.org_ > >> > > > _https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0_ > > > > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at _spectrumscale.org_ > > _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at _spectrumscale.org_ > > _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at _spectrumscale.org_ _ > __http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at _spectrumscale.org_ _ > __https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0_ > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 From ulmer at ulmer.org Wed May 16 03:19:47 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 21:19:47 -0500 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: <7aef4353-058f-4741-9760-319bcd037213@Spark> References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Lohit, Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. :) -- Stephen > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > Thanks Christof. > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. 
> The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > Regards, > > Lohit > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: >> > I could use CES, but CES does not support follow-symlinks outside respective SMB export. >> >> Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. >> >> > Follow-symlinks is a however a hard-requirement for to follow links outside GPFS filesystems. >> >> I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? >> >> Regards, >> >> Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ >> christof.schmitt at us.ibm.com || +1-520-799-2469 (T/L: 321-2469 ) >> >> >> ----- Original message ----- >> From: valleru at cbio.mskcc.org >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> To: gpfsug main discussion list >> Cc: >> Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks >> Date: Tue, May 15, 2018 3:04 PM >> >> Hello All, >> >> Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? >> I understand that i will not need a redundant SMB server configuration. >> >> I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement for to follow links outside GPFS filesystems. >> >> Thanks, >> Lohit >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed May 16 03:22:48 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 21:22:48 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> There isn?t a flaw in that argument, but where the security experts are concerned there is no argument. Apparently this time Red Hat just told all of their RHEL 7.4 customers to upgrade to RHEL 7.5, rather than back-porting the security patches. So this time the retirement to upgrade distributions is much worse than normal. -- Stephen > On May 15, 2018, at 5:46 PM, Marc A Kaplan wrote: > > Kevin, that seems to be a good point. 
> > IF you have dedicated hardware to acting only as a storage and/or file server, THEN neither meltdown nor spectre should not be a worry. > > BECAUSE meltdown and spectre are just about an adversarial process spying on another process or kernel memory. IF we're not letting any potential adversary run her code on our file server, what's the exposure? > > NOW, let the security experts tell us where the flaw is in this argument... > > > > From: "Buterbaugh, Kevin L" > To: gpfsug main discussion list > Date: 05/15/2018 06:12 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > All, > > I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? > > Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? > > Discuss. Thanks! > > Kevin > > On May 15, 2018, at 4:45 PM, Andrew Beattie > wrote: > > this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux > that they "just can't move off" > > > Andrew Beattie > Software Defined Storage - IT Specialist > Phone: 614-2133-7927 > E-mail: abeattie at au1.ibm.com > > > ----- Original message ----- > From: Stijn De Weirdt > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Cc: > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 > Date: Wed, May 16, 2018 5:35 AM > > so this means running out-of-date kernels for at least another month? oh > boy... > > i hope this is not some new trend in gpfs support. othwerwise all RHEL > based sites will have to start adding EUS as default cost to run gpfs > with basic security compliance. > > stijn > > > On 05/15/2018 09:02 PM, Felipe Knop wrote: > > All, > > > > Validation of RHEL 7.5 on Scale is currently under way, and we are > > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > > will include the corresponding fix. > > > > Regards, > > > > Felipe > > > > ---- > > Felipe Knop knop at us.ibm.com > > GPFS Development and Security > > IBM Systems > > IBM Building 008 > > 2455 South Rd, Poughkeepsie, NY 12601 > > (845) 433-9314 T/L 293-9314 > > > > > > > > > > > > From: Ryan Novosielski > > > To: gpfsug main discussion list > > > Date: 05/15/2018 12:56 PM > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > > 3.10.0-862.2.3.el7 > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > I know these dates can move, but any vague idea of a timeframe target for > > release (this quarter, next quarter, etc.)? > > > > Thanks! > > > > -- > > ____ > > || \\UTGERS, > > |---------------------------*O*--------------------------- > > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > > || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > > || \\ of NJ | Office of Advanced Research Computing - MSB > > C630, Newark > > `' > > > >> On May 14, 2018, at 9:30 AM, Felipe Knop > wrote: > >> > >> All, > >> > >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > > in Scale to support this kernel level, upgrading to one of those upcoming > > PTFs will be required in order to run with that kernel. > >> > >> Regards, > >> > >> Felipe > >> > >> ---- > >> Felipe Knop knop at us.ibm.com > >> GPFS Development and Security > >> IBM Systems > >> IBM Building 008 > >> 2455 South Rd, Poughkeepsie, NY 12601 > >> (845) 433-9314 T/L 293-9314 > >> > >> > >> > >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > > welcome. I see your concern but as long as IBM has not released spectrum > > scale for 7.5 that > >> > >> From: Andi Rhod Christiansen > > >> To: gpfsug main discussion list > > >> Date: 05/14/2018 08:15 AM > >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> Sent by: gpfsug-discuss-bounces at spectrumscale.org > >> > >> > >> > >> > >> You are welcome. > >> > >> I see your concern but as long as IBM has not released spectrum scale for > > 7.5 that is their only solution, in regards to them caring about security I > > would say yes they do care, but from their point of view either they tell > > the customer to upgrade as soon as red hat releases new versions and > > forcing the customer to be down until they have a new release or they tell > > them to stay on supported level to a new release is ready. > >> > >> they should release a version supporting the new kernel soon, IBM told me > > when I asked that they are "currently testing and have a support date soon" > >> > >> Best regards. > >> > >> > >> -----Oprindelig meddelelse----- > >> Fra: gpfsug-discuss-bounces at spectrumscale.org > > > P? vegne af z.han at imperial.ac.uk > >> Sendt: 14. maj 2018 13:59 > >> Til: gpfsug main discussion list > > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> > >> Thanks. Does IBM care about security, one would ask? In this case I'd > > choose to use the new kernel for my virtualization over gpfs ... sigh > >> > >> > >> https://access.redhat.com/errata/RHSA-2018:1318 > >> > >> Kernel: KVM: error in exception handling leads to wrong debug stack value > > (CVE-2018-1087) > >> > >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) > >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > > escalation (CVE-2017-16939) > >> > >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > > netfilter/ebtables.c (CVE-2018-1068) > >> > >> ... > >> > >> > >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > >>> Date: Mon, 14 May 2018 11:10:18 +0000 > >>> From: Andi Rhod Christiansen > > >>> Reply-To: gpfsug main discussion list > >>> > > >>> To: gpfsug main discussion list > > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Hi, > >>> > >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? > >>> > >>> I just had the same issue > >>> > >>> Revert to previous working kernel at redhat 7.4 release which is > > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > > level. 
> >>> > >>> > >>> Best regards > >>> Andi R. Christiansen > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: gpfsug-discuss-bounces at spectrumscale.org > >>> > P? vegne af > >>> z.han at imperial.ac.uk > >>> Sendt: 14. maj 2018 12:33 > >>> Til: gpfsug main discussion list > > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Dear All, > >>> > >>> Any one has the same problem? > >>> > >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > > [ $? -ne 0 ]; then \ > >>> exit 1;\ > >>> fi > >>> make[2]: Entering directory > > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > > no member named ?i_wb_list? > >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > >>> ^ ...... 
> >>> _______________________________________________ > >>> gpfsug-discuss mailing list > >>> gpfsug-discuss at spectrumscale.org > >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> > > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 03:21:22 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 22:21:22 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Thanks Stephen, Yes i do acknowledge, that it will need a SERVER license and thank you for reminding me. I just wanted to make sure, from the technical point of view that we won?t face any issues by exporting a GPFS mount as a SMB export. I remember, i had seen in documentation about few years ago that it is not recommended to export a GPFS mount via Third party SMB services (not CES). But i don?t exactly remember why. Regards, Lohit On May 15, 2018, 10:19 PM -0400, Stephen Ulmer , wrote: > Lohit, > > Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. :) > > -- > Stephen > > > > > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > > > Thanks Christof. 
> > > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. > > The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > > > Regards, > > > > Lohit > > > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > > > > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. > > > > > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > > > > > Regards, > > > > > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > > > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > > > > > > > ----- Original message ----- > > > > From: valleru at cbio.mskcc.org > > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > To: gpfsug main discussion list > > > > Cc: > > > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > > > Date: Tue, May 15, 2018 3:04 PM > > > > > > > > Hello All, > > > > > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > > > I understand that i will not need a redundant SMB server configuration. > > > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > > > Thanks, > > > > Lohit > > > > > > > > > > > > _______________________________________________ > > > > gpfsug-discuss mailing list > > > > gpfsug-discuss at spectrumscale.org > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeattie at au1.ibm.com Wed May 16 03:38:59 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 16 May 2018 02:38:59 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: , <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 04:05:50 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 23:05:50 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Thank you for the detailed answer Andrew. I do understand that anything above the posix level will not be supported by IBM and might lead to scaling/other issues. We will start small, and discuss with IBM representative on any other possible efforts. Regards, Lohit On May 15, 2018, 10:39 PM -0400, Andrew Beattie , wrote: > Lohit, > > There is no technical reason why if you use the correct licensing that you can't publish a Posix fileystem using external Protocol tool rather than CES > the key thing to note is that if its not the IBM certified solution that IBM support stops at the Posix level and the protocol issues are your own to resolve. > > The reason we provide the CES environment is to provide a supported architecture to deliver protocol access,? does it have some limitations - certainly > but it is a supported environment.? Moving away from this moves the risk onto the customer to resolve and maintain. > > The other part of this, and potentially the reason why you might have been warned off using an external solution is that not all systems provide scalability and resiliency > so you may end up bumping into scaling issues by building your own environment --- and from the sound of things this is a large complex environment.? These issues are clearly defined in the CES stack and are well understood.? moving away from this will move you into the realm of the unknown -- again the risk becomes yours. > > it may well be worth putting a request in with your local IBM representative to have IBM Scale protocol development team involved in your design and see what we can support for your requirements. > > > Regards, > Andrew Beattie > Software Defined Storage? - IT Specialist > Phone: 614-2133-7927 > E-mail: abeattie at au1.ibm.com > > > > ----- Original message ----- > > From: valleru at cbio.mskcc.org > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > To: gpfsug main discussion list > > Cc: > > Subject: Re: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > Date: Wed, May 16, 2018 12:25 PM > > > > Thanks Stephen, > > > > Yes i do acknowledge, that it will need a SERVER license and thank you for reminding me. > > > > I just wanted to make sure, from the technical point of view that we won?t face any issues by exporting a GPFS mount as a SMB export. > > > > I remember, i had seen in documentation about few years ago that it is not recommended to export a GPFS mount via Third party SMB services (not CES). But i don?t exactly remember why. > > > > Regards, > > Lohit > > > > On May 15, 2018, 10:19 PM -0400, Stephen Ulmer , wrote: > > > Lohit, > > > > > > Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. 
:) > > > > > > -- > > > Stephen > > > > > > > > > > > > > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > > > > > > > Thanks Christof. > > > > > > > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > > > > > > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > > > > > > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > > > > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. > > > > The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > > > > > > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > > > > > > > Regards, > > > > > > > > Lohit > > > > > > > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > > > > > > > > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. > > > > > > > > > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > > > > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > > > > > > > > > Regards, > > > > > > > > > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > > > > > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > > > > > > > > > > > > > ----- Original message ----- > > > > > > From: valleru at cbio.mskcc.org > > > > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > To: gpfsug main discussion list > > > > > > Cc: > > > > > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > > > > > Date: Tue, May 15, 2018 3:04 PM > > > > > > > > > > > > Hello All, > > > > > > > > > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > > > > > I understand that i will not need a redundant SMB server configuration. > > > > > > > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. 
> > > > > > > > > > > > Thanks, > > > > > > Lohit > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > gpfsug-discuss mailing list > > > > > > gpfsug-discuss at spectrumscale.org > > > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > > > > _______________________________________________ > > > > > gpfsug-discuss mailing list > > > > > gpfsug-discuss at spectrumscale.org > > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > > > > gpfsug-discuss mailing list > > > > gpfsug-discuss at spectrumscale.org > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stijn.deweirdt at ugent.be Wed May 16 05:55:24 2018 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Wed, 16 May 2018 06:55:24 +0200 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> Message-ID: <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> hi stephen, > There isn?t a flaw in that argument, but where the security experts > are concerned there is no argument. we have gpfs clients hosts where users can login, we can't update those. that is a certain worry. > > Apparently this time Red Hat just told all of their RHEL 7.4 > customers to upgrade to RHEL 7.5, rather than back-porting the > security patches. So this time the retirement to upgrade > distributions is much worse than normal. there's no 'this time', this is the default rhel support model. only with EUS you get patches for non-latest minor releases. stijn > > > > _______________________________________________ gpfsug-discuss > mailing list gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From mnaineni at in.ibm.com Wed May 16 06:18:30 2018 From: mnaineni at in.ibm.com (Malahal R Naineni) Date: Wed, 16 May 2018 10:48:30 +0530 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> Message-ID: The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). 
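A quick way to confirm that on an affected node is to ask rpm which package owns the unit file and whether the on-disk copy still matches what was shipped. A rough sketch, assuming the unit sits at the usual RHEL 7 path and is named nfs-ganesha.service (adjust both to whatever your system actually reports):

# which package owns the ganesha unit file
rpm -qf /usr/lib/systemd/system/nfs-ganesha.service
# verify that package; a '5' in the attribute column means the file's
# digest no longer matches the packaged version, i.e. it was edited locally
rpm -qV "$(rpm -qf --qf '%{NAME}\n' /usr/lib/systemd/system/nfs-ganesha.service)"
# after restoring or overriding the unit, make systemd re-read it
systemctl daemon-reload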
Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! From: Jonathan Buzzard To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 16 09:14:14 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 16 May 2018 08:14:14 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Message-ID: <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of "olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
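For anyone hitting the same breakage, it is worth checking what is actually installed and imported before removing anything; a short sketch using the package names from the dependency output above:

# which pyOpenSSL build is installed and who shipped it
rpm -qi pyOpenSSL | egrep 'Version|Release|Vendor'
# which version the interpreter really loads
python -c 'import OpenSSL; print OpenSSL.__version__'
# which installed packages would break if it were removed
rpm -q --whatrequires pyOpenSSL
rpm -q --whatrequires python2-urllib3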
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Wed May 16 09:51:25 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Wed, 16 May 2018 08:51:25 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526379829.17680.27.camel@strath.ac.uk>, <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: For us the only one that matters is the fileset quota. With or without ?perfileset-quota set, we simply see a quota value for one of the filesets that is mapped to a drive, and every other mapped drives inherits the same value. whether it?s true or not. Just about to do some SMB tracing for my PMR. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Christof Schmitt Sent: 15 May 2018 19:50 To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] SMB quotas query To maybe clarify a few points: There are three quotas: user, group and fileset. User and group quota can be applied on the fileset level or the file system level. Samba with the vfs_gpfs module, only queries the user and group quotas on the requested path. If the fileset quota should also be applied to the reported free space, that has to be done through the --filesetdf parameter. We had the fileset quota query from Samba in the past, but that was a very problematic codepath, and it was removed as --filesetdf is the more reliabel way to achieve the same result. So another part of the question is which quotas should be applied to the reported free space. Regards, Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ christof.schmitt at us.ibm.com || +1-520-799-2469 (T/L: 321-2469) ----- Original message ----- From: Jonathan Buzzard > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: Re: [gpfsug-discuss] SMB quotas query Date: Tue, May 15, 2018 3:24 AM On Tue, 2018-05-15 at 13:10 +0300, Yaron Daniel wrote: > Hi > > So - u want to get quota report per fileset quota - right ? > We use this param when we want to monitor the NFS exports with df , i > think this should also affect the SMB filesets. > > Can u try to enable it and see if it works ? > It is irrelevant to Samba, this is or should be handled in vfs_gpfs as Christof said earlier. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 16 10:02:06 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 16 May 2018 10:02:06 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: <1526461326.17680.48.camel@strath.ac.uk> On Wed, 2018-05-16 at 08:51 +0000, Sobey, Richard A wrote: > For us the only one that matters is the fileset quota. With or > without ?perfileset-quota set, we simply see a quota value for one of > the filesets that is mapped to a drive, and every other mapped drives > inherits the same value. whether it?s true or not. > ? > Just about to do some SMB tracing for my PMR. > ? I have a fully working solution that uses the dfree option in Samba if you want. I am with you here in that a lot of places will be carving a GPFS file system up with file sets with a quota that are then shared to a group of users and you want the disk size, and amount free to show up on the clients based on the quota for the fileset not the whole file system. I am really not sure what the issue with the code path for this as it is 35 lines of C including comments to get the fileset if one exists for a given path on a GPFS file system. You open a random file on the path, call gpfs_fcntl and then gpfs_getfilesetid. It's then a simple call to gpfs_quotactl. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From r.sobey at imperial.ac.uk Wed May 16 10:08:09 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Wed, 16 May 2018 09:08:09 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526461326.17680.48.camel@strath.ac.uk> References: <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> <1526461326.17680.48.camel@strath.ac.uk> Message-ID: Thanks Jonathan for the offer, but I'd prefer to have this working without implementing unsupported options in production. I'd be willing to give it a go in my test cluster though, which is exhibiting the same symptoms, so if you wouldn't mind getting in touch off list I can see how it works? I am almost certain that this used to work properly in the past though. My customers would surely have noticed a problem like this - they like to say when things are wrong ? Cheers Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 16 May 2018 10:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Wed, 2018-05-16 at 08:51 +0000, Sobey, Richard A wrote: > For us the only one that matters is the fileset quota. With or without > ?perfileset-quota set, we simply see a quota value for one of the > filesets that is mapped to a drive, and every other mapped drives > inherits the same value. whether it?s true or not. > ? > Just about to do some SMB tracing for my PMR. > ? I have a fully working solution that uses the dfree option in Samba if you want. 
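For anyone curious what that dfree approach looks like in practice, the hook itself is small. Jonathan's version calls the C API directly; the sketch below is a cruder shell variant of the same idea, with the file system device, fileset name and awk column positions all assumptions to adapt locally. Samba runs the script with the queried directory as its argument, expects "total free" in 1K blocks on stdout, and is pointed at it with 'dfree command = /usr/local/bin/gpfs_dfree' in smb.conf:

#!/bin/bash
# Hypothetical dfree helper: report the fileset quota as the share's size.
dir="$1"              # directory being queried (unused in this sketch)
fsdev="gpfs0"         # assumption: file system device name
fileset="projects"    # assumption: fileset backing this share
# Usage (KB) and soft quota are taken from the FILESET row of
# 'mmlsquota -j' output; verify the column numbers against local output.
read -r used quota <<<"$(mmlsquota -j "$fileset" "$fsdev" 2>/dev/null | awk '/FILESET/ {print $3, $4; exit}')"
if [ -z "$quota" ] || [ "$quota" -eq 0 ]; then
    exit 1
fi
free=$(( quota > used ? quota - used : 0 ))
echo "$quota $free"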
I am with you here in that a lot of places will be carving a GPFS file system up with file sets with a quota that are then shared to a group of users and you want the disk size, and amount free to show up on the clients based on the quota for the fileset not the whole file system. I am really not sure what the issue with the code path for this as it is 35 lines of C including comments to get the fileset if one exists for a given path on a GPFS file system. You open a random file on the path, call gpfs_fcntl and then gpfs_getfilesetid. It's then a simple call to gpfs_quotactl. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From smita.raut at in.ibm.com Wed May 16 11:23:05 2018 From: smita.raut at in.ibm.com (Smita J Raut) Date: Wed, 16 May 2018 15:53:05 +0530 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm >From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" To: gpfsug main discussion list Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of "olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. 
let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? 
Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 16 13:23:41 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 16 May 2018 13:23:41 +0100 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: <1526473421.17680.57.camel@strath.ac.uk> On Tue, 2018-05-15 at 22:32 +0000, Christof Schmitt wrote: > > I could use CES, but CES does not support follow-symlinks outside > respective SMB export. > ? > Samba has the 'wide links' option, that we currently do not test and > support as part of the mmsmb integration. You can always open a RFE > and ask that we support this option in a future release. > ? Note?that if unix extensions are on then you also need the "allow insecure wide links" option, which is a pretty good hint as to why one should steer several parsecs wide of using symlinks on SMB exports. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From daniel.kidger at uk.ibm.com Wed May 16 13:37:27 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Wed, 16 May 2018 12:37:27 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: <1526473421.17680.57.camel@strath.ac.uk> References: <1526473421.17680.57.camel@strath.ac.uk>, Message-ID: An HTML attachment was scrubbed... 
URL: From Renar.Grunenberg at huk-coburg.de Wed May 16 14:31:30 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Wed, 16 May 2018 13:31:30 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: <5ef78d14aa0c4a23b2979b13deeecab7@SMXRF108.msg.hukrf.de> Hallo Smita, i will search in wich rhel-release is the 0.15 release available. If we found one I want to install, and give feedback. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 +++ Bitte beachten Sie die neuen Telefonnummern +++ +++ Siehe auch: https://www.huk.de/presse/pressekontakt/ansprechpartner.html +++ E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? 
I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File 
"/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed May 16 15:05:19 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 16 May 2018 09:05:19 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> Message-ID: <20485D89-2F0F-4905-A5C7-FCACAAAB1FCC@ulmer.org> > On May 15, 2018, at 11:55 PM, Stijn De Weirdt wrote: > > hi stephen, > >> There isn?t a flaw in that argument, but where the security experts >> are concerned there is no argument. > we have gpfs clients hosts where users can login, we can't update those. > that is a certain worry. The original statement from Marc was about dedicated hardware for storage and/or file serving. 
If that?s not the use case, then neither his logic nor my support of it apply. >> >> Apparently this time Red Hat just told all of their RHEL 7.4 >> customers to upgrade to RHEL 7.5, rather than back-porting the >> security patches. So this time the retirement to upgrade >> distributions is much worse than normal. > there's no 'this time', this is the default rhel support model. only > with EUS you get patches for non-latest minor releases. > > stijn > You are correct! I did a quick check and most of my customers are enterprise-y, and many of them seem to have EUS. I thought it was standard, but it is not. I could be mixing Red Hat up with another Linux vendor at this point? Liberty, -- Stephen From bbanister at jumptrading.com Wed May 16 16:30:14 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 16 May 2018 15:30:14 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> Message-ID: <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> Malahal is correct, we did modify our version of the systemd unit and the update is being overwritten. My bad. We seemed to have issues with the original version, but will try to use the new version and will open a ticket if we have issues. Definitely do not want to modify the IBM provided configs as this is an obvious example of how that can come back to bite you!! Not symlink is needed as Malahal states. Sorry for the confusion and false alarms. Thanks Malahal!! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Malahal R Naineni Sent: Wednesday, May 16, 2018 12:19 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! From: Jonathan Buzzard > To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Wed May 16 17:01:18 2018 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Wed, 16 May 2018 16:01:18 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> , <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> Message-ID: <3D5B04DE-3BC4-478D-A32F-C4417358A003@rutgers.edu> Thing to do here ought to be using overrides in /etc/systemd, not modifying the vendor scripts. I can?t think of a case where one would want to do otherwise, but it may be out there. -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' On May 16, 2018, at 11:30, Bryan Banister > wrote: Malahal is correct, we did modify our version of the systemd unit and the update is being overwritten. My bad. We seemed to have issues with the original version, but will try to use the new version and will open a ticket if we have issues. Definitely do not want to modify the IBM provided configs as this is an obvious example of how that can come back to bite you!! Not symlink is needed as Malahal states. Sorry for the confusion and false alarms. Thanks Malahal!! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Malahal R Naineni Sent: Wednesday, May 16, 2018 12:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! 
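To make the override point concrete: instead of editing the unit under /usr/lib/systemd/system, a drop-in keeps the local change in /etc/systemd and survives package updates such as the ganesha.nfsd rename above. A rough sketch, assuming the unit is named nfs-ganesha.service and using a resource-limit line purely as a placeholder for whatever actually needs changing:

# create a drop-in alongside, not instead of, the vendor unit
mkdir -p /etc/systemd/system/nfs-ganesha.service.d
cat > /etc/systemd/system/nfs-ganesha.service.d/override.conf <<'EOF'
[Service]
# placeholder customisation; the vendor ExecStart stays untouched
LimitNOFILE=1048576
EOF
systemctl daemon-reload
# shows the vendor unit followed by the drop-in
systemctl cat nfs-ganesha.service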
From: Jonathan Buzzard > To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C333d1c944c464856be7008d5bb41f07f%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636620814253162614&sdata=ihaClVwGs9Cp69UflH7eYp%2F0q7%2FR29AY%2FbM1IzbZrsI%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Wed May 16 18:01:52 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Wed, 16 May 2018 17:01:52 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526461326.17680.48.camel@strath.ac.uk> References: <1526461326.17680.48.camel@strath.ac.uk>, <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: From bevans at pixitmedia.com Thu May 17 14:41:57 2018 From: bevans at pixitmedia.com (Barry Evans) Date: Thu, 17 May 2018 14:41:57 +0100 Subject: [gpfsug-discuss] =?utf-8?Q?=E2=80=94subblocks-per-full-block_?=in 5.0.1 Message-ID: Slight wonkiness in mmcrfs script that spits this out ?subblocks-per-full-block as an invalid option. No worky: ? ? 777 ? ? ? ? subblocks-per-full-block ) ? ? 778 ? ? ? ? ? if [[ -z $optArg ]] ? ? 779 ? ? ? ? ? then ? ? 780 ? ? ? ? ? ? # The expected argument is not in the same string as its ? ? 781 ? ? ? ? ? ? # option name. ?Get it from the next token. ? ? 782 ? ? ? ? ? ? eval optArg="\${$OPTIND}" ? ? 783 ? ? ? ? ? ? [[ -z $optArg ]] && ?\ ? ? 784 ? ? ? ? ? ? ? syntaxError "missingValue" $noUsageMsg "--$optName_lc" ? ? 785 ? ? ? ? ? ? shift 1 ? ? 786 ? ? ? ? ? fi ? ? 787 ? ? ? ? ? 
[[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? 788 ? ? ? ? ? ? syntaxError "multiple" $noUsageMsg "--$optName_lc" ? ? 789 ? ? ? ? ? subblocksPerFullBlockOpt="--$optName_lc" ? ? 790 ? ? 791 ? ? ? ? ? nSubblocksArg=$(checkIntRange --subblocks-per-full-block $optArg 32 8192) ? ? 792 ? ? ? ? ? [[ $? -ne 0 ]] && syntaxError nomsg $noUsageMsg ? ? 793 ? ? ? ? ? tscrfsParms="$tscrfsParms --subblocks-per-full-block $nSubblocksArg" ? ? 794 ? ? ? ? ? ;; Worky: ? ? 777 ? ? ? ? subblocks-per-full-block ) ? ? 778 ? ? ? ? ? if [[ -z $optArg ]] ? ? 779 ? ? ? ? ? then ? ? 780 ? ? ? ? ? ? # The expected argument is not in the same string as its ? ? 781 ? ? ? ? ? ? # option name. ?Get it from the next token. ? ? 782 ? ? ? ? ? ? eval optArg="\${$OPTIND}" ? ? 783 ? ? ? ? ? ? [[ -z $optArg ]] && ?\ ? ? 784 ? ? ? ? ? ? ? syntaxError "missingValue" $noUsageMsg "--$optName_lc" ? ? 785 ? ? ? ? ? ? shift 1 ? ? 786 ? ? ? ? ? fi ? ? 787 ? ? ? ? ? #[[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? 788 ? ? ? ? ? [[ -n $nSubblocksArg ?]] && ?\ ? ? 789 ? ? ? ? ? ? syntaxError "multiple" $noUsageMsg "--$optName_lc" ? ? 790 ? ? ? ? ? #subblocksPerFullBlockOpt="--$optName_lc" ? ? 791 ? ? ? ? ? nSubblocksArg="--$optName_lc" ? ? 792 ? ? 793 ? ? ? ? ? nSubblocksArg=$(checkIntRange --subblocks-per-full-block $optArg 32 8192) ? ? 794 ? ? ? ? ? [[ $? -ne 0 ]] && syntaxError nomsg $noUsageMsg ? ? 795 ? ? ? ? ? tscrfsParms="$tscrfsParms --subblocks-per-full-block $nSubblocksArg" ? ? 796 ? ? ? ? ? ;; Looks like someone got halfway through the variable change ?subblocksPerFullBlockOpt"?is referenced elsewhere in the script: if [[ -z $forceOption ]] then ? [[ -n $fflag ]] && ?\ ? ? syntaxError "invalidOption" $usageMsg "$fflag" ? [[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? syntaxError "invalidOption" $usageMsg "$subblocksPerFullBlockOpt" fi ...so this is probably naughty on my behalf. Kind Regards, Barry Evans CTO/Co-Founder Pixit Media Ltd +44 7950 666 248 bevans at pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Thu May 17 16:31:47 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 17 May 2018 16:31:47 +0100 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <1526473421.17680.57.camel@strath.ac.uk> , Message-ID: <1526571107.17680.81.camel@strath.ac.uk> On Wed, 2018-05-16 at 12:37 +0000, Daniel Kidger wrote: > Jonathan, > ? > Are you suggesting that a SMB?exported symlink to /etc/shadow is > somehow a bad thing ??:-) > The irony is that people are busy complaining about not being able to update their kernels for security reasons while someone else is complaining about not being able to do what can only be described in 2018 as very bad practice. 
The right answer IMHO is to forget about symlinks being followed server side and take the opportunity that migrating it all to GPFS gives you to re-architect your storage so they are no longer needed. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From Renar.Grunenberg at huk-coburg.de Thu May 17 17:13:30 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Thu, 17 May 2018 16:13:30 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. 
If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at ca.ibm.com Fri May 18 16:25:52 2018 From: bzhang at ca.ibm.com (Bohai Zhang) Date: Fri, 18 May 2018 11:25:52 -0400 Subject: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Message-ID: IBM Spectrum Scale Support Webinar Spectrum Scale Disk Lease, Expel & Recovery About this Webinar IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to share expertise and knowledge of the Spectrum Scale product, as well as product updates and best practices based on various use cases. This webinar introduces various concepts and features related to disk lease, node expel, and node recovery. It explains the mechanism of disk lease, the common scenarios and causes for node expel, and different phases of node recovery. It also explains DMS (Deadman Switch) timer which could trigger kernel panic as a result of lease expiry and hang I/O. This webinar also talks about best practice tuning, recent improvements to mitigate node expels and RAS improvements for expel debug data collection. Recent critical defects about node expel will also be discussed in this webinar. Please note that our webinars are free of charge and will be held online via WebEx. Agenda: ? Disk lease concept and mechanism ? Node expel concept, causes and use cases ? Node recover concept and explanation ? Parameter explanation and tuning ? Recent improvement and critical issues ? Q&A NA/EU Session Date: June 6, 2018 Time: 10 AM ? 11AM EDT (2 PM ? 3PM GMT) Registration: https://ibm.biz/BdZLgY Audience: Spectrum Scale Administrators AP/JP Session Date: June 6, 2018 Time: 10 AM ? 11 AM Beijing Time (11 AM ? 12 AM Tokyo Time) Registration: https://ibm.biz/BdZLgi Audience: Spectrum Scale Administrators If you have any questions, please contact IBM Spectrum Scale support. Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73794593.gif Type: image/gif Size: 2665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73540552.gif Type: image/gif Size: 275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73219387.gif Type: image/gif Size: 305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73169142.gif Type: image/gif Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73563875.gif Type: image/gif Size: 3621 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73474166.gif Type: image/gif Size: 1243 bytes Desc: not available URL: From skylar2 at uw.edu Fri May 18 16:32:05 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Fri, 18 May 2018 15:32:05 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery In-Reply-To: References: Message-ID: <20180518153205.beb5brsgadpnf7y3@utumno.gs.washington.edu> Hi Bohai, Will this be recorded? I'll be on vacation but am interested to learn about the topics under discussion. On Fri, May 18, 2018 at 11:25:52AM -0400, Bohai Zhang wrote: > > > > > > IBM Spectrum Scale Support Webinar > Spectrum Scale Disk Lease, Expel & Recovery > > > > > > > About this Webinar > IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to > share expertise and knowledge of the Spectrum Scale product, as well as > product updates and best practices based on various use cases. This webinar > introduces various concepts and features related to disk lease, node expel, > and node recovery. It explains the mechanism of disk lease, the common > scenarios and causes for node expel, and different phases of node recovery. > It also explains DMS (Deadman Switch) timer which could trigger kernel > panic as a result of lease expiry and hang I/O. This webinar also talks > about best practice tuning, recent improvements to mitigate node expels and > RAS improvements for expel debug data collection. Recent critical defects > about node expel will also be discussed in this webinar. > > > > > Please note that our webinars are free of charge and will be held online > via WebEx. > > Agenda: > > ? Disk lease concept and mechanism > > ? Node expel concept, causes and use cases > > ? Node recover concept and explanation > > > ? Parameter explanation and tuning > > > ? Recent improvement and critical issues > > > ? Q&A > > NA/EU Session > Date: June 6, 2018 > Time: 10 AM ??? 11AM EDT (2 PM ??? 3PM GMT) > Registration: https://ibm.biz/BdZLgY > Audience: Spectrum Scale Administrators > > AP/JP Session > Date: June 6, 2018 > Time: 10 AM ??? 11 AM Beijing Time (11 AM ??? 12 AM Tokyo Time) > Registration: https://ibm.biz/BdZLgi > Audience: Spectrum Scale Administrators > > > If you have any questions, please contact IBM Spectrum Scale support. 
> > Regards, > > > > > > > IBM > Spectrum > Computing > > Bohai Zhang Critical > Senior Technical Leader, IBM Systems Situation > Tel: 1-905-316-2727 Resolver > Mobile: 1-416-897-7488 Expert Badge > Email: bzhang at ca.ibm.com > 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada > Live Chat at IBMStorageSuptMobile Apps > > > > Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM > | dWA > We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to > recommend IBM. > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From Robert.Oesterlin at nuance.com Fri May 18 16:37:48 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 18 May 2018 15:37:48 +0000 Subject: [gpfsug-discuss] Presentations from the May 16-17 User Group meeting in Cambridge Message-ID: Thanks to all the presenters and attendees, it was a great get-together. I?ll be posting these soon to spectrumscale.org, but I need to sort out the size restrictions with Simon, so it may be a few more days. Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... URL: From smita.raut at in.ibm.com Fri May 18 17:10:11 2018 From: smita.raut at in.ibm.com (Smita J Raut) Date: Fri, 18 May 2018 21:40:11 +0530 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de><6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Message-ID: Hi Renar, Yes we plan to include newer pyOpenSSL in 5.0.1.1 Thanks, Smita From: "Grunenberg, Renar" To: 'gpfsug main discussion list' Date: 05/17/2018 09:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. Von: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm >From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of " olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" < gpfsug-discuss at spectrumscale.org> Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Fri May 18 18:07:56 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Fri, 18 May 2018 17:07:56 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de><6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Message-ID: Hallo Smita, thanks that sounds good. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Freitag, 18. 
Mai 2018 18:10 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Hi Renar, Yes we plan to include newer pyOpenSSL in 5.0.1.1 Thanks, Smita From: "Grunenberg, Renar" > To: 'gpfsug main discussion list' > Date: 05/17/2018 09:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list > Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? 
Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at ca.ibm.com Fri May 18 19:19:24 2018 From: bzhang at ca.ibm.com (Bohai Zhang) Date: Fri, 18 May 2018 14:19:24 -0400 Subject: [gpfsug-discuss] Fw: IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Message-ID: Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. ----- Forwarded by Bohai Zhang/Ontario/IBM on 2018/05/18 02:18 PM ----- From: Bohai Zhang/Ontario/IBM To: Skylar Thompson Date: 2018/05/18 11:40 AM Subject: Re: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Hi Skylar, Thanks for your interesting. It will be recorded. If you register, we will send you a following up email after the webinar which will contain the link to the recording. Have a nice weekend. Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. From: Skylar Thompson To: bzhang at ca.ibm.com Cc: gpfsug-discuss at spectrumscale.org Date: 2018/05/18 11:34 AM Subject: Re: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Hi Bohai, Will this be recorded? I'll be on vacation but am interested to learn about the topics under discussion. 
On Fri, May 18, 2018 at 11:25:52AM -0400, Bohai Zhang wrote: > > > > > > IBM Spectrum Scale Support Webinar > Spectrum Scale Disk Lease, Expel & Recovery > > > > > > > About this Webinar > IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to > share expertise and knowledge of the Spectrum Scale product, as well as > product updates and best practices based on various use cases. This webinar > introduces various concepts and features related to disk lease, node expel, > and node recovery. It explains the mechanism of disk lease, the common > scenarios and causes for node expel, and different phases of node recovery. > It also explains DMS (Deadman Switch) timer which could trigger kernel > panic as a result of lease expiry and hang I/O. This webinar also talks > about best practice tuning, recent improvements to mitigate node expels and > RAS improvements for expel debug data collection. Recent critical defects > about node expel will also be discussed in this webinar. > > > > > Please note that our webinars are free of charge and will be held online > via WebEx. > > Agenda: > > ? Disk lease concept and mechanism > > ? Node expel concept, causes and use cases > > ? Node recover concept and explanation > > > ? Parameter explanation and tuning > > > ? Recent improvement and critical issues > > > ? Q&A > > NA/EU Session > Date: June 6, 2018 > Time: 10 AM ??? 11AM EDT (2 PM ??? 3PM GMT) > Registration: https://ibm.biz/BdZLgY > Audience: Spectrum Scale Administrators > > AP/JP Session > Date: June 6, 2018 > Time: 10 AM ??? 11 AM Beijing Time (11 AM ??? 12 AM Tokyo Time) > Registration: https://ibm.biz/BdZLgi > Audience: Spectrum Scale Administrators > > > If you have any questions, please contact IBM Spectrum Scale support. > > Regards, > > > > > > > IBM > Spectrum > Computing > > Bohai Zhang Critical > Senior Technical Leader, IBM Systems Situation > Tel: 1-905-316-2727 Resolver > Mobile: 1-416-897-7488 Expert Badge > Email: bzhang at ca.ibm.com > 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada > Live Chat at IBMStorageSuptMobile Apps > > > > Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM > | dWA > We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to > recommend IBM. > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F310241.gif Type: image/gif Size: 2665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F811734.gif Type: image/gif Size: 275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F210195.gif Type: image/gif Size: 305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 7F911712.gif Type: image/gif Size: 331 bytes Desc: not available URL: From hopii at interia.pl Fri May 18 19:53:57 2018 From: hopii at interia.pl (hopii at interia.pl) Date: Fri, 18 May 2018 20:53:57 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos authentication issue Message-ID: Hi there, I'm just learning how to configure Spectrum Scale SMB file authentication using LDAP (IPA) with Kerberos, and I have been struggling with it for a couple of days without success. Users on the Spectrum cluster and on the client machine are authenticated properly, so LDAP should be fine. An NFS mount with Kerberos works with no issues as well. But I have run out of ideas on how to configure SMB using LDAP with Kerberos. I may have messed up the netbios names, as I am not sure which one to use: the one from the cluster node or the one from the protocol node. But the error message seems to point to the keytab file, which is present on both the server and the client nodes. I ran into a similar post, dated a few days ago, so I'm not the only one. https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html Below are my configuration and the error message, and I'd appreciate any hints or help. Thank you, d. Error message from /var/adm/ras/log.smbd [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) GENSEC backend 'ntlmssp_resume_ccache' registered [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) smb_gss_krb5_import_cred failed with [Unspecified GSS failure.
Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR Cluster nodes spectrum1.example.com RedHat 7.4 spectrum2.example.com RedHat 7.4 spectrum3.example.com RedHat 7.4 Protocols nodes: labs1.example.com lasb2.example.com labs3.example.com ssipa.example.com Centos 7.5 spectrum scale server: [root at spectrum1 security]# klist -k Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 host/labs1.example.com at example.com 1 host/labs1.example.com at example.com 1 host/labs2.example.com at example.com 1 host/labs2.example.com at example.com 1 host/labs3.example.com at example.com 1 host/labs3.example.com at example.com 1 nfs/labs1.example.com at example.com 1 nfs/labs1.example.com at example.com 1 nfs/labs2.example.com at example.com 1 nfs/labs2.example.com at example.com 1 nfs/labs3.example.com at example.com 1 nfs/labs3.example.com at example.com 1 cifs/labs1.example.com at example.com 1 cifs/labs1.example.com at example.com 1 cifs/labs2.example.com at example.com 1 cifs/labs2.example.com at example.com 1 cifs/labs3.example.com at example.com 1 cifs/labs3.example.com at example.com [root at spectrum1 security]# net conf list [global] disable netbios = yes disable spoolss = yes printcap cache time = 0 fileid:algorithm = fsname fileid:fstype allow = gpfs syncops:onmeta = no preferred master = no client NTLMv2 auth = yes kernel oplocks = no level2 oplocks = yes debug hires timestamp = yes max log size = 100000 host msdfs = yes notify:inotify = yes wide links = no log writeable files on exit = yes ctdb locktime warn threshold = 5000 auth methods = guest sam winbind smbd:backgroundqueue = False read only = no use sendfile = no strict locking = auto posix locking = no large readwrite = yes aio read size = 1 aio write size = 1 force unknown acl user = yes store dos attributes = yes map readonly = yes map archive = yes map system = yes map hidden = yes ea support = yes groupdb:backend = tdb winbind:online check timeout = 30 winbind max domain connections = 5 winbind max clients = 10000 dmapi support = no unix extensions = no socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 strict allocate = yes tdbsam:map builtin = no aio_pthread:aio open = yes dfree cache time = 100 change notify = yes max open files = 20000 time_audit:timeout = 5000 gencache:stabilize_count = 10000 server min protocol = SMB2_02 server max protocol = SMB3_02 vfs objects = shadow_copy2 syncops gpfs fileid time_audit smbd profiling level = on log level = 1 logging = syslog at 0 file smbd exit on ip drop = yes durable handles = no ctdb:smbxsrv_open_global.tdb = false mangled names = illegal include system krb5 conf = no smbd:async search ask sharemode = yes gpfs:sharemodes = yes gpfs:leases = yes gpfs:dfreequota = yes gpfs:prealloc = yes gpfs:hsm = yes gpfs:winattr = yes gpfs:merge_writeappend = no fruit:metadata = stream fruit:nfs_aces = no fruit:veto_appledouble = no readdir_attr:aapl_max_access = false shadow:snapdir = .snapshots shadow:fixinodes = yes shadow:snapdirseverywhere = yes shadow:sort = desc nfs4:mode = simple nfs4:chown = yes nfs4:acedup = merge add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport delete share command = 
/usr/lpp/mmfs/bin/mmcesmmcdelexport server string = IBM NAS client use spnego = yes kerberos method = system keytab ldap admin dn = cn=Directory Manager ldap ssl = start tls ldap suffix = dc=example,dc=com netbios name = spectrum1 passdb backend = ldapsam:"ldap://ssipa.example.com" realm = example.com security = ADS dedicated keytab file = /etc/krb5.keytab password server = ssipa.example.com idmap:cache = no idmap config * : read only = no idmap config * : backend = autorid idmap config * : range = 10000000-299999999 idmap config * : rangesize = 1000000 workgroup = labs1 ntlm auth = yes [share1] path = /ibm/gpfs1/labs1 guest ok = no browseable = yes comment = jas share smb encrypt = disabled [root at spectrum1 ~]# mmsmb export list export path browseable guest ok smb encrypt share1 /ibm/gpfs1/labs1 yes no disabled userauth command: mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com root at spectrum1 ~]# mmuserauth service list FILE access configuration : LDAP PARAMETERS VALUES ------------------------------------------------- ENABLE_SERVER_TLS true ENABLE_KERBEROS true USER_NAME cn=Directory Manager SERVERS ssipa.example.com NETBIOS_NAME spectrum1 BASE_DN dc=example,dc=com USER_DN none GROUP_DN none NETGROUP_DN none USER_OBJECTCLASS posixAccount GROUP_OBJECTCLASS posixGroup USER_NAME_ATTRIB cn USER_ID_ATTRIB uid KERBEROS_SERVER ssipa.example.com KERBEROS_REALM example.com OBJECT access not configured PARAMETERS VALUES ------------------------------------------------- net ads keytab list -> does not show any keys LDAP user information was updated with Samba attributes according to the documentation: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm [root at spectrum1 ~]# pdbedit -L -v Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 No builtin backend found, trying to load plugin Module 'ldapsam' loaded db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] StartTLS issued: using a TLS connection smbldap_open_connection: connection opened ldap_connect_system: successful connection to the LDAP server smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] smbldap_search_paged: search was successful init_sam_from_ldap: Entry found for user: jas --------------- Unix username: jas NT username: jas Account Flags: [U ] User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 Forcing Primary Group to 'Domain Users' for jas Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 Full Name: jas jas Home Directory: \\spectrum1\jas HomeDir Drive: Logon Script: Profile Path: \\spectrum1\jas\profile Domain: SPECTRUM1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Thu, 17 May 2018 14:08:01 EDT Password can change: Thu, 17 May 2018 14:08:01 EDT Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF Client keytab file: [root at test ~]# klist -k 
Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 host/test.example.com at example.com 1 host/test.example.com at example.com From christof.schmitt at us.ibm.com Sat May 19 00:05:56 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Fri, 18 May 2018 23:05:56 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos authentication issue In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From spectrumscale at kiranghag.com Sat May 19 05:00:04 2018 From: spectrumscale at kiranghag.com (KG) Date: Sat, 19 May 2018 09:30:04 +0530 Subject: [gpfsug-discuss] NFS on system Z Message-ID: Hi The SS FAQ says following for system Z - Cluster Export Service (CES) is not supported. (Monitoring capabilities, Object, CIFS, User space implementation of NFS) - Kernel NFS (v3 and v4) is supported. Clustered NFS is not supported. Does this mean we can only configure OS based non-redundant NFS exports from scale nodes without CNFS/CES? Kiran Ghag -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Sat May 19 07:58:41 2018 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Sat, 19 May 2018 08:58:41 +0200 Subject: [gpfsug-discuss] NFS on system Z In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Sun May 20 19:42:32 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sun, 20 May 2018 18:42:32 +0000 Subject: [gpfsug-discuss] NFS on system Z In-Reply-To: Message-ID: Kieran, You can also add x86 nodes to run CES and Ganesha NFS. Either in the same cluster or perhaps neater in a separate multi-cluster Mount. Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 19 May 2018, at 07:58, Olaf Weiser wrote: > > HI, > yes.. CES comes along with lots of monitors about status, health checks and a special NFS (ganesha) code.. which is optimized / available only for a limited choice of OS/platforms > so CES is not available for e.g. AIX and in your case... not available for systemZ ... > > but - of course you can setup your own NFS server .. > > > > > From: KG > To: gpfsug main discussion list > Date: 05/19/2018 06:00 AM > Subject: [gpfsug-discuss] NFS on system Z > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi > > The SS FAQ says following for system Z > Cluster Export Service (CES) is not supported. (Monitoring capabilities, Object, CIFS, User space implementation of NFS) > Kernel NFS (v3 and v4) is supported. Clustered NFS is not supported. > > Does this mean we can only configure OS based non-redundant NFS exports from scale nodes without CNFS/CES? > > Kiran Ghag > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Sun May 20 22:39:41 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 20 May 2018 21:39:41 +0000 Subject: [gpfsug-discuss] Presentations for Spectrum Scale USA - May 16th-17th Message-ID: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> I?ve uploaded what I have received so far to the spectrumscale.org website, and they are located here: https://www.spectrumscaleug.org/presentations/2018/ Still working on the other authors for their content. Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.s.knister at nasa.gov Mon May 21 02:41:08 2018 From: aaron.s.knister at nasa.gov (Aaron Knister) Date: Sun, 20 May 2018 21:41:08 -0400 (EDT) Subject: [gpfsug-discuss] Presentations for Spectrum Scale USA - May 16th-17th In-Reply-To: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> References: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> Message-ID: I must admit, I got a chuckle out of this typo: Compostable Infrastructure for Technical Computing sadly, I'm sure we all have stories about what we would consider "compostable" infrastructure. -Aaron -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 On Sun, 20 May 2018, Oesterlin, Robert wrote: > > I?ve uploaded what I have received so far to the spectrumscale.org website, and they are located here: > > ? > > https://www.spectrumscaleug.org/presentations/2018/ > > ? > > Still working on the other authors for their content. > > ? > > ? > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > ? > > > From bbanister at jumptrading.com Mon May 21 21:51:54 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 21 May 2018 20:51:54 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> Message-ID: <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? 
-Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
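For anyone scripting this cleanup, here is a minimal sketch (not an official GPFS tool) that lists the user entries still marked as explicitly set (entryType "e" in the mmrepquota -v reports above) and resets them with the colon-separated mmedquota form from the quick update at the top of this thread. The filesystem:fileset:user ordering follows the wording of that update; the angle-bracket placeholders were lost in the archive, so verify the exact order against the mmedquota man page before running anything like this.

#!/usr/bin/env python
# Minimal sketch, not an official GPFS tool. Assumptions:
#   - mmrepquota/mmedquota live in /usr/lpp/mmfs/bin (standard location),
#   - "mmrepquota -v Device:Fileset" output is laid out like the report
#     above, with entryType as the last column and "e" = explicitly set,
#   - the colon-separated target for "mmedquota -d -u" is
#     filesystem:fileset:user, per the description above; check the man
#     page on your release before relying on that ordering.
import subprocess

GPFS_BIN = "/usr/lpp/mmfs/bin"

def explicit_user_entries(device_fileset):
    """Return (user, fileset) pairs whose USR quota entry is explicit."""
    out = subprocess.check_output(
        [GPFS_BIN + "/mmrepquota", "-v", device_fileset],
        universal_newlines=True)
    entries = []
    for line in out.splitlines():
        fields = line.split()
        # Data rows look like: "bbanister root USR 84 ... none e"
        if len(fields) > 3 and fields[2] == "USR" and fields[-1] == "e":
            entries.append((fields[0], fields[1]))
    return entries

def reset_to_default(filesystem, fileset, user):
    """Reset one user's quota in one fileset back to the default."""
    target = "%s:%s:%s" % (filesystem, fileset, user)   # assumed order
    subprocess.check_call([GPFS_BIN + "/mmedquota", "-d", "-u", target])

if __name__ == "__main__":
    fs, fset = "fpi_test02", "root"            # values from the thread
    for user, fileset in explicit_user_entries("%s:%s" % (fs, fset)):
        print("resetting explicit entry: %s in %s" % (user, fileset))
        reset_to_default(fs, fileset, user)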
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Tue May 22 09:01:21 2018 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Tue, 22 May 2018 16:01:21 +0800 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com><672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com><723293fee7214938ae20cdfdbaf99149@jumptrading.com><3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Message-ID: Hi Kuei-Yu, Should we update the document as requested below? Thanks.
Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Bryan Banister To: gpfsug main discussion list Date: 05/22/2018 04:52 AM Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. 
The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Tue May 22 09:51:51 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Tue, 22 May 2018 08:51:51 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: Hi all, This has been resolved by (I presume what Jonathan was referring to in his posts) setting "dfree cache time" to 0. Many thanks for everyone's input on this! Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Sobey, Richard A Sent: 14 May 2018 12:54 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Thanks Jonathan. What I failed to mention in my OP was that MacOS clients DO report the correct size of each mounted folder. Not sure how that changes anything except to reinforce the idea that it's Windows at fault. Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 14 May 2018 11:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. 
I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From p.childs at qmul.ac.uk Tue May 22 10:23:58 2018 From: p.childs at qmul.ac.uk (Peter Childs) Date: Tue, 22 May 2018 09:23:58 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Message-ID: Its a little difficult that the different quota commands for Spectrum Scale are all different in there syntax and can only be used by the "right" people. As far as I can see mmedquota is the only quota command that uses this "full colon" syntax and it would be better if its syntax matched that for mmsetquota and mmlsquota. or that the reset to default quota was added to mmsetquota and mmedquota was left for editing quotas visually in an editor. Regards Peter Childs On Tue, 2018-05-22 at 16:01 +0800, IBM Spectrum Scale wrote: Hi Kuei-Yu, Should we update the document as the requested below ? Thanks. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. [Inactive hide details for Bryan Banister ---05/22/2018 04:52:15 AM---Quick update. Thanks to a colleague of mine, John Valdes,]Bryan Banister ---05/22/2018 04:52:15 AM---Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system From: Bryan Banister To: gpfsug main discussion list Date: 05/22/2018 04:52 AM Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ Unfortunately it doesn?t look like there is a way to target a specific quota. 
So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Peter Childs ITS Research Storage Queen Mary, University of London -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From valleru at cbio.mskcc.org Tue May 22 16:42:43 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 11:42:43 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Message-ID: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dwayne.Hart at med.mun.ca Tue May 22 16:45:07 2018 From: Dwayne.Hart at med.mun.ca (Dwayne.Hart at med.mun.ca) Date: Tue, 22 May 2018 15:45:07 +0000 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: Hi Lohit, What type of network are you using on the back end to transfer the GPFS traffic? 
Best, Dwayne From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Tuesday, May 22, 2018 1:13 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 22 17:40:26 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 12:40:26 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> 10G Ethernet. Thanks, Lohit On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: > Hi Lohit, > > What type of network are you using on the back end to transfer the GPFS traffic? 
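Before going further with the downgrade question, it is probably worth capturing the relevant version facts in one place. Below is a minimal sketch, assuming the standard /usr/lpp/mmfs/bin command location and using "gpfs1" as a placeholder device name: it records the installed daemon build, the cluster minReleaseLevel and the file system format version (as long as mmchconfig release=LATEST and mmchfs -V were never run, the last two should still report the 4.2.x level, which is generally what keeps a plain reinstall of the older RPMs on the table), and it grabs the waiters that need to be collected while a hang is actually in progress.

#!/bin/bash
# Pre-downgrade fact-gathering sketch. "gpfs1" is a placeholder device name.
export PATH=$PATH:/usr/lpp/mmfs/bin    # standard GPFS command location

echo "== Installed daemon build =="
mmdiag --version

echo "== Cluster compatibility level (should still be 4.2.x if release=LATEST was never run) =="
mmlsconfig minReleaseLevel

echo "== File system format version (should still show the 4.2.2 format mentioned in this thread) =="
mmlsfs gpfs1 -V

echo "== Waiters -- run this part while a 'ps' hang is actually happening =="
mmdiag --waiters
# A full gpfs.snap taken at the same moment is what support will normally ask for.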
> > Best, > Dwayne > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > Sent: Tuesday, May 22, 2018 1:13 PM > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 > > Hello All, > > We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) > Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) > The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. > > I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. > However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. > Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. > > One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. > Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. > > Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. > However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. > > Can downgrading GPFS take us back to exactly the previous GPFS config state? > With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? > Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 > > Our previous state: > > 2 Storage clusters - 4.2.3.2 > 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) > > Our current state: > > 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) > 1 Compute cluster - 5.0.0.2 > > Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? > > Any advice on the best steps forward, would greatly help. > > Thanks, > > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dwayne.Hart at med.mun.ca Tue May 22 17:54:43 2018 From: Dwayne.Hart at med.mun.ca (Dwayne.Hart at med.mun.ca) Date: Tue, 22 May 2018 16:54:43 +0000 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. 
Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> , <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> Message-ID: We are having issues with ESS/Mellanox implementation and were curious as to what you were working with from a network perspective. Best, Dwayne ? Dwayne Hart | Systems Administrator IV CHIA, Faculty of Medicine Memorial University of Newfoundland 300 Prince Philip Drive St. John?s, Newfoundland | A1B 3V6 Craig L Dobbin Building | 4M409 T 709 864 6631 On May 22, 2018, at 2:10 PM, "valleru at cbio.mskcc.org" > wrote: 10G Ethernet. Thanks, Lohit On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: Hi Lohit, What type of network are you using on the back end to transfer the GPFS traffic? Best, Dwayne From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Tuesday, May 22, 2018 1:13 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? 
or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 22 19:16:28 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 14:16:28 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> Message-ID: <7cb337ab-7824-40a6-9bbf-b2cd62ec97cf@Spark> Thank Dwayne. I don?t think, we are facing anything else from network perspective as of now. We were seeing deadlocks initially when we upgraded to 5.0, but it might not be because of network. We also see deadlocks now, but they are mostly caused due to high waiters i believe. I have temporarily disabled deadlocks. Thanks, Lohit On May 22, 2018, 12:54 PM -0400, Dwayne.Hart at med.mun.ca, wrote: > We are having issues with ESS/Mellanox implementation and were curious as to what you were working with from a network perspective. > > Best, > Dwayne > ? > Dwayne Hart | Systems Administrator IV > > CHIA, Faculty of Medicine > Memorial University of Newfoundland > 300 Prince Philip Drive > St. John?s, Newfoundland | A1B 3V6 > Craig L Dobbin Building | 4M409 > T 709 864 6631 > > On May 22, 2018, at 2:10 PM, "valleru at cbio.mskcc.org" wrote: > > > 10G Ethernet. > > > > Thanks, > > Lohit > > > > On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: > > > Hi Lohit, > > > > > > What type of network are you using on the back end to transfer the GPFS traffic? > > > > > > Best, > > > Dwayne > > > > > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > > > Sent: Tuesday, May 22, 2018 1:13 PM > > > To: gpfsug main discussion list > > > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 > > > > > > Hello All, > > > > > > We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) > > > Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) > > > The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. > > > > > > I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. > > > However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. > > > Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. > > > > > > One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. 
> > > Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. > > > > > > Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. > > > However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. > > > > > > Can downgrading GPFS take us back to exactly the previous GPFS config state? > > > With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? > > > Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 > > > > > > Our previous state: > > > > > > 2 Storage clusters - 4.2.3.2 > > > 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) > > > > > > Our current state: > > > > > > 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) > > > 1 Compute cluster - 5.0.0.2 > > > > > > Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? > > > > > > Any advice on the best steps forward, would greatly help. > > > > > > Thanks, > > > > > > Lohit > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From hopii at interia.pl Tue May 22 20:43:52 2018 From: hopii at interia.pl (hopii at interia.pl) Date: Tue, 22 May 2018 21:43:52 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: References: Message-ID: Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. 
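For the archive, here is a quick client-side smoke test that the Kerberos path really is being used end to end. It is only a sketch: the host, share and user names (labs1.example.com, share1, jas) are borrowed from the configuration quoted below, so substitute your own.

# Run on the SMB client once it is enrolled in the same realm.
kinit jas                                         # obtain a TGT for the test user
smbclient -k //labs1.example.com/share1 -c 'ls'   # -k = authenticate with Kerberos, no password prompt
klist                                             # should now also list a cifs/labs1.example.com service ticket

If smbclient succeeds without ever asking for a password and klist shows the cifs service ticket, the keytab and service principals on the protocol nodes are being picked up correctly.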
Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. Re: Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (Christof Schmitt) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 18 May 2018 20:53:57 +0200 > From: hopii at interia.pl > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos > authentication issue > Message-ID: > Content-Type: text/plain; charset="UTF-8" > > Hi there, > > I'm just learning, trying to configure Spectrum Scale: SMB File Authentication using LDAP (IPA) with kerberos, and been struggling with it for a couple of days, without success. > > Users on spectrum cluster and client machine are authenticated properly, so ldap should be fine. > NFS mount with keberos works with no issues as well. > > But I ran out of ideas how to configure SMB using LDAP with kerberos. > > I could messed up with netbios names, as am not sure which one to use, from cluster node, from protocol node, exactly which one. > But error message seems to point to keytab file, which is present on both, server and client nodes. > > I ran into simillar post, dated few days ago, so I'm not the only one. > https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html > > > Below is my configuration and error message, and I'd appreciate any hints or help. > > Thank you, > d. > > > > Error message from /var/adm/ras/log.smbd > > [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) > GENSEC backend 'ntlmssp_resume_ccache' registered > [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) > smb_gss_krb5_import_cred failed with [Unspecified GSS failure. 
Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] > [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) > Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR > > > > Cluster nodes > spectrum1.example.com RedHat 7.4 > spectrum2.example.com RedHat 7.4 > spectrum3.example.com RedHat 7.4 > > Protocols nodes: > labs1.example.com > lasb2.example.com > labs3.example.com > > > ssipa.example.com Centos 7.5 > > > > spectrum scale server: > > [root at spectrum1 security]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/labs1.example.com at example.com > 1 host/labs1.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs3.example.com at example.com > 1 host/labs3.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs3.example.com at example.com > 1 cifs/labs3.example.com at example.com > > > > > [root at spectrum1 security]# net conf list > [global] > disable netbios = yes > disable spoolss = yes > printcap cache time = 0 > fileid:algorithm = fsname > fileid:fstype allow = gpfs > syncops:onmeta = no > preferred master = no > client NTLMv2 auth = yes > kernel oplocks = no > level2 oplocks = yes > debug hires timestamp = yes > max log size = 100000 > host msdfs = yes > notify:inotify = yes > wide links = no > log writeable files on exit = yes > ctdb locktime warn threshold = 5000 > auth methods = guest sam winbind > smbd:backgroundqueue = False > read only = no > use sendfile = no > strict locking = auto > posix locking = no > large readwrite = yes > aio read size = 1 > aio write size = 1 > force unknown acl user = yes > store dos attributes = yes > map readonly = yes > map archive = yes > map system = yes > map hidden = yes > ea support = yes > groupdb:backend = tdb > winbind:online check timeout = 30 > winbind max domain connections = 5 > winbind max clients = 10000 > dmapi support = no > unix extensions = no > socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 > strict allocate = yes > tdbsam:map builtin = no > aio_pthread:aio open = yes > dfree cache time = 100 > change notify = yes > max open files = 20000 > time_audit:timeout = 5000 > gencache:stabilize_count = 10000 > server min protocol = SMB2_02 > server max protocol = SMB3_02 > vfs objects = shadow_copy2 syncops gpfs fileid time_audit > smbd profiling level = on > log level = 1 > logging = syslog at 0 file > smbd exit on ip drop = yes > durable handles = no > ctdb:smbxsrv_open_global.tdb = false > mangled names = illegal > include system krb5 conf = no > smbd:async search ask sharemode = yes > gpfs:sharemodes = yes > gpfs:leases = yes > gpfs:dfreequota = yes > gpfs:prealloc = yes > gpfs:hsm = yes > gpfs:winattr = yes > gpfs:merge_writeappend = no > fruit:metadata = stream > fruit:nfs_aces = no > fruit:veto_appledouble = no > readdir_attr:aapl_max_access = false > shadow:snapdir = .snapshots > shadow:fixinodes = yes > shadow:snapdirseverywhere = 
yes > shadow:sort = desc > nfs4:mode = simple > nfs4:chown = yes > nfs4:acedup = merge > add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport > change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport > delete share command = /usr/lpp/mmfs/bin/mmcesmmcdelexport > server string = IBM NAS > client use spnego = yes > kerberos method = system keytab > ldap admin dn = cn=Directory Manager > ldap ssl = start tls > ldap suffix = dc=example,dc=com > netbios name = spectrum1 > passdb backend = ldapsam:"ldap://ssipa.example.com" > realm = example.com > security = ADS > dedicated keytab file = /etc/krb5.keytab > password server = ssipa.example.com > idmap:cache = no > idmap config * : read only = no > idmap config * : backend = autorid > idmap config * : range = 10000000-299999999 > idmap config * : rangesize = 1000000 > workgroup = labs1 > ntlm auth = yes > > [share1] > path = /ibm/gpfs1/labs1 > guest ok = no > browseable = yes > comment = jas share > smb encrypt = disabled > > > [root at spectrum1 ~]# mmsmb export list > export path browseable guest ok smb encrypt > share1 /ibm/gpfs1/labs1 yes no disabled > > > > userauth command: > mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com > > > root at spectrum1 ~]# mmuserauth service list > FILE access configuration : LDAP > PARAMETERS VALUES > ------------------------------------------------- > ENABLE_SERVER_TLS true > ENABLE_KERBEROS true > USER_NAME cn=Directory Manager > SERVERS ssipa.example.com > NETBIOS_NAME spectrum1 > BASE_DN dc=example,dc=com > USER_DN none > GROUP_DN none > NETGROUP_DN none > USER_OBJECTCLASS posixAccount > GROUP_OBJECTCLASS posixGroup > USER_NAME_ATTRIB cn > USER_ID_ATTRIB uid > KERBEROS_SERVER ssipa.example.com > KERBEROS_REALM example.com > > OBJECT access not configured > PARAMETERS VALUES > ------------------------------------------------- > > net ads keytab list -> does not show any keys > > > LDAP user information was updated with Samba attributes according to the documentation: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm > > > [root at spectrum1 ~]# pdbedit -L -v > Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 > Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 > No builtin backend found, trying to load plugin > Module 'ldapsam' loaded > db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b > db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 > smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] > StartTLS issued: using a TLS connection > smbldap_open_connection: connection opened > ldap_connect_system: successful connection to the LDAP server > smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] > smbldap_search_paged: search was successful > init_sam_from_ldap: Entry found for user: jas > --------------- > Unix username: jas > NT username: jas > Account Flags: [U ] > User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 > Forcing Primary Group to 'Domain Users' for jas > Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 > Full Name: jas jas > Home Directory: \\spectrum1\jas > HomeDir Drive: > Logon Script: > Profile 
Path: \\spectrum1\jas\profile > Domain: SPECTRUM1 > Account desc: > Workstations: > Munged dial: > Logon time: 0 > Logoff time: never > Kickoff time: never > Password last set: Thu, 17 May 2018 14:08:01 EDT > Password can change: Thu, 17 May 2018 14:08:01 EDT > Password must change: never > Last bad password : 0 > Bad password count : 0 > Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF > > > > Client keytab file: > [root at test ~]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/test.example.com at example.com > 1 host/test.example.com at example.com > > > > ------------------------------ > > Message: 2 > Date: Fri, 18 May 2018 23:05:56 +0000 > From: "Christof Schmitt" > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP > kerberos authentication issue > Message-ID: > > > Content-Type: text/plain; charset="us-ascii" > > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > From alvise.dorigo at psi.ch Wed May 23 08:41:50 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 23 May 2018 07:41:50 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: References: , Message-ID: <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> Hi Felix, yes please, configure jumbo frames for both ports. And yes, I'll check the cable (I used an old one, without any label 25G). thanks, A ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of hopii at interia.pl [hopii at interia.pl] Sent: Tuesday, May 22, 2018 9:43 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. 
Re: Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (Christof Schmitt) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 18 May 2018 20:53:57 +0200 > From: hopii at interia.pl > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos > authentication issue > Message-ID: > Content-Type: text/plain; charset="UTF-8" > > Hi there, > > I'm just learning, trying to configure Spectrum Scale: SMB File Authentication using LDAP (IPA) with kerberos, and been struggling with it for a couple of days, without success. > > Users on spectrum cluster and client machine are authenticated properly, so ldap should be fine. > NFS mount with keberos works with no issues as well. > > But I ran out of ideas how to configure SMB using LDAP with kerberos. > > I could messed up with netbios names, as am not sure which one to use, from cluster node, from protocol node, exactly which one. > But error message seems to point to keytab file, which is present on both, server and client nodes. > > I ran into simillar post, dated few days ago, so I'm not the only one. > https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html > > > Below is my configuration and error message, and I'd appreciate any hints or help. > > Thank you, > d. > > > > Error message from /var/adm/ras/log.smbd > > [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) > GENSEC backend 'ntlmssp_resume_ccache' registered > [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) > smb_gss_krb5_import_cred failed with [Unspecified GSS failure. Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] > [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) > Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR > > > > Cluster nodes > spectrum1.example.com RedHat 7.4 > spectrum2.example.com RedHat 7.4 > spectrum3.example.com RedHat 7.4 > > Protocols nodes: > labs1.example.com > lasb2.example.com > labs3.example.com > > > ssipa.example.com Centos 7.5 > > > > spectrum scale server: > > [root at spectrum1 security]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/labs1.example.com at example.com > 1 host/labs1.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs3.example.com at example.com > 1 host/labs3.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs3.example.com at example.com > 1 cifs/labs3.example.com at example.com > > > > > [root at spectrum1 security]# net conf list > [global] > disable netbios = yes > disable spoolss = yes > printcap cache time = 0 > fileid:algorithm = fsname > fileid:fstype allow = gpfs > syncops:onmeta = no > preferred master = no > client NTLMv2 auth = yes > kernel oplocks = no > level2 oplocks = yes > debug hires timestamp = yes > max log size = 100000 > host msdfs = yes > 
notify:inotify = yes > wide links = no > log writeable files on exit = yes > ctdb locktime warn threshold = 5000 > auth methods = guest sam winbind > smbd:backgroundqueue = False > read only = no > use sendfile = no > strict locking = auto > posix locking = no > large readwrite = yes > aio read size = 1 > aio write size = 1 > force unknown acl user = yes > store dos attributes = yes > map readonly = yes > map archive = yes > map system = yes > map hidden = yes > ea support = yes > groupdb:backend = tdb > winbind:online check timeout = 30 > winbind max domain connections = 5 > winbind max clients = 10000 > dmapi support = no > unix extensions = no > socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 > strict allocate = yes > tdbsam:map builtin = no > aio_pthread:aio open = yes > dfree cache time = 100 > change notify = yes > max open files = 20000 > time_audit:timeout = 5000 > gencache:stabilize_count = 10000 > server min protocol = SMB2_02 > server max protocol = SMB3_02 > vfs objects = shadow_copy2 syncops gpfs fileid time_audit > smbd profiling level = on > log level = 1 > logging = syslog at 0 file > smbd exit on ip drop = yes > durable handles = no > ctdb:smbxsrv_open_global.tdb = false > mangled names = illegal > include system krb5 conf = no > smbd:async search ask sharemode = yes > gpfs:sharemodes = yes > gpfs:leases = yes > gpfs:dfreequota = yes > gpfs:prealloc = yes > gpfs:hsm = yes > gpfs:winattr = yes > gpfs:merge_writeappend = no > fruit:metadata = stream > fruit:nfs_aces = no > fruit:veto_appledouble = no > readdir_attr:aapl_max_access = false > shadow:snapdir = .snapshots > shadow:fixinodes = yes > shadow:snapdirseverywhere = yes > shadow:sort = desc > nfs4:mode = simple > nfs4:chown = yes > nfs4:acedup = merge > add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport > change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport > delete share command = /usr/lpp/mmfs/bin/mmcesmmcdelexport > server string = IBM NAS > client use spnego = yes > kerberos method = system keytab > ldap admin dn = cn=Directory Manager > ldap ssl = start tls > ldap suffix = dc=example,dc=com > netbios name = spectrum1 > passdb backend = ldapsam:"ldap://ssipa.example.com" > realm = example.com > security = ADS > dedicated keytab file = /etc/krb5.keytab > password server = ssipa.example.com > idmap:cache = no > idmap config * : read only = no > idmap config * : backend = autorid > idmap config * : range = 10000000-299999999 > idmap config * : rangesize = 1000000 > workgroup = labs1 > ntlm auth = yes > > [share1] > path = /ibm/gpfs1/labs1 > guest ok = no > browseable = yes > comment = jas share > smb encrypt = disabled > > > [root at spectrum1 ~]# mmsmb export list > export path browseable guest ok smb encrypt > share1 /ibm/gpfs1/labs1 yes no disabled > > > > userauth command: > mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com > > > root at spectrum1 ~]# mmuserauth service list > FILE access configuration : LDAP > PARAMETERS VALUES > ------------------------------------------------- > ENABLE_SERVER_TLS true > ENABLE_KERBEROS true > USER_NAME cn=Directory Manager > SERVERS ssipa.example.com > NETBIOS_NAME spectrum1 > BASE_DN dc=example,dc=com > USER_DN none > GROUP_DN none > NETGROUP_DN none > USER_OBJECTCLASS posixAccount > GROUP_OBJECTCLASS 
posixGroup > USER_NAME_ATTRIB cn > USER_ID_ATTRIB uid > KERBEROS_SERVER ssipa.example.com > KERBEROS_REALM example.com > > OBJECT access not configured > PARAMETERS VALUES > ------------------------------------------------- > > net ads keytab list -> does not show any keys > > > LDAP user information was updated with Samba attributes according to the documentation: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm > > > [root at spectrum1 ~]# pdbedit -L -v > Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 > Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 > No builtin backend found, trying to load plugin > Module 'ldapsam' loaded > db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b > db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 > smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] > StartTLS issued: using a TLS connection > smbldap_open_connection: connection opened > ldap_connect_system: successful connection to the LDAP server > smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] > smbldap_search_paged: search was successful > init_sam_from_ldap: Entry found for user: jas > --------------- > Unix username: jas > NT username: jas > Account Flags: [U ] > User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 > Forcing Primary Group to 'Domain Users' for jas > Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 > Full Name: jas jas > Home Directory: \\spectrum1\jas > HomeDir Drive: > Logon Script: > Profile Path: \\spectrum1\jas\profile > Domain: SPECTRUM1 > Account desc: > Workstations: > Munged dial: > Logon time: 0 > Logoff time: never > Kickoff time: never > Password last set: Thu, 17 May 2018 14:08:01 EDT > Password can change: Thu, 17 May 2018 14:08:01 EDT > Password must change: never > Last bad password : 0 > Bad password count : 0 > Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF > > > > Client keytab file: > [root at test ~]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/test.example.com at example.com > 1 host/test.example.com at example.com > > > > ------------------------------ > > Message: 2 > Date: Fri, 18 May 2018 23:05:56 +0000 > From: "Christof Schmitt" > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP > kerberos authentication issue > Message-ID: > > > Content-Type: text/plain; charset="us-ascii" > > An HTML attachment was scrubbed... 
> URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From alvise.dorigo at psi.ch Wed May 23 08:42:59 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 23 May 2018 07:42:59 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> References: , , <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> Message-ID: <83A6EEB0EC738F459A39439733AE804522F15CDF@MBX114.d.ethz.ch> ops sorry! wrong window! please remove it... sorry. Alvise Dorigo ________________________________________ From: Dorigo Alvise (PSI) Sent: Wednesday, May 23, 2018 9:41 AM To: gpfsug main discussion list Subject: RE: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Hi Felix, yes please, configure jumbo frames for both ports. And yes, I'll check the cable (I used an old one, without any label 25G). thanks, A ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of hopii at interia.pl [hopii at interia.pl] Sent: Tuesday, May 22, 2018 9:43 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. 
> URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From johnbent at gmail.com Wed May 23 10:39:08 2018 From: johnbent at gmail.com (John Bent) Date: Wed, 23 May 2018 03:39:08 -0600 Subject: [gpfsug-discuss] IO500 Call for Submissions Message-ID: IO500 Call for Submissions Deadline: 23 June 2018 AoE The IO500 is now accepting and encouraging submissions for the upcoming IO500 list revealed at ISC 2018 in Frankfurt, Germany. The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit and we look forward to seeing many of you at ISC 2018! Please note that submissions of all size are welcome; the site has customizable sorting so it is possible to submit on a small system and still get a very good per-client score for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below. Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been known within High Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking. The multi-fold goals of the benchmark suite are as follows: * Maximizing simplicity in running the benchmark suite * Encouraging complexity in tuning for performance * Allowing submitters to highlight their ?hero run? performance numbers * Forcing submitters to simultaneously report performance for challenging IO patterns. Specifically, the benchmark suite includes a hero-run of both IOR and mdtest configured however possible to maximize performance and establish an upper-bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower-bound. Finally, it includes a namespace search as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well-measured. Submitters are encouraged to share their tuning insights for publication. The goals of the community are also multi-fold: * Gather historical data for the sake of analysis and to aid predictions of storage futures * Collect tuning information to share valuable performance optimizations across the community * Encourage vendors and designers to optimize for workloads beyond ?hero runs? * Establish bounded expectations for users, procurers, and administrators Once again, we encourage you to submit (see http://io500.org/submission), to join our community, and to attend our BoF ?The IO-500 and the Virtual Institute of I/O? at ISC 2018 where we will announce the second ever IO500 list. The current list includes results from BeeGPFS, DataWarp, IME, Lustre, and Spectrum Scale. 
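For a rough feel of what the suite runs under the hood, the hero phases come down to an IOR bandwidth pass and an mdtest metadata pass along these lines (an illustrative sketch only: the rank count, transfer and block sizes, and the /gpfs/io500 paths are placeholders, and the official io500.sh wrapper supplies the real parameters for a valid submission):

  # hero IOR: file-per-process sequential write then read, large transfers
  mpirun -np 64 ior -w -r -F -t 1m -b 16g -o /gpfs/io500/ior_easy/testfile

  # hero mdtest: create/stat/remove many items per rank
  mpirun -np 64 mdtest -n 10000 -u -d /gpfs/io500/mdt_easy

The prescribed runs then repeat both tools with fixed, deliberately awkward parameters (small unaligned writes to a shared file, and a shared directory for metadata), so a submission cannot rest on hero numbers alone.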
We hope that the next list has even more! We look forward to answering any questions or concerns you might have. Thank you! IO500 Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From alvise.dorigo at psi.ch Thu May 24 09:45:00 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 24 May 2018 08:45:00 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system Message-ID: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Dear members, at PSI I'm trying to integrate the CES service with our AD authentication system. My understanding, after talking to expert people here, is that I should use the RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The problem is that our ID schema is slightly different than that one described in RFC2307. In the RFC the relevant user identification fields are named "uidNumber" and "gidNumber". But in our AD database schema we have: # egrep 'uid_number|gid_number' /etc/sssd/sssd.conf ldap_user_uid_number = msSFU30UidNumber ldap_user_gid_number = msSFU30GidNumber ldap_group_gid_number = msSFU30GidNumber My question is: is it possible to configure CES to look for the custom field labels (those ones listed above) instead the default ones officially described in rfc2307 ? many thanks. Regards, Alvise Dorigo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ivano.Talamo at psi.ch Thu May 24 14:51:56 2018 From: Ivano.Talamo at psi.ch (Ivano Talamo) Date: Thu, 24 May 2018 15:51:56 +0200 Subject: [gpfsug-discuss] Inter-clusters issue with change of the subnet IP Message-ID: <432c8c12-4d36-d8a7-3c79-61b94aa409bf@psi.ch> Hi all, We currently have an issue with our GPFS clusters. Shortly when we removed/added a node to a cluster we changed IP address for the IPoIB subnet and this broke GPFS. The primary IP didn't change. In details our setup is quite standard: one GPFS cluster with CPU nodes only accessing (via remote cluster mount) different storage clusters. Clusters are on an Infiniband fabric plus IPoIB for communication via the subnet parameter. Yesterday it happened that some nodes were added to the CPU cluster with the correct primary IP addresses but incorrect IPoIB ones. Incorrect in the sense that the IPoIB addresses were already in use by some other nodes in the same CPU cluster. This made all the clusters (not only the CPU one) suffer for a lot of errors, gpfs restarting, file systems being unmounted. Removing the wrong nodes brought the clusters to a stable state. But the real strange thing came when one of these node was reinstalled, configured with the correct IPoIB address and added again to the cluster. At this point (when the node tried to mount the remote filesystems) the issue happened again. In the log files we have lines like: 2018-05-24_10:32:45.520+0200: [I] Accepted and connected to 192.168.x.y Where the IP number 192.168.x.y is the old/incorrect one. And looking at mmdiag --network there are a bunch of lines like the following: 192.168.x.z broken 233 -1 0 0 L With the wrong/old IPs. And this appears on all cluster (CPU and storage ones). Is it possible that the other nodes in the clusters use this outdated information when the reinstalled node is brought back into the cluster? Is there any kind of timeout, so that after sometimes this information is purged? Or is there any procedure that we could use to now introduce the nodes? 
Otherwise we see no other option but to restart GPFS on all the nodes of all clusters one by one to make sure that the incorrect information goes away. Thanks, Ivano From skylar2 at uw.edu Thu May 24 15:16:32 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Thu, 24 May 2018 14:16:32 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Message-ID: <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> I haven't needed to change the LDAP attributes that CES uses, but I do see --user-id-attrib in the mmuserauth documentation. Unfortunately, I don't see an equivalent one for gidNumber. On Thu, May 24, 2018 at 08:45:00AM +0000, Dorigo Alvise (PSI) wrote: > Dear members, > at PSI I'm trying to integrate the CES service with our AD authentication system. > > My understanding, after talking to expert people here, is that I should use the RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The problem is that our ID schema is slightly different than that one described in RFC2307. In the RFC the relevant user identification fields are named "uidNumber" and "gidNumber". But in our AD database schema we have: > > # egrep 'uid_number|gid_number' /etc/sssd/sssd.conf > ldap_user_uid_number = msSFU30UidNumber > ldap_user_gid_number = msSFU30GidNumber > ldap_group_gid_number = msSFU30GidNumber > > My question is: is it possible to configure CES to look for the custom field labels (those ones listed above) instead the default ones officially described in rfc2307 ? > > many thanks. > Regards, > > Alvise Dorigo > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From jonathan.buzzard at strath.ac.uk Thu May 24 15:46:32 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 24 May 2018 15:46:32 +0100 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> Message-ID: <1527173192.28106.18.camel@strath.ac.uk> On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > I haven't needed to change the LDAP attributes that CES uses, but I > do see --user-id-attrib in the mmuserauth documentation. > Unfortunately, I don't see an equivalent one for gidNumber. > Is it not doing the "Samba thing" where your GID is the GID of your primary Active Directory group? This is usually "Domain Users" but not always. Basically Samba ignores the separate GID field in RFC2307bis, so one imagines the options for changing the LDAP attributes are none existent. I know back in the day this had me stumped for a while because unless you assign a GID number to the users primary group then Winbind does not return anything, aka a "getent passwd" on the user fails. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From skylar2 at uw.edu Thu May 24 15:51:09 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Thu, 24 May 2018 14:51:09 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <1527173192.28106.18.camel@strath.ac.uk> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> <1527173192.28106.18.camel@strath.ac.uk> Message-ID: <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote: > On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > > I haven't needed to change the LDAP attributes that CES uses, but I > > do see --user-id-attrib in the mmuserauth documentation. > > Unfortunately, I don't see an equivalent one for gidNumber. > > > > Is it not doing the "Samba thing" where your GID is the GID of your > primary Active Directory group? This is usually "Domain Users" but not > always. > > Basically Samba ignores the separate GID field in RFC2307bis, so one > imagines the options for changing the LDAP attributes are none > existent. > > I know back in the day this had me stumped for a while because unless > you assign a GID number to the users primary group then Winbind does > not return anything, aka a "getent passwd" on the user fails. At least for us, it seems to be using the gidNumber attribute of our users. On the back-end, of course, it is Samba, but I don't know that there are mm* commands available for all of the tunables one can set in smb.conf. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From S.J.Thompson at bham.ac.uk Thu May 24 17:46:14 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Thu, 24 May 2018 16:46:14 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> <1527173192.28106.18.camel@strath.ac.uk>, <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> Message-ID: You can change them using the normal SMB commands, from the appropriate bin directory, whether this is supported is another matter. We have one parameter set this way but I forgot which. Simkn ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Skylar Thompson [skylar2 at uw.edu] Sent: 24 May 2018 15:51 To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Question concerning integration of CES with AD authentication system On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote: > On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > > I haven't needed to change the LDAP attributes that CES uses, but I > > do see --user-id-attrib in the mmuserauth documentation. > > Unfortunately, I don't see an equivalent one for gidNumber. > > > > Is it not doing the "Samba thing" where your GID is the GID of your > primary Active Directory group? This is usually "Domain Users" but not > always. > > Basically Samba ignores the separate GID field in RFC2307bis, so one > imagines the options for changing the LDAP attributes are none > existent. 
> > I know back in the day this had me stumped for a while because unless > you assign a GID number to the users primary group then Winbind does > not return anything, aka a "getent passwd" on the user fails. At least for us, it seems to be using the gidNumber attribute of our users. On the back-end, of course, it is Samba, but I don't know that there are mm* commands available for all of the tunables one can set in smb.conf. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From christof.schmitt at us.ibm.com Thu May 24 18:07:02 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 24 May 2018 17:07:02 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <1527173192.28106.18.camel@strath.ac.uk> References: <1527173192.28106.18.camel@strath.ac.uk>, <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch><20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> Message-ID: An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Thu May 24 18:14:28 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 24 May 2018 17:14:28 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Message-ID: An HTML attachment was scrubbed... URL: From scale at us.ibm.com Fri May 25 08:01:43 2018 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 25 May 2018 15:01:43 +0800 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: If you didn't run mmchconfig release=LATEST and didn't change the fs version, then you can downgrade either or both of them. Thanks. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 05/22/2018 11:54 PM Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. 
( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Fri May 25 13:24:31 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 25 May 2018 12:24:31 +0000 Subject: [gpfsug-discuss] IPv6 not supported still? Message-ID: Is the FAQ woefully outdated with respect to this when it says IPv6 is not supported for virtually any scenario (GUI, NFS, CES, TCT amongst others). Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knop at us.ibm.com Fri May 25 14:24:11 2018 From: knop at us.ibm.com (Felipe Knop) Date: Fri, 25 May 2018 09:24:11 -0400 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Message-ID: All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Fri May 25 15:29:16 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 25 May 2018 14:29:16 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Fri May 25 21:01:56 2018 From: knop at us.ibm.com (Felipe Knop) Date: Fri, 25 May 2018 16:01:56 -0400 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: Richard, As far as I could determine: Protocol servers for Scale can be at RHEL 7.4 today Protocol servers for Scale will be able to be at RHEL 7.5 once the mid-June PTFs are released On ESS, RHEL 7.3 is still the highest level, with support for higher RHEL 7.x levels still being implemented/validated Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Sobey, Richard A" To: gpfsug main discussion list Date: 05/25/2018 10:29 AM Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . 
The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Fri May 25 21:06:10 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Fri, 25 May 2018 20:06:10 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: , Message-ID: Hi Richard, Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed it, so when support comes along, do it with care! Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sobey, Richard A [r.sobey at imperial.ac.uk] Sent: 25 May 2018 15:29 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From jonathan.buzzard at strath.ac.uk Fri May 25 21:37:05 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 25 May 2018 21:37:05 +0100 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote: > Hi Richard, > > Ours run on 7.4 without issue. We had one upgrade to 7.5 packages > (didn't reboot into new kernel) and it broke, reverting it back to a > 7.4 release fixed it, so when support comes along, do it with care! > I will at this point chime in that DSS is on 7.4 at the moment, so I am not surprised ESS is just fine too. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From S.J.Thompson at bham.ac.uk Fri May 25 21:42:49 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Fri, 25 May 2018 20:42:49 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> References: , <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> Message-ID: I was talking about protocols. But yes, DSS is also supported and runs fine on 7.4. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jonathan Buzzard [jonathan.buzzard at strath.ac.uk] Sent: 25 May 2018 21:37 To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote: > Hi Richard, > > Ours run on 7.4 without issue. We had one upgrade to 7.5 packages > (didn't reboot into new kernel) and it broke, reverting it back to a > 7.4 release fixed it, so when support comes along, do it with care! > I will at this point chime in that DSS is on 7.4 at the moment, so I am not surprised ESS is just fine too. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From jonathan at buzzard.me.uk Fri May 25 22:08:54 2018 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 25 May 2018 22:08:54 +0100 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> Message-ID: <4d3aaaad-898d-d27d-04bc-729f01cef868@buzzard.me.uk> On 25/05/18 21:42, Simon Thompson (IT Research Support) wrote: > I was talking about protocols. > > But yes, DSS is also supported and runs fine on 7.4. Sure but I believe protocols will run fine on 7.4. On the downside DSS is still 4.2.x, grrrrrrrr as we have just implemented it double grrrr. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From r.sobey at imperial.ac.uk Sat May 26 08:32:05 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Sat, 26 May 2018 07:32:05 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: , , Message-ID: Thanks All! The faq still seems to imply that 7.3 is the latest supported release. Section A2.5 specifically. Other areas of the FAQ which I've now seen do indeed say 7.4. Have a great weekend. Get Outlook for Android ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Simon Thompson (IT Research Support) Sent: Friday, May 25, 2018 9:06:10 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Richard, Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed it, so when support comes along, do it with care! 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sobey, Richard A [r.sobey at imperial.ac.uk] Sent: 25 May 2018 15:29 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Mon May 28 08:59:03 2018 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Mon, 28 May 2018 09:59:03 +0200 Subject: [gpfsug-discuss] User Group Meeting at ISC2018 Frankfurt Message-ID: Greetings: IBM is happy to announce the agenda for the joint "IBM Spectrum Scale and IBM Spectrum LSF User Group Meeting" at ISC in Frankfurt, Germany. We will finish on time to attend the opening reception. As with other user group meetings, the agenda includes user stories, updates on IBM Spectrum Scale & IBM Spectrum LSF, and access to IBM experts and your peers. Please join us! To attend please register here so that we can have an accurate count of attendees: https://www-01.ibm.com/events/wwe/grp/grp308.nsf/Registration.xsp?openform&seminar=AA4A99ES We are still looking for two customers to talk about their experience with Spectrum Scale and/or Spectrum LSF. Please send me a personal mail, if you are interested to talk. Monday June 25th, 2018 - 14:00-17:30 - Conference Room Applaus 14:00-14:15 Welcome Gabor Samu (IBM) / Ulf Troppens (IBM) 14:15-14:45 What is new in Spectrum Scale? Mathias Dietz (IBM) 14:45-15:00 News from Lenovo Storage Michael Hennicke (Lenovo) 15:00-15:15 What is new in ESS? Christopher Maestas (IBM) 15:15-15:35 Customer talk 1 TBD 15:35-15:55 Customer talk 2 TBD 15:55-16:25 What is new in Spectrum Computing? Bill McMillan (IBM) 16:25-16:55 Field Update Olaf Weiser (IBM) 16:55-17:25 Spectrum Scale enhancements for CORAL Sven Oehme (IBM) 17:25-17:30 Wrap-up Gabor Samu (IBM) / Ulf Troppens (IBM) Looking forward to see some of you there. Best, Ulf -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janfrode at tanso.net Mon May 28 09:23:00 2018 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 28 May 2018 10:23:00 +0200 Subject: [gpfsug-discuss] mmapplypolicy --choice-algorithm fast Message-ID: Just found the Spectrum Scale policy "best practices" presentation from the latest UG: http://files.gpfsug.org/presentations/2018/USA/SpectrumScalePolicyBP.pdf which mentions: "mmapplypolicy ? --choice-algorithm fast && ... WEIGHT(0) ? (avoids final sort of all selected files by weight)" and looking at the man-page I see that "fast" "Works together with the parallelized ?g /shared?tmp ?N node?list selection method." I do a daily listing of all files, and avoiding unneccessary sorting would be great. So, what is really needed to avoid sorting for a file-list policy? Just "--choice-algorithm fast"? Also WEIGHT(0) in policy required? Also a ?g /shared?tmp ? -jf -------------- next part -------------- An HTML attachment was scrubbed... URL: From janusz.malka at desy.de Tue May 29 14:30:35 2018 From: janusz.malka at desy.de (Janusz Malka) Date: Tue, 29 May 2018 15:30:35 +0200 (CEST) Subject: [gpfsug-discuss] AFM relation on the fs level Message-ID: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> Dear all, Is it possible to build the AFM relation on the file system level ? I mean root file set of one file system as AFM cache and mount point of second as AFM home. Best regards, Janusz -- ------------------------------------------------------------------------- Janusz Tomasz Malka IT-Scientific Computing Deutsches Elektronen-Synchrotron Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 22607 Hamburg Germany phone: +49 40 8998 3818 e-mail: janusz.malka at desy.de ------------------------------------------------------------------------- From vpuvvada at in.ibm.com Wed May 30 04:23:28 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 30 May 2018 08:53:28 +0530 Subject: [gpfsug-discuss] AFM relation on the fs level In-Reply-To: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> References: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> Message-ID: AFM cannot be enabled at root fileset level today. ~Venkat (vpuvvada at in.ibm.com) From: Janusz Malka To: gpfsug main discussion list Date: 05/29/2018 07:06 PM Subject: [gpfsug-discuss] AFM relation on the fs level Sent by: gpfsug-discuss-bounces at spectrumscale.org Dear all, Is it possible to build the AFM relation on the file system level ? I mean root file set of one file system as AFM cache and mount point of second as AFM home. Best regards, Janusz -- ------------------------------------------------------------------------- Janusz Tomasz Malka IT-Scientific Computing Deutsches Elektronen-Synchrotron Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 22607 Hamburg Germany phone: +49 40 8998 3818 e-mail: janusz.malka at desy.de ------------------------------------------------------------------------- _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 12:52:33 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 11:52:33 +0000 Subject: [gpfsug-discuss] AFM negative file caching Message-ID: Hi All, We have a file-set which is an AFM fileset and contains installed software. 
We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. /gpfs/apps/somesoftware/v1.2/lib Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 12:57:27 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 11:57:27 +0000 Subject: [gpfsug-discuss] AFM negative file caching Message-ID: <2686836B-9BD3-4B9C-A5D9-7C3EF6E6D69B@bham.ac.uk> p.s. I wasn?t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval would be the right thing if it?s a file/directory that doesn?t exist? Simon From: on behalf of "Simon Thompson (IT Research Support)" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Wednesday, 30 May 2018 at 12:52 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] AFM negative file caching Hi All, We have a file-set which is an AFM fileset and contains installed software. We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. /gpfs/apps/somesoftware/v1.2/lib Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From peserocka at gmail.com Wed May 30 13:26:46 2018 From: peserocka at gmail.com (Peter Serocka) Date: Wed, 30 May 2018 14:26:46 +0200 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? (Not to get started on using LD_LIBRARY_PATH in the first place?) ? Peter > On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: > > Hi All, > > We have a file-set which is an AFM fileset and contains installed software. > > We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. > > /gpfs/apps/somesoftware/v1.2/lib > > Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. 
We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. > > Thanks > > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From david_johnson at brown.edu Wed May 30 13:43:33 2018 From: david_johnson at brown.edu (david_johnson at brown.edu) Date: Wed, 30 May 2018 08:43:33 -0400 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From vpuvvada at in.ibm.com Wed May 30 15:29:55 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 30 May 2018 19:59:55 +0530 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> References: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Message-ID: >I wasn?t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval would be the right thing if it?s a file/directory that doesn?t exist? These refresh intervals applies to all the lookups and not just for negative lookups. For working around in AFM itself, you could try setting these refresh intervals to higher value if cache does not need to validate with home often. 
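A minimal sketch of what that could look like for a software-only fileset (the file system name "gpfs0" and fileset name "apps" are placeholders, values are in seconds, and it is worth checking the mmchfileset documentation first since some AFM attributes can only be changed while the fileset is unlinked):

# relax revalidation to 10 minutes for one AFM fileset
mmchfileset gpfs0 apps -p afmDirLookupRefreshInterval=600
mmchfileset gpfs0 apps -p afmFileLookupRefreshInterval=600
# confirm the new values
mmlsfileset gpfs0 apps --afm -L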
~Venkat (vpuvvada at in.ibm.com) From: david_johnson at brown.edu To: gpfsug main discussion list Date: 05/30/2018 06:14 PM Subject: Re: [gpfsug-discuss] AFM negative file caching Sent by: gpfsug-discuss-bounces at spectrumscale.org Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 15:30:40 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 14:30:40 +0000 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: So we use easybuild to build software and dependency stacks (and modules to do all this), yeah I did wonder about putting it first, but my worry is that other "stuff" installed locally that dumps in there might then break the dependency stack. I was thinking maybe we can create something local with select symlinks and add that to the path ... but I was hoping we could do some sort of negative caching. Simon ?On 30/05/2018, 13:26, "gpfsug-discuss-bounces at spectrumscale.org on behalf of peserocka at gmail.com" wrote: As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? (Not to get started on using LD_LIBRARY_PATH in the first place?) ? Peter > On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: > > Hi All, > > We have a file-set which is an AFM fileset and contains installed software. > > We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. 
> > /gpfs/apps/somesoftware/v1.2/lib > > Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. > > Thanks > > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Sandra.McLaughlin at astrazeneca.com Wed May 30 16:03:32 2018 From: Sandra.McLaughlin at astrazeneca.com (McLaughlin, Sandra M) Date: Wed, 30 May 2018 15:03:32 +0000 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Message-ID: If it?s any help, Simon, I had a very similar problem, and I set afmDirLookupRefreshIntervaland afmFileLookupRefreshInterval to one day on an AFM cache fileset which only had software on it. It did make a difference to the users. And if you are really desperate to push an application upgrade to the cache fileset, there are other ways to do it. Sandra From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Venkateswara R Puvvada Sent: 30 May 2018 15:30 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM negative file caching >I wasn?t sure if afmDirLookupRefreshIntervaland afmFileLookupRefreshIntervalwould be the right thing if it?s a file/directory that doesn?t exist? These refresh intervals applies to all the lookups and not just for negative lookups. For working around in AFM itself, you could try setting these refresh intervals to higher value if cache does not need to validate with home often. ~Venkat (vpuvvada at in.ibm.com) From: david_johnson at brown.edu To: gpfsug main discussion list > Date: 05/30/2018 06:14 PM Subject: Re: [gpfsug-discuss] AFM negative file caching Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka > wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) > wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. 
We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ AstraZeneca UK Limited is a company incorporated in England and Wales with registered number:03674842 and its registered office at 1 Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge, CB2 0AA. This e-mail and its attachments are intended for the above named recipient only and may contain confidential and privileged information. If they have come to you in error, you must not copy or show them to anyone; instead, please reply to this e-mail, highlighting the error to the sender and then immediately delete the message. For information about how AstraZeneca UK Limited and its affiliates may process information, personal data and monitor communications, please see our privacy notice at www.astrazeneca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_johnson at brown.edu Thu May 31 19:21:42 2018 From: david_johnson at brown.edu (David Johnson) Date: Thu, 31 May 2018 14:21:42 -0400 Subject: [gpfsug-discuss] recommendations for gpfs 5.x GUI and perf/health monitoring collector nodes Message-ID: We are planning to bring up the new ZIMon tools on our 450+ node cluster, and need to purchase new nodes to run the collector federation and GUI function on. What would you choose as a platform for this? ? memory size? ? local disk space ? SSD? shared? ? net attach ? 10Gig? 25Gig? IB? ? CPU horse power ? single or dual socket? I think I remember somebody in Cambridge UG meeting saying 150 nodes per collector as a rule of thumb, so we?re guessing a federation of 4 nodes would do it. Does this include the GUI host(s) or are those separate? Finally, we?re still using client/server based licensing model, do these nodes count as clients? Thanks, ? ddj Dave Johnson Brown University From valleru at cbio.mskcc.org Tue May 1 15:34:39 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 1 May 2018 10:34:39 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> Message-ID: <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. 
> > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.smith at framestore.com Wed May 2 11:06:20 2018 From: peter.smith at framestore.com (Peter Smith) Date: Wed, 2 May 2018 11:06:20 +0100 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: "how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand)" +1. Pointers appreciated! :-) On 10 April 2018 at 17:22, Aaron Knister wrote: > I wonder if this is an artifact of pagepool exhaustion which makes me ask > the question-- how do I see how much of the pagepool is in use and by what? > I've looked at mmfsadm dump and mmdiag --memory and neither has provided me > the information I'm looking for (or at least not in a format I understand). > > -Aaron > > On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] > wrote: > >> I hate admitting this but I?ve found something that?s got me stumped. >> >> We have a user running an MPI job on the system. Each rank opens up >> several output files to which it writes ASCII debug information. The net >> result across several hundred ranks is an absolute smattering of teeny tiny >> I/o requests to te underlying disks which they don?t appreciate. >> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >> don?t understand is why these write requests aren?t getting batched up into >> larger write requests to the underlying disks. >> >> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >> requests before they hit the NSD. >> >> As best I can tell the application isn?t doing any fsync?s and isn?t >> doing direct io to these files. 
>> >> Can anyone explain why seemingly very similar io workloads appear to >> result in well formed NSD I/O in one case and awful I/o in another? >> >> Thanks! >> >> -Stumped >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> > -- > Aaron Knister > NASA Center for Climate Simulation (Code 606.2) > Goddard Space Flight Center > (301) 286-2776 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- [image: Framestore] Peter Smith ? Senior Systems Engineer London ? New York ? Los Angeles ? Chicago ? Montr?al T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 <+44%20%280%297816%20123009> 28 Chancery Lane, London WC2A 1LB Twitter ? Facebook ? framestore.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Wed May 2 13:09:21 2018 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Wed, 2 May 2018 14:09:21 +0200 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: mmfsadm dump pgalloc might get you one step further ... Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Thomas Wolter, Sven Schoo? Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: Peter Smith To: gpfsug main discussion list Date: 02/05/2018 12:10 Subject: Re: [gpfsug-discuss] Confusing I/O Behavior Sent by: gpfsug-discuss-bounces at spectrumscale.org "how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand)" +1. Pointers appreciated! :-) On 10 April 2018 at 17:22, Aaron Knister wrote: I wonder if this is an artifact of pagepool exhaustion which makes me ask the question-- how do I see how much of the pagepool is in use and by what? I've looked at mmfsadm dump and mmdiag --memory and neither has provided me the information I'm looking for (or at least not in a format I understand). -Aaron On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] wrote: I hate admitting this but I?ve found something that?s got me stumped. We have a user running an MPI job on the system. Each rank opens up several output files to which it writes ASCII debug information. The net result across several hundred ranks is an absolute smattering of teeny tiny I/o requests to te underlying disks which they don?t appreciate. Performance plummets. The I/o requests are 30 to 80 bytes in size. What I don?t understand is why these write requests aren?t getting batched up into larger write requests to the underlying disks. If I do something like ?df if=/dev/zero of=foo bs=8k? 
on a node I see that the nasty unaligned 8k io requests are batched up into nice 1M I/o requests before they hit the NSD. As best I can tell the application isn?t doing any fsync?s and isn?t doing direct io to these files. Can anyone explain why seemingly very similar io workloads appear to result in well formed NSD I/O in one case and awful I/o in another? Thanks! -Stumped _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Peter Smith ? Senior Systems Engineer London ? New York ? Los Angeles ? Chicago ? Montr?al T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 28 Chancery Lane, London WC2A 1LB Twitter ? Facebook ? framestore.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Wed May 2 13:25:42 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 2 May 2018 12:25:42 +0000 Subject: [gpfsug-discuss] AFM with clones Message-ID: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> Hi, We are looking at providing an AFM cache of a home which has a number of cloned files. From the docs: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm ? We can see that ?The mmclone command is not supported on AFM cache and AFM DR primary filesets. Clones created at home for AFM filesets are treated as separate files in the cache.? So it?s no surprise that when we pre-cache the files, they space consumed is different. What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the copy-on-write clone, or do we accidentally end up shipping the whole file back? (note we are using IW mode) Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Wed May 2 13:31:37 2018 From: oehmes at gmail.com (Sven Oehme) Date: Wed, 02 May 2018 12:31:37 +0000 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: GPFS doesn't do flush on close by default unless explicit asked by the application itself, but you can configure that . mmchconfig flushOnClose=yes if you use O_SYNC or O_DIRECT then each write ends up on the media before we return. sven On Wed, Apr 11, 2018 at 7:06 AM Peter Serocka wrote: > Let?s keep in mind that line buffering is a concept > within the standard C library; > if every log line triggers one write(2) system call, > and it?s not direct io, then multiple write still get > coalesced into few larger disk writes (as with the dd example). > > A logging application might choose to close(2) > a log file after each write(2) ? that produces > a different scenario, where the file system might > guarantee that the data has been written to disk > when close(2) return a success. > > (Local Linux file systems do not do this with default mounts, > but networked filesystems usually do.) 
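A rough way to check which of these cases applies is to count the syscalls one rank is actually making for a while (the PID below is a placeholder; Ctrl-C prints the summary and detaches):

strace -c -f -e trace=write,fsync,fdatasync,close -p 12345

Lots of tiny write calls with no fsync/fdatasync at all points at the I/O pattern itself rather than at explicit syncs.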
> > Aaron, can you trace your application to see > what is going on in terms of system calls? > > ? Peter > > > > On 2018 Apr 10 Tue, at 18:28, Marc A Kaplan wrote: > > > > Debug messages are typically unbuffered or "line buffered". If that is > truly causing a performance problem AND you still want to collect the > messages -- you'll need to find a better way to channel and collect those > messages. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Wed May 2 13:34:56 2018 From: oehmes at gmail.com (Sven Oehme) Date: Wed, 02 May 2018 12:34:56 +0000 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: a few more weeks and we have a better answer than dump pgalloc ;-) On Wed, May 2, 2018 at 6:07 AM Peter Smith wrote: > "how do I see how much of the pagepool is in use and by what? I've looked > at mmfsadm dump and mmdiag --memory and neither has provided me the > information I'm looking for (or at least not in a format I understand)" > > +1. Pointers appreciated! :-) > > On 10 April 2018 at 17:22, Aaron Knister wrote: > >> I wonder if this is an artifact of pagepool exhaustion which makes me ask >> the question-- how do I see how much of the pagepool is in use and by what? >> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me >> the information I'm looking for (or at least not in a format I understand). >> >> -Aaron >> >> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE >> CORP] wrote: >> >>> I hate admitting this but I?ve found something that?s got me stumped. >>> >>> We have a user running an MPI job on the system. Each rank opens up >>> several output files to which it writes ASCII debug information. The net >>> result across several hundred ranks is an absolute smattering of teeny tiny >>> I/o requests to te underlying disks which they don?t appreciate. >>> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >>> don?t understand is why these write requests aren?t getting batched up into >>> larger write requests to the underlying disks. >>> >>> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >>> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >>> requests before they hit the NSD. >>> >>> As best I can tell the application isn?t doing any fsync?s and isn?t >>> doing direct io to these files. >>> >>> Can anyone explain why seemingly very similar io workloads appear to >>> result in well formed NSD I/O in one case and awful I/o in another? >>> >>> Thanks! >>> >>> -Stumped >>> >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> >> -- >> Aaron Knister >> NASA Center for Climate Simulation (Code 606.2) >> Goddard Space Flight Center >> (301) 286-2776 >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > > > > -- > [image: Framestore] Peter Smith ? 
Senior Systems Engineer > London ? New York ? Los Angeles ? Chicago ? Montr?al > T +44 (0)20 7208 2600 ? M +44 (0)7816 123009 > <+44%20%280%297816%20123009> > 28 Chancery Lane, London WC2A 1LB > > Twitter ? Facebook > ? framestore.com > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alevin at gmail.com Wed May 2 17:10:48 2018 From: alevin at gmail.com (Alex Levin) Date: Wed, 2 May 2018 12:10:48 -0400 Subject: [gpfsug-discuss] Confusing I/O Behavior In-Reply-To: References: Message-ID: Aaron, Peter, I'm monitoring the pagepool usage as: buffers=`/usr/lpp/mmfs/bin/mmfsadm dump buffers | grep bufLen | awk '{ SUM += $7} END { print SUM }'` result in bytes If your pagepool is huge - the execution could take some time ( ~5 sec on 100Gb pagepool ) --Alex On Wed, May 2, 2018 at 6:06 AM, Peter Smith wrote: > "how do I see how much of the pagepool is in use and by what? I've looked > at mmfsadm dump and mmdiag --memory and neither has provided me the > information I'm looking for (or at least not in a format I understand)" > > +1. Pointers appreciated! :-) > > On 10 April 2018 at 17:22, Aaron Knister wrote: > >> I wonder if this is an artifact of pagepool exhaustion which makes me ask >> the question-- how do I see how much of the pagepool is in use and by what? >> I've looked at mmfsadm dump and mmdiag --memory and neither has provided me >> the information I'm looking for (or at least not in a format I understand). >> >> -Aaron >> >> On 4/10/18 12:00 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE >> CORP] wrote: >> >>> I hate admitting this but I?ve found something that?s got me stumped. >>> >>> We have a user running an MPI job on the system. Each rank opens up >>> several output files to which it writes ASCII debug information. The net >>> result across several hundred ranks is an absolute smattering of teeny tiny >>> I/o requests to te underlying disks which they don?t appreciate. >>> Performance plummets. The I/o requests are 30 to 80 bytes in size. What I >>> don?t understand is why these write requests aren?t getting batched up into >>> larger write requests to the underlying disks. >>> >>> If I do something like ?df if=/dev/zero of=foo bs=8k? on a node I see >>> that the nasty unaligned 8k io requests are batched up into nice 1M I/o >>> requests before they hit the NSD. >>> >>> As best I can tell the application isn?t doing any fsync?s and isn?t >>> doing direct io to these files. >>> >>> Can anyone explain why seemingly very similar io workloads appear to >>> result in well formed NSD I/O in one case and awful I/o in another? >>> >>> Thanks! >>> >>> -Stumped >>> >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >>> >>> >> -- >> Aaron Knister >> NASA Center for Climate Simulation (Code 606.2) >> Goddard Space Flight Center >> (301) 286-2776 >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > > > > -- > [image: Framestore] Peter Smith ? Senior Systems Engineer > London ? New York ? Los Angeles ? Chicago ? Montr?al > T +44 (0)20 7208 2600 ? 
M +44 (0)7816 123009 > <+44%20%280%297816%20123009> > 28 Chancery Lane, London WC2A 1LB > > Twitter ? Facebook > ? framestore.com > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vpuvvada at in.ibm.com Wed May 2 18:48:01 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 2 May 2018 23:18:01 +0530 Subject: [gpfsug-discuss] AFM with clones In-Reply-To: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> References: <05241944-0A1C-4BC7-90FC-C22BC05F9643@bham.ac.uk> Message-ID: >What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the >copy-on-write clone, or do we accidentally end up shipping the whole file back? IW mode revalidation detects that file is changed at home, all data blocks are cleared (punches the hole) and the next read pulls whole file from the home. ~Venkat (vpuvvada at in.ibm.com) From: "Simon Thompson (IT Research Support)" To: "gpfsug-discuss at spectrumscale.org" Date: 05/02/2018 05:55 PM Subject: [gpfsug-discuss] AFM with clones Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, We are looking at providing an AFM cache of a home which has a number of cloned files. From the docs: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1ins_afmandafmdrlimitations.htm ? We can see that ?The mmclone command is not supported on AFM cache and AFM DR primary filesets. Clones created at home for AFM filesets are treated as separate files in the cache.? So it?s no surprise that when we pre-cache the files, they space consumed is different. What I?m not clear on is what happens if we update a clone file at home? I know AFM is supposed to only transfer the exact bytes updated, does this work with clones? i.e. at home do we just get the bytes updated in the copy-on-write clone, or do we accidentally end up shipping the whole file back? (note we are using IW mode) Thanks Simon_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=92LOlNh2yLzrrGTDA7HnfF8LFr55zGxghLZtvZcZD7A&m=yLFsan-7rzFW2Nw9k8A-SHKQfNQonl9v_hk9hpXLYjQ&s=7w_-SsCLeUNBZoFD3zUF5ika7PTUIQkKuOhuz-5pr1I&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Thu May 3 10:43:31 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Thu, 3 May 2018 09:43:31 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used Message-ID: Hi all, I'd be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you've employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... 
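One starting point, assuming fileset quotas are enabled (the file system name "gpfs0" below is a placeholder), is the per-fileset block usage report:

# per-fileset disk usage
mmrepquota -j gpfs0

Note that HSM-stubbed files have had their disk blocks freed, so data already migrated to tape no longer shows up in that report - which is exactly where the charging question gets interesting.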
URL: From MDIETZ at de.ibm.com Thu May 3 12:41:28 2018 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Thu, 3 May 2018 13:41:28 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? 
Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Thu May 3 14:03:09 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Thu, 3 May 2018 09:03:09 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen > On May 3, 2018, at 5:43 AM, Sobey, Richard A wrote: > > Hi all, > > I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. > > On-list or off is fine with me. > > Thanks > Richard > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Thu May 3 15:25:03 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 3 May 2018 14:25:03 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: Hi Lohit, Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz Sent: Thursday, May 03, 2018 6:41 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). 
However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says "You can configure one storage cluster and up to five protocol clusters (current limit)." 
Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Thu May 3 15:37:11 2018 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Thu, 3 May 2018 16:37:11 +0200 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: Since I'm pretty proud of my awk one-liner, and maybe it's useful for this kind of charging, here's how to sum up how much data each user has in the filesystem (without regards to if the data blocks are offline, online, replicated or compressed): # cat full-file-list.policy RULE EXTERNAL LIST 'files' EXEC '' RULE LIST 'files' SHOW( VARCHAR(USER_ID) || ' ' || VARCHAR(GROUP_ID) || ' ' || VARCHAR(FILESET_NAME) || ' ' || VARCHAR(FILE_SIZE) || ' ' || VARCHAR(KB_ALLOCATED) ) # mmapplypolicy gpfs0 -P /gpfs/gpfsmgt/etc/full-file-list.policy -I defer -f /tmp/full-file-list # awk '{a[$4] += $7} END{ print "# UID\t Bytes" ; for (i in a) print i, "\t", a[i]}' /tmp/full-file-list.list.files Takes ~15 minutes to run on a 60 million file filesystem. -jf On Thu, May 3, 2018 at 11:43 AM, Sobey, Richard A wrote: > Hi all, > > > > I?d be interested to talk to anyone that is using HSM to move data to > tape, (and stubbing the file(s)) specifically any strategies you?ve > employed to figure out how to charge your customers (where you do charge > anyway) based on usage. > > > > On-list or off is fine with me. > > > > Thanks > > Richard > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 15:41:16 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 10:41:16 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? 
For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: > Hi Lohit, > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > Mit freundlichen Gr??en / Kind regards > > Mathias Dietz > > Spectrum Scale Development - Release Lead Architect (4.2.x) > Spectrum Scale RAS Architect > --------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49 70342744105 > Mobile: +49-15152801035 > E-Mail: mdietz at de.ibm.com > ----------------------------------------------------------------------------- > IBM Deutschland Research & Development GmbH > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > From: ? ? ? ?valleru at cbio.mskcc.org > To: ? ? ? ?gpfsug main discussion list > Date: ? ? ? ?01/05/2018 16:34 > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Simon. > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > Regards, > Lohit > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? 
> > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 15:46:09 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 10:46:09 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> Message-ID: <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Thanks Brian, May i know, if you could explain a bit more on the metadata updates issue? I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? Please do correct me if i am wrong. As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. Thanks, Lohit On May 3, 2018, 10:25 AM -0400, Bryan Banister , wrote: > Hi Lohit, > > Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. > > Cheers, > -Bryan > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz > Sent: Thursday, May 03, 2018 6:41 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Note: External Email > Hi Lohit, > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
> > > Mit freundlichen Gr??en / Kind regards > > Mathias Dietz > > Spectrum Scale Development - Release Lead Architect (4.2.x) > Spectrum Scale RAS Architect > --------------------------------------------------------------------------- > IBM Deutschland > Am Weiher 24 > 65451 Kelsterbach > Phone: +49 70342744105 > Mobile: +49-15152801035 > E-Mail: mdietz at de.ibm.com > ----------------------------------------------------------------------------- > IBM Deutschland Research & Development GmbH > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > From: ? ? ? ?valleru at cbio.mskcc.org > To: ? ? ? ?gpfsug main discussion list > Date: ? ? ? ?01/05/2018 16:34 > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > Thanks Simon. > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > Regards, > Lohit > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > You have been able to do this for some time, though I think it's only just supported. > > We've been exporting remote mounts since CES was added. > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > Sent: 30 April 2018 22:11 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Hello All, > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > Because according to the limitations as mentioned in the below link: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > Regards, > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Thu May 3 16:02:51 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Thu, 3 May 2018 15:02:51 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> Message-ID: Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From MDIETZ at de.ibm.com Thu May 3 16:14:20 2018 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Thu, 3 May 2018 17:14:20 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark><8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Message-ID: yes, deleting all NFS exports which point to a given file system would allow you to unmount it without bringing down the other file systems. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 03/05/2018 16:41 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. 
I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Thu May 3 16:15:24 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Thu, 3 May 2018 15:15:24 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Message-ID: Hi Lohit, Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April: http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf Multicluster setups with shared file access have a high probability of ?MetaNode Flapping? ? ?MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more ?client? clusters via a MultiCluster relationship.? 
Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Thursday, May 03, 2018 9:46 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Thanks Brian, May i know, if you could explain a bit more on the metadata updates issue? I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? Please do correct me if i am wrong. As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. Thanks, Lohit On May 3, 2018, 10:25 AM -0400, Bryan Banister >, wrote: Hi Lohit, Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz Sent: Thursday, May 03, 2018 6:41 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Note: External Email ________________________________ Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. 
Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From khanhn at us.ibm.com Thu May 3 16:29:57 2018 From: khanhn at us.ibm.com (Khanh V Ngo) Date: Thu, 3 May 2018 15:29:57 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Thu May 3 16:52:44 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 03 May 2018 16:52:44 +0100 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> Message-ID: <1525362764.27337.140.camel@strath.ac.uk> On Thu, 2018-05-03 at 15:02 +0000, Sobey, Richard A wrote: > Stephen, Bryan, > ? > Thanks for the input, it?s greatly appreciated. > ? > For us we?re trying ? as many people are ? to drive down the usage of > under-the-desk NAS appliances and USB HDDs. We offer space on disk, > but you can?t charge for 3TB of storage the same as you would down PC > World and many customers don?t understand the difference between what > we do, and what a USB disk offers. > ? > So, offering tape as a medium to store cold data, but not archive > data, is one offering we?re just getting round to discussing. The > solution is in place. To answer the specific question: for our > customers that adopt HSM, how much less should/could/can we charge > them per TB. We know how much a tape costs, but we don?t necessarily > have the means (or knowledge?) to say that for a given fileset, 80% > of the data is on tape. Then you get into 80% of 1TB is not the same > as 80% of 10TB. > ? The test that I have used in the past for if a file is migrated with a high degree of accuracy is if the space allocated on the file system is less than the file size, and equal to the stub size then presume the file is migrated. There is a small chance it could be sparse instead. However this is really rather remote as sparse files are not common in the first place and even less like that the amount of allocated data in the sparse file exactly matches the stub size. It is an easy step to write a policy to list all the UID and FILE_SIZE where KB_ALLOCATED References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> Message-ID: <6009EFF3-27EF-4E35-9FA1-1730C9ECF1A8@bham.ac.uk> Our charging model for disk storage assumes that a percentage of it is really HSM?d, though in practise we aren?t heavily doing this. My (personal) view on tape really is that anything on tape is FoC, that way people can play games to recall/keep it hot it if they want, but it eats their FoC or paid disk allocations, whereas if they leave it on tape, they benefit in having more total capacity. We currently use the pre-migrate/SOBAR for our DR piece, so we?d already be pre-migrating to tape anyway, so it doesn?t really cost us anything extra to give FoC HSM?d storage. So my suggestion is pitch HSM (or even TCT maybe ? if only we could do both) as your DR proposal, and then you can give it to users for free ? 
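(Coming back to Richard's "how much of a given fileset is actually on tape" question and the allocation test Jonathan describes above: a minimal policy sketch along those lines is below, assuming the comparison he has in mind is allocated space smaller than the file size, i.e. stub-sized allocation means migrated. The gpfs0 device and /tmp paths are made up, and sparse files will be miscounted exactly as Jonathan warns.)

cat > /tmp/mig_by_fileset.pol <<'EOF'
/* empty EXEC, so with -I defer mmapplypolicy just writes the list file */
RULE 'ext' EXTERNAL LIST 'mig' EXEC ''
/* allocated space smaller than the file size: treat as an HSM stub */
RULE 'listmig' LIST 'mig'
  SHOW(FILESET_NAME || ' ' || varchar(FILE_SIZE) || ' ' || varchar(KB_ALLOCATED))
  WHERE KB_ALLOCATED * 1024 < FILE_SIZE
EOF

mmapplypolicy gpfs0 -P /tmp/mig_by_fileset.pol -f /tmp/mig -I defer

# Sum the FILE_SIZE column per fileset; the SHOW() fields follow the
# inode/gen/snapid columns, so check one output line and adjust $4/$5.
awk '{gb[$4] += $5/2^30} END {for (f in gb) printf "%-20s %10.1f GB on tape\n", f, gb[f]}' /tmp/mig.list.mig

Set that against the fileset's total usage (e.g. from mmrepquota -j) and you have a per-fileset "percentage on tape" figure to feed whatever discount you settle on.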
Simon From: on behalf of "Sobey, Richard A" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Thursday, 3 May 2018 at 16:03 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Recharging where HSM is used Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Thu May 3 18:30:32 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Thu, 3 May 2018 17:30:32 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> Message-ID: <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> Yes we do this when we really really need to take a remote FS offline, which we try at all costs to avoid unless we have a maintenance window. 
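(For reference, the sequence being described is roughly the one below. Export path, client spec and the remotefs1 device are invented for the example, and the exact --client syntax for re-adding should be checked against the mmnfs documentation for your release.)

# 1. Record the current export definitions so they can be recreated later.
/usr/lpp/mmfs/bin/mmnfs export list

# 2. Remove the NFS exports that live on the file system being taken down.
/usr/lpp/mmfs/bin/mmnfs export remove /gpfs/remotefs1/projects

# 3. With no exports left on it, the remote file system can be unmounted
#    cleanly across the protocol cluster.
/usr/lpp/mmfs/bin/mmunmount remotefs1 -a

# 4. Once the storage cluster is back, remount and recreate the export.
/usr/lpp/mmfs/bin/mmmount remotefs1 -a
/usr/lpp/mmfs/bin/mmnfs export add /gpfs/remotefs1/projects --client "10.10.0.0/16(Access_Type=RW,Squash=no_root_squash)"

Exports on the other remote file systems stay up throughout, which is the point of doing it per file system rather than riding out an unplanned failure.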
Note if you only export via SMB, then you don?t have the same effect (unless something has changed recently) Simon From: on behalf of "valleru at cbio.mskcc.org" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Thursday, 3 May 2018 at 15:41 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. 
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 19:46:42 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 14:46:42 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <51708297-faf6-41e1-90d3-4a2828863f9f@Spark> Message-ID: <1f7af581-300d-4526-8c9c-7bde344fbf22@Spark> Thanks Bryan. Yes i do understand it now, with respect to multi clusters reading the same file and metanode flapping. Will make sure the workload design will prevent metanode flapping. Regards, Lohit On May 3, 2018, 11:15 AM -0400, Bryan Banister , wrote: > Hi Lohit, > > Please see slides 13 and 14 in the presentation that DDN gave at the GPFS UG in the UK this April:? http://files.gpfsug.org/presentations/2018/London/2-5_GPFSUG_London_2018_VCC_DDN_Overheads.pdf > > Multicluster setups with shared file access have a high probability of ?MetaNode Flapping? > ? ?MetaNode role transfer occurs when the same files from a filesystem are accessed from two or more ?client? clusters via a MultiCluster relationship.? > > Cheers, > -Bryan > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > Sent: Thursday, May 03, 2018 9:46 AM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Note: External Email > Thanks Brian, > May i know, if you could explain a bit more on the metadata updates issue? > I am not sure i exactly understand on why the metadata updates would fail between filesystems/between clusters - since every remote cluster will have its own metadata pool/servers. > I suppose the metadata updates for respective remote filesystems should go to respective remote clusters/metadata servers and should not depend on metadata servers of other remote clusters? > Please do correct me if i am wrong. > As of now, our workload is to use NFS/SMB to read files and update files from different remote servers. It is not for running heavy parallel read/write workloads across different servers. 
> > Thanks, > Lohit > > On May 3, 2018, 10:25 AM -0400, Bryan Banister , wrote: > > > Hi Lohit, > > > > Just another thought, you also have to consider that metadata updates will have to fail between nodes in the CES cluster with those in other clusters because nodes in separate remote clusters do not communicate directly for metadata updates, which depends on your workload is that would be an issue. > > > > Cheers, > > -Bryan > > > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Mathias Dietz > > Sent: Thursday, May 03, 2018 6:41 AM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Note: External Email > > Hi Lohit, > > > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. > > > > > > Mit freundlichen Gr??en / Kind regards > > > > Mathias Dietz > > > > Spectrum Scale Development - Release Lead Architect (4.2.x) > > Spectrum Scale RAS Architect > > --------------------------------------------------------------------------- > > IBM Deutschland > > Am Weiher 24 > > 65451 Kelsterbach > > Phone: +49 70342744105 > > Mobile: +49-15152801035 > > E-Mail: mdietz at de.ibm.com > > ----------------------------------------------------------------------------- > > IBM Deutschland Research & Development GmbH > > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > > > > > From: ? ? ? ?valleru at cbio.mskcc.org > > To: ? ? ? ?gpfsug main discussion list > > Date: ? ? ? ?01/05/2018 16:34 > > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > Thanks Simon. > > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > > > Regards, > > Lohit > > > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > > You have been able to do this for some time, though I think it's only just supported. > > > > We've been exporting remote mounts since CES was added. > > > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... 
> > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > > Sent: 30 April 2018 22:11 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Hello All, > > > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > > > Because according to the limitations as mentioned in the below link: > > > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? > > > > > > Regards, > > Lohit > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 19:52:23 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 14:52:23 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts In-Reply-To: <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> References: <1516de0f-ba2a-40e7-9aa4-d7ea7bae3edf@Spark> <8069478f-af63-44bc-bb8c-59ae379bda26@Spark> <612df737-85d0-4a3f-85e5-10149acce2d6@Spark> <222D5882-1C2C-48CA-BEF3-478A9D66A0F3@bham.ac.uk> Message-ID: <44e9d877-36b9-43c1-8ee8-ac8437987265@Spark> Thanks Simon. Currently, we are thinking of using the same remote filesystem for both NFS/SMB exports. I do have a related question with respect to SMB and AD integration on user-defined authentication. I have seen a past discussion from you on the usergroup regarding a similar integration, but i am trying a different setup. Will send an email with the related subject. Thanks, Lohit On May 3, 2018, 1:30 PM -0400, Simon Thompson (IT Research Support) , wrote: > Yes we do this when we really really need to take a remote FS offline, which we try at all costs to avoid unless we have a maintenance window. > > Note if you only export via SMB, then you don?t have the same effect (unless something has changed recently) > > Simon > > From: on behalf of "valleru at cbio.mskcc.org" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Thursday, 3 May 2018 at 15:41 > To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Thanks Mathiaz, > Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. > > However, i suppose we could bring down one of the filesystems before a planned downtime? > For example, by unexporting the filesystems on NFS/SMB before the downtime? > > I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. > > Regards, > Lohit > > On May 3, 2018, 7:41 AM -0400, Mathias Dietz , wrote: > > > Hi Lohit, > > > > >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. > > > > > > >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. > > Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. > > e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
> > > > > > Mit freundlichen Gr??en / Kind regards > > > > Mathias Dietz > > > > Spectrum Scale Development - Release Lead Architect (4.2.x) > > Spectrum Scale RAS Architect > > --------------------------------------------------------------------------- > > IBM Deutschland > > Am Weiher 24 > > 65451 Kelsterbach > > Phone: +49 70342744105 > > Mobile: +49-15152801035 > > E-Mail: mdietz at de.ibm.com > > ----------------------------------------------------------------------------- > > IBM Deutschland Research & Development GmbH > > Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 > > > > > > > > From: ? ? ? ?valleru at cbio.mskcc.org > > To: ? ? ? ?gpfsug main discussion list > > Date: ? ? ? ?01/05/2018 16:34 > > Subject: ? ? ? ?Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > Sent by: ? ? ? ?gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > Thanks Simon. > > I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. > > > > Regards, > > Lohit > > > > On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) , wrote: > > You have been able to do this for some time, though I think it's only just supported. > > > > We've been exporting remote mounts since CES was added. > > > > At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. > > > > One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... > > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] > > Sent: 30 April 2018 22:11 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts > > > > Hello All, > > > > I read from the below link, that it is now possible to export remote mounts over NFS/SMB. > > > > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm > > > > I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. > > May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? > > > > Because according to the limitations as mentioned in the below link: > > > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm > > > > It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? 
> > > > > > Regards, > > Lohit > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss_______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From JRLang at uwyo.edu Thu May 3 16:38:32 2018 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Thu, 3 May 2018 15:38:32 +0000 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: References: Message-ID: Khanh Could you tell us what the policy file name is or where to get it? Thanks Jeff From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Khanh V Ngo Sent: Thursday, May 3, 2018 10:30 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Recharging where HSM is used Specifically with IBM Spectrum Archive EE, there is a script (mmapplypolicy with list rules and python since it outputs many different tables) to provide the total size of user files by file states. This way you can charge more for files that remain on disk and charge less for files migrated to tape. I have seen various prices for the chargeback so it's probably better to calculate based on your environment. The script can easily be changed to output based on GID, filesets, etc. Here's a snippet of the output (in human-readable units): +-------+-----------+-------------+-------------+-----------+ | User | Migrated | Premigrated | Resident | TOTAL | +-------+-----------+-------------+-------------+-----------+ | 0 | 1.563 KB | 50.240 GB | 6.000 bytes | 50.240 GB | | 27338 | 9.338 TB | 1.566 TB | 63.555 GB | 10.965 TB | | 27887 | 58.341 GB | 191.653 KB | | 58.341 GB | | 27922 | 2.111 MB | | | 2.111 MB | | 24089 | 4.657 TB | 22.921 TB | 433.660 GB | 28.002 TB | | 29657 | 29.219 TB | 32.049 TB | | 61.268 TB | | 29210 | 3.057 PB | 399.908 TB | 47.448 TB | 3.494 PB | | 23326 | 7.793 GB | 257.005 MB | 166.364 MB | 8.207 GB | | TOTAL | 3.099 PB | 456.492 TB | 47.933 TB | 3.592 PB | +-------+-----------+-------------+-------------+-----------+ Thanks, Khanh Khanh Ngo, Tape Storage Test Architect Senior Technical Staff Member and Master Inventor Tie-Line 8-321-4802 External Phone: (520)799-4802 9042/1/1467 Tucson, AZ khanhn at us.ibm.com (internet) It's okay to not understand something. It's NOT okay to test something you do NOT understand. 
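Since the obvious follow-up is where to get such a script: the overall shape is small enough to sketch, even if this is not the Spectrum Archive EE tool being referred to. Everything below is a generic stand-in built from mmapplypolicy list rules plus awk; the gpfs0 device and /tmp paths are invented, and the assumption that migrated/premigrated files carry the 'V'/'M' MISC_ATTRIBUTES flags holds for LTFS/Spectrum Archive but should be verified for other HSM backends.

cat > /tmp/by_state.pol <<'EOF'
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))
RULE 'x1' EXTERNAL LIST 'mig' EXEC ''
RULE 'x2' EXTERNAL LIST 'pre' EXEC ''
RULE 'x3' EXTERNAL LIST 'res' EXEC ''
RULE 'm' LIST 'mig' SHOW(varchar(USER_ID) || ' ' || varchar(FILE_SIZE)) WHERE is_migrated
RULE 'p' LIST 'pre' SHOW(varchar(USER_ID) || ' ' || varchar(FILE_SIZE)) WHERE is_premigrated
RULE 'r' LIST 'res' SHOW(varchar(USER_ID) || ' ' || varchar(FILE_SIZE)) WHERE is_resident
EOF

mmapplypolicy gpfs0 -P /tmp/by_state.pol -f /tmp/st -I defer

# Per-user totals for each state; the SHOW() fields land after the
# inode/gen/snapid columns, so eyeball one line and adjust $4/$5 if needed.
for state in mig pre res; do
  echo "== $state =="
  awk '{gb[$4] += $5/2^30} END {for (u in gb) printf "%-10s %12.2f GB\n", u, gb[u]}' /tmp/st.list.$state | sort -k2 -rn
done

Once the split is known, the chargeback itself is just a blended rate, something like resident_TB * disk_rate + (premigrated_TB + migrated_TB) * tape_rate, with premigrated counted wherever local policy says it should sit.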
----- Original message ----- From: gpfsug-discuss-request at spectrumscale.org Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: gpfsug-discuss Digest, Vol 76, Issue 7 Date: Thu, May 3, 2018 8:19 AM Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Recharging where HSM is used (Sobey, Richard A) 2. Re: Spectrum Scale CES and remote file system mounts (Mathias Dietz) ---------------------------------------------------------------------- Message: 1 Date: Thu, 3 May 2018 15:02:51 +0000 From: "Sobey, Richard A" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Recharging where HSM is used Message-ID: > Content-Type: text/plain; charset="utf-8" Stephen, Bryan, Thanks for the input, it?s greatly appreciated. For us we?re trying ? as many people are ? to drive down the usage of under-the-desk NAS appliances and USB HDDs. We offer space on disk, but you can?t charge for 3TB of storage the same as you would down PC World and many customers don?t understand the difference between what we do, and what a USB disk offers. So, offering tape as a medium to store cold data, but not archive data, is one offering we?re just getting round to discussing. The solution is in place. To answer the specific question: for our customers that adopt HSM, how much less should/could/can we charge them per TB. We know how much a tape costs, but we don?t necessarily have the means (or knowledge?) to say that for a given fileset, 80% of the data is on tape. Then you get into 80% of 1TB is not the same as 80% of 10TB. Richard From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Stephen Ulmer Sent: 03 May 2018 14:03 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Recharging where HSM is used I work for a partner, but I occasionally have to help customers work on cost justification that includes charge-back (or I encourage them to do show-back to alleviate some political concerns). I?d also like to see what people are doing around this. If I may ask a question, what is the goal for your site? Are you trying to figure out how to charge for the tape space, or to NOT charge the migrated files as resident? Would you (need to) charge for pre-migrated files twice? Are you trying to figure out how to have users pay for recalls? Basically, what costs are you trying to cover? I realize that was not ?a? question? :) Also, do you specifically mean TSM HSM, or do you mean GPFS policies and an external storage pool? -- Stephen On May 3, 2018, at 5:43 AM, Sobey, Richard A > wrote: Hi all, I?d be interested to talk to anyone that is using HSM to move data to tape, (and stubbing the file(s)) specifically any strategies you?ve employed to figure out how to charge your customers (where you do charge anyway) based on usage. On-list or off is fine with me. 
Thanks Richard _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Thu, 3 May 2018 17:14:20 +0200 From: "Mathias Dietz" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Message-ID: > Content-Type: text/plain; charset="iso-8859-1" yes, deleting all NFS exports which point to a given file system would allow you to unmount it without bringing down the other file systems. Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 03/05/2018 16:41 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Mathiaz, Yes i do understand the concern, that if one of the remote file systems go down abruptly - the others will go down too. However, i suppose we could bring down one of the filesystems before a planned downtime? For example, by unexporting the filesystems on NFS/SMB before the downtime? I might not want to be in a situation, where i have to bring down all the remote filesystems because of planned downtime of one of the remote clusters. Regards, Lohit On May 3, 2018, 7:41 AM -0400, Mathias Dietz >, wrote: Hi Lohit, >I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. Technically this should work fine (assuming all 3 clusters use the same uids/guids). However this has not been tested in our Test lab. >One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. Not only the ces root file system is a concern, the whole CES cluster will go down if any remote file systems with NFS exports is not available. e.g. if remote cluster 1 is not available, the CES cluster will unmount the corresponding file system which will lead to a NFS failure on all CES nodes. 
Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: valleru at cbio.mskcc.org To: gpfsug main discussion list > Date: 01/05/2018 16:34 Subject: Re: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks Simon. I will make sure i am careful about the CES root and test nfs exporting more than 2 remote file systems. Regards, Lohit On Apr 30, 2018, 5:57 PM -0400, Simon Thompson (IT Research Support) >, wrote: You have been able to do this for some time, though I think it's only just supported. We've been exporting remote mounts since CES was added. At some point we've had two storage clusters supplying data and at least 3 remote file-systems exported over NFS and SMB. One thing to watch, be careful if your CES root is on a remote fs, as if that goes away, so do all CES exports. We do have CES root on a remote fs and it works, just be aware... Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of valleru at cbio.mskcc.org [valleru at cbio.mskcc.org] Sent: 30 April 2018 22:11 To: gpfsug main discussion list Subject: [gpfsug-discuss] Spectrum Scale CES and remote file system mounts Hello All, I read from the below link, that it is now possible to export remote mounts over NFS/SMB. https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_protocoloverremoteclu.htm I am thinking of using a single CES protocol cluster, with remote mounts from 3 storage clusters. May i know, if i will be able to export the 3 remote mounts(from 3 storage clusters) over NFS/SMB from a single CES protocol cluster? Because according to the limitations as mentioned in the below link: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_limitationofprotocolonRMT.htm It says ?You can configure one storage cluster and up to five protocol clusters (current limit).? 
Regards, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=KV_9f1Z5cuLb_ISUqPwkvwc38LRXRLXwKd7w3_A1HS8&m=lrEfoX8I-iJKZPqhVJc0DTlof2GckqjbexR-HavAMOY&s=kOxgdZpL-0VRJL_Vcpst3jnumGGDVZ4mZ9JL7nW0_eA&e= End of gpfsug-discuss Digest, Vol 76, Issue 7 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Thu May 3 20:14:57 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Thu, 3 May 2018 15:14:57 -0400 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA and AD keytab integration with userdefined authentication Message-ID: <03e2a5c6-3538-4e20-84b8-563b0aedfbe6@Spark> Hello All, I am trying to export a single remote filesystem over NFS/SMB using GPFS CES. ( GPFS 5.0.0.2 and CentOS 7 ). We need NFS exports to be accessible on client nodes, that use public key authentication and ldap authorization. I already have this working with a previous CES setup on user-defined authentication, where users can just login to the client nodes, and access NFS mounts. However, i will also need SAMBA exports for the same GPFS filesystem with AD/kerberos authentication. Previously, we used to have a working SAMBA export for a local filesystem with SSSD and AD integration with SAMBA as mentioned in the below solution from redhat. https://access.redhat.com/solutions/2221561 We find the above as cleaner solution with respect to AD and Samba integration compared to centrify or winbind. I understand that GPFS does offer AD authentication, however i believe i cannot use the same since NFS will need user-defined authentication and SAMBA will need AD authentication. 
I have thus been trying to use user-defined authentication. I tried to edit smb.conf from GPFS (with a bit of help from this blog, written by Simon: https://www.roamingzebra.co.uk/2015/07/smb-protocol-support-with-spectrum.html)

/usr/lpp/mmfs/bin/net conf list
        realm = xxxx
        workgroup = xxxx
        security = ads
        kerberos method = secrets and keytab
        idmap config * : backend = tdb
        template homedir = /home/%U
        dedicated keytab file = /etc/krb5.keytab

I had joined the node to AD with realmd and I do get the relevant AD info when I try:

/usr/lpp/mmfs/bin/net ads info

However, when I try to display the keytab or add principals to it, it just does not work.

/usr/lpp/mmfs/bin/net ads keytab list -> does not show the keys present in /etc/krb5.keytab.
/usr/lpp/mmfs/bin/net ads keytab add cifs -> does not add the keys to /etc/krb5.keytab.

As per the samba documentation, these two parameters should let samba find the keytab file automatically:

kerberos method = secrets and keytab
dedicated keytab file = /etc/krb5.keytab

I have not yet tried to see if a SAMBA export works with AD authentication, but I am afraid it might not.

Has anyone tried AD integration with SSSD/SAMBA for GPFS? Any suggestions on how to debug the above would be really helpful.

Thanks,
Lohit

From valdis.kletnieks at vt.edu Thu May 3 20:16:03 2018
From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu)
Date: Thu, 03 May 2018 15:16:03 -0400
Subject: [gpfsug-discuss] Recharging where HSM is used
In-Reply-To: <1525362764.27337.140.camel@strath.ac.uk>
References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org> <1525362764.27337.140.camel@strath.ac.uk>
Message-ID: <75615.1525374963@turing-police.cc.vt.edu>

On Thu, 03 May 2018 16:52:44 +0100, Jonathan Buzzard said:
> The test that I have used in the past for if a file is migrated with a
> high degree of accuracy is
>
> if the space allocated on the file system is less than the
> file size, and equal to the stub size then presume the file
> is migrated.

At least for LTFS/EE, we use something like this:

define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%'))
define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%'))
define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%'))

RULE 'MIGRATED' LIST 'ltfsee_files'
  FROM POOL 'system'
  SHOW('migrated ' || xattr('dmapi.IBMTPS') || ' ' || all_attrs)
  WHERE is_migrated AND (xattr('dmapi.IBMTPS') LIKE '%:%' )

Not sure if the V and M misc_attributes are the same for other tape backends...

From Robert.Oesterlin at nuance.com Thu May 3 21:13:14 2018
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Thu, 3 May 2018 20:13:14 +0000
Subject: [gpfsug-discuss] FYI - SC18 - Hotels are now open for reservations!
Message-ID: <1CE10F03-B49C-44DF-A772-B674D059457F@nuance.com>

FYI, Hotels for SC18 are now open, and if it's like any other year, they fill up FAST. Reserve one early since it's no charge to hold it until 1 month before the conference.

https://sc18.supercomputing.org/experience/housing/

Bob Oesterlin
Sr Principal Storage Engineer, Nuance
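A list rule like the LTFS/EE fragment above is normally driven by mmapplypolicy in list-only mode; a rough sketch, where the file system name, policy file and output prefix are purely illustrative:

# /tmp/migrated.pol holds the define()s plus the RULE 'MIGRATED' LIST rule shown above
/usr/lpp/mmfs/bin/mmapplypolicy /gpfs/fs0 -P /tmp/migrated.pol -I defer -f /tmp/migrated
# -I defer builds the candidate lists without acting on them; the matching files
# should land in /tmp/migrated.list.ltfsee_files for accounting scripts to consume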
URL: From zacekm at img.cas.cz Fri May 4 06:53:23 2018 From: zacekm at img.cas.cz (Michal Zacek) Date: Fri, 4 May 2018 07:53:23 +0200 Subject: [gpfsug-discuss] Temporary office files Message-ID: Hello, I have problem with "~$somename.xlsx" files in Samba shares at GPFS Samba cluster. These lock files are supposed to be removed by Samba with "delete on close" function. This function is working? at standard Samba server in Centos but not with Samba cluster at GPFS. Is this function disabled on purpose or is ti an error? I'm not sure if this problem was in older versions, but now with version 5.0.0.0 it's easy to reproduce. Just open and close any excel file, and "~$xxxx.xlsx" file will remain at share. You have to uncheck "hide protected operating system files" on Windows to see them. Any help would be appreciated. Regards, Michal -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3776 bytes Desc: Elektronicky podpis S/MIME URL: From r.sobey at imperial.ac.uk Fri May 4 09:10:33 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 4 May 2018 08:10:33 +0000 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: Hi Michal, We occasionally get a request to close a lock file for an Office document but I wouldn't necessarily say we could easily reproduce it. We're still running 4.2.3.7 though so YMMV. I'm building out my test cluster at the moment to do some experiments and as soon as 5.0.1 is released I'll be upgrading it to check it out. Thanks Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Michal Zacek Sent: 04 May 2018 06:53 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] Temporary office files Hello, I have problem with "~$somename.xlsx" files in Samba shares at GPFS Samba cluster. These lock files are supposed to be removed by Samba with "delete on close" function. This function is working? at standard Samba server in Centos but not with Samba cluster at GPFS. Is this function disabled on purpose or is ti an error? I'm not sure if this problem was in older versions, but now with version 5.0.0.0 it's easy to reproduce. Just open and close any excel file, and "~$xxxx.xlsx" file will remain at share. You have to uncheck "hide protected operating system files" on Windows to see them. Any help would be appreciated. Regards, Michal From Achim.Rehor at de.ibm.com Fri May 4 09:17:52 2018 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Fri, 4 May 2018 10:17:52 +0200 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 7182 bytes Desc: not available URL: From zacekm at img.cas.cz Fri May 4 10:40:50 2018 From: zacekm at img.cas.cz (Michal Zacek) Date: Fri, 4 May 2018 11:40:50 +0200 Subject: [gpfsug-discuss] Temporary office files In-Reply-To: References: Message-ID: Hi Achim Set "gpfs:sharemodes=no" did the trick and I will upgrade to 5.0.0.2 next week. Thank you very much. Regards, Michal Dne 4.5.2018 v 10:17 Achim Rehor napsal(a): > Hi Michal, > > there was an open defect on this, which had been fixed in level > 4.2.3.7 (APAR _IJ03182 _ > ) > gpfs.smb 4.5.15_gpfs_31-1 > should be in gpfs.smb 4.6.11_gpfs_31-1 ?package for the 5.0.0 PTF1 level. 
> > > > > Mit freundlichen Gr??en / Kind regards > > *Achim Rehor* > > ------------------------------------------------------------------------ > Software Technical Support Specialist AIX/ Emea HPC Support > IBM Certified Advanced Technical Expert - Power Systems with AIX > TSCC Software Service, Dept. 7922 > Global Technology Services > ------------------------------------------------------------------------ > Phone: +49-7034-274-7862 ?IBM Deutschland > E-Mail: Achim.Rehor at de.ibm.com ?Am Weiher 24 > ?65451 Kelsterbach > ?Germany > > ------------------------------------------------------------------------ > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, > Stefan Lutz, Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht > Stuttgart, HRB 14562 WEEE-Reg.-Nr. DE 99369940 > > > > > > > From: Michal Zacek > To: gpfsug-discuss at spectrumscale.org > Date: 04/05/2018 08:03 > Subject: [gpfsug-discuss] Temporary office files > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > ------------------------------------------------------------------------ > > > > Hello, > > I have problem with "~$somename.xlsx" files in Samba shares at GPFS > Samba cluster. These lock files are supposed to be removed by Samba with > "delete on close" function. This function is working? at standard Samba > server in Centos but not with Samba cluster at GPFS. Is this function > disabled on purpose or is ti an error? I'm not sure if this problem was > in older versions, but now with version 5.0.0.0 it's easy to reproduce. > Just open and close any excel file, and "~$xxxx.xlsx" file will remain > at share. You have to uncheck "hide protected operating system files" on > Windows to see them. > Any help would be appreciated. > > Regards, > Michal > > [attachment "smime.p7s" deleted by Achim Rehor/Germany/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nfhdombajgidkknc.png Type: image/png Size: 7182 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3776 bytes Desc: Elektronicky podpis S/MIME URL: From makaplan at us.ibm.com Fri May 4 15:03:37 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 4 May 2018 10:03:37 -0400 Subject: [gpfsug-discuss] Recharging where HSM is used In-Reply-To: <75615.1525374963@turing-police.cc.vt.edu> References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org><1525362764.27337.140.camel@strath.ac.uk> <75615.1525374963@turing-police.cc.vt.edu> Message-ID: "Not sure if the V and M misc_attributes are the same for other tape backends..." define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) There are good, valid and fairly efficient tests for any files Spectrum Scale system that has a DMAPI based HSM system installed with it. 
(TSM/HSM, HPSS, LTFS/EE, ...) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From makaplan at us.ibm.com Fri May 4 16:16:26 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 4 May 2018 11:16:26 -0400 Subject: [gpfsug-discuss] Determining which files are migrated or premigated wrt HSM In-Reply-To: References: <01B49CDA-D256-4E42-BDC8-C77B772CA514@ulmer.org><1525362764.27337.140.camel@strath.ac.uk><75615.1525374963@turing-police.cc.vt.edu> Message-ID: define(is_premigrated,(MISC_ATTRIBUTES LIKE '%M%' AND MISC_ATTRIBUTES NOT LIKE '%V%')) define(is_migrated,(MISC_ATTRIBUTES LIKE '%V%')) define(is_resident,(NOT MISC_ATTRIBUTES LIKE '%M%')) THESE are good, valid and fairly efficient tests for any files Spectrum Scale system that has a DMAPI based HSM system installed with it. (TSM/HSM, HPSS, LTFS/EE, ...) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 4 16:38:57 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 4 May 2018 15:38:57 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? Message-ID: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From anobre at br.ibm.com Fri May 4 16:52:27 2018 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Fri, 4 May 2018 15:52:27 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From skylar2 at uw.edu Fri May 4 16:49:12 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Fri, 4 May 2018 15:49:12 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <20180504154912.vabqnigzvyacfex4@utumno.gs.washington.edu> Our experience is that CES (at least NFS/ganesha) can easily consume all of the CPU resources on a system. If you're running it on the same hardware as your NSD services, then you risk delaying native GPFS I/O requests as well. We haven't found a great way to limit the amount of resources that NFS/ganesha can use, though maybe in the future it could be put in a cgroup since it's all user-space? On Fri, May 04, 2018 at 03:38:57PM +0000, Buterbaugh, Kevin L wrote: > Hi All, > > In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ??? but I???ve not found any detailed explanation of why not. > > I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ??? 
say, late model boxes with 2 x 8 core CPU???s, 256 GB RAM, 10 GbE networking ??? is there any reason why I still should not combine the two? > > To answer the question of why I would want to ??? simple, server licenses. > > Thanks??? > > Kevin > > ??? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and Education > Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 4 16:56:44 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 4 May 2018 15:56:44 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu> Hi Anderson, Thanks for the response ? however, the scenario you describe below wouldn?t impact us. We have 8 NSD servers and they can easily provide the needed performance to native GPFS clients. We could also take a downtime if we ever did need to expand in the manner described below. In fact, one of the things that?s kinda surprising to me is that upgrading the SMB portion of CES requires a downtime. Let?s just say that I know for a fact that sernet-samba can be done rolling / live. Kevin On May 4, 2018, at 10:52 AM, Anderson Ferreira Nobre > wrote: Hi Kevin, I think one of the reasons is if you need to add or remove nodes from cluster you will start to face the constrains of this kind of solution. Let's say you have a cluster with two nodes and share the same set of LUNs through SAN. And for some reason you need to add more two nodes that are NSD Servers and Protocol nodes. For the new nodes become NSD Servers, you will have to redistribute the NSD disks among four nodes. But for you do that you will have to umount the filesystems. And for you umount the filesystems you would need to stop protocol services. At the end you will realize that a simple task like that is disrruptive. You won't be able to do online. Abra?os / Regards / Saludos, Anderson Nobre AIX & Power Consultant Master Certified IT Specialist IBM Systems Hardware Client Technical Team ? IBM Systems Lab Services [community_general_lab_services] ________________________________ Phone: 55-19-2132-4317 E-mail: anobre at br.ibm.com [IBM] ----- Original message ----- From: "Buterbaugh, Kevin L" > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: [gpfsug-discuss] Not recommended, but why not? Date: Fri, May 4, 2018 12:39 PM Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? 
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C2b0fc12c4dc24aa1f7fb08d5b1d70c9e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610459542553835&sdata=8aArQLzU5q%2BySqHcoQ3SI420XzP08ICph7F18G7C4pw%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From oehmes at gmail.com Fri May 4 17:26:54 2018 From: oehmes at gmail.com (Sven Oehme) Date: Fri, 04 May 2018 16:26:54 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L < Kevin.Buterbaugh at vanderbilt.edu> wrote: > Hi All, > > In doing some research, I have come across numerous places (IBM docs, > DeveloperWorks posts, etc.) where it is stated that it is not recommended > to run CES on NSD servers ? but I?ve not found any detailed explanation of > why not. > > I understand that CES, especially if you enable SMB, can be a resource > hog. But if I size the servers appropriately ? say, late model boxes with > 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I > still should not combine the two? > > To answer the question of why I would want to ? simple, server licenses. > > Thanks? > > Kevin > > ? > Kevin Buterbaugh - Senior System Administrator > Vanderbilt University - Advanced Computing Center for Research and > Education > Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 <(615)%20875-9633> > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Fri May 4 18:30:05 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 4 May 2018 17:30:05 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: References: Message-ID: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> You also have to be careful with network utilization? we have some very hungry NFS clients in our environment and the NFS traffic can actually DOS other services that need to use the network links. If you configure GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then this could lead to GPFS node evictions if disk leases cannot get renewed. 
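One way to implement the 'jail' Sven describes on a systemd-based protocol node is a CPU quota on the protocol daemons; a rough sketch only, since the unit names depend on how the packages are laid out on your nodes:

# find out what the protocol services are actually called on the CES node first
systemctl list-units | grep -Ei 'ganesha|smb|ctdb'
# then cap them, e.g. at four cores worth of CPU time each (values are illustrative)
systemctl set-property nfs-ganesha.service CPUQuota=400%
systemctl set-property smb.service CPUQuota=400%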
You could limit the amount that SMV/NFS use on the network with something like the tc facility if you?re sharing the network interfaces for GPFS and CES services. HTH, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Sven Oehme Sent: Friday, May 04, 2018 11:27 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Not recommended, but why not? Note: External Email ________________________________ there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. sven On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L > wrote: Hi All, In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? To answer the question of why I would want to ? simple, server licenses. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Fri May 4 23:08:39 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Fri, 4 May 2018 22:08:39 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu> References: <9AF296B0-E8B0-4DE9-A235-97CCE9A58F5F@vanderbilt.edu>, Message-ID: An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Sat May 5 09:57:11 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Sat, 5 May 2018 09:57:11 +0100 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> References: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> Message-ID: <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> On 04/05/18 18:30, Bryan Banister wrote: > You also have to be careful with network utilization? we have some very > hungry NFS clients in our environment and the NFS traffic can actually > DOS other services that need to use the network links.? If you configure > GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then > this could lead to GPFS node evictions if disk leases cannot get > renewed.? You could limit the amount that SMV/NFS use on the network > with something like the tc facility if you?re sharing the network > interfaces for GPFS and CES services. > The right answer to that IMHO is a separate VLAN for the GPFS command/control traffic that is prioritized above all other VLAN's. Do something like mark it as a voice VLAN. Basically don't rely on some OS layer to do the right thing at layer three, enforce it at layer two in the switches. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jagga13 at gmail.com Mon May 7 02:35:19 2018 From: jagga13 at gmail.com (Jagga Soorma) Date: Sun, 6 May 2018 18:35:19 -0700 Subject: [gpfsug-discuss] CES NFS export Message-ID: Hi Guys, We are new to gpfs and have a few client that will be mounting gpfs via nfs. We have configured the exports but all user/group permissions are showing up as nobody. The gateway/protocol nodes can query the uid/gid's via centrify without any issues as well as the clients and the perms look good on a client that natively accesses the gpfs filesystem. Is there some specific config that we might be missing? 
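One quick sanity check before digging into the export definitions is to confirm that a protocol node and an NFS client resolve the same names to the same numeric ids (the account names here are placeholders):

# run on a CES node and on the client - the uid/gid numbers should be identical
id someuser
getent passwd someuser
getent group somegroup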
--
# mmnfs export list --nfsdefs /gpfs/datafs1
Path           Delegations  Clients      Access_Type  Protocols  Transports  Squash          Anonymous_uid  Anonymous_gid  SecType  PrivilegedPort  DefaultDelegations  Manage_Gids  NFS_Commit
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
/gpfs/datafs1  NONE         {nodenames}  RW           3,4        TCP         ROOT_SQUASH     -2             -2             SYS      FALSE           NONE                TRUE         FALSE
/gpfs/datafs1  NONE         {nodenames}  RW           3,4        TCP         NO_ROOT_SQUASH  -2             -2             SYS      FALSE           NONE                TRUE         FALSE
/gpfs/datafs1  NONE         {nodenames}  RW           3,4        TCP         ROOT_SQUASH     -2             -2             SYS      FALSE           NONE                TRUE         FALSE
--

On the nfs clients I see this though:

--
# ls -l
total 0
drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1
drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2
--

Here is our mmnfs config:

--
# mmnfs config list

NFS Ganesha Configuration:
==========================
NFS_PROTOCOLS: 3,4
NFS_PORT: 2049
MNT_PORT: 0
NLM_PORT: 0
RQUOTA_PORT: 0
NB_WORKER: 256
LEASE_LIFETIME: 60
DOMAINNAME: VIRTUAL1.COM
DELEGATIONS: Disabled
==========================

STATD Configuration
==========================
STATD_PORT: 0
==========================

CacheInode Configuration
==========================
ENTRIES_HWMARK: 1500000
==========================

Export Defaults
==========================
ACCESS_TYPE: NONE
PROTOCOLS: 3,4
TRANSPORTS: TCP
ANONYMOUS_UID: -2
ANONYMOUS_GID: -2
SECTYPE: SYS
PRIVILEGEDPORT: FALSE
MANAGE_GIDS: TRUE
SQUASH: ROOT_SQUASH
NFS_COMMIT: FALSE
==========================

Log Configuration
==========================
LOG_LEVEL: EVENT
==========================

Idmapd Configuration
==========================
LOCAL-REALMS: LOCALDOMAIN
DOMAIN: LOCALDOMAIN
==========================
--

Thanks!

From jagga13 at gmail.com Mon May 7 04:05:01 2018
From: jagga13 at gmail.com (Jagga Soorma)
Date: Sun, 6 May 2018 20:05:01 -0700
Subject: Re: [gpfsug-discuss] CES NFS export
In-Reply-To: References: Message-ID:

Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed.

Thanks!

On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote:
> Hi Guys,
>
> We are new to gpfs and have a few client that will be mounting gpfs
> via nfs. We have configured the exports but all user/group
> permissions are showing up as nobody. The gateway/protocol nodes can
> query the uid/gid's via centrify without any issues as well as the
> clients and the perms look good on a client that natively accesses the
> gpfs filesystem. Is there some specific config that we might be
> missing?
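When a mismatched NFSv4 idmapd domain is the cause, the client usually also needs its mapping cache cleared before ownership displays correctly again; a small sketch, with the mount point being illustrative:

# after aligning the server-side DOMAINNAME and the client's Domain= in /etc/idmapd.conf
nfsidmap -c           # clear the client's NFSv4 id mapping cache
ls -ln /mnt/datafs1   # numeric uids/gids should match the server side
ls -l /mnt/datafs1    # and names should resolve instead of showing nobody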
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! From YARD at il.ibm.com Mon May 7 06:16:15 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Mon, 7 May 2018 08:16:15 +0300 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Hi If you want to use NFSv3 , define only NFSv3 on the export. In case you work with NFSv4 - you should have "DOMAIN\user" all the way - so this way you will not get any user mismatch errors, and see permissions like nobody. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jagga Soorma To: gpfsug-discuss at spectrumscale.org Date: 05/07/2018 06:05 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From chetkulk at in.ibm.com Mon May 7 09:08:33 2018 From: chetkulk at in.ibm.com (Chetan R Kulkarni) Date: Mon, 7 May 2018 13:38:33 +0530 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: Make sure NFSv4 ID Mapping value matches on client and server. On server side (i.e. CES nodes); you can set as below: $ mmnfs config change IDMAPD_DOMAIN=test.com On client side (e.g. 
RHEL NFS client); one can set it using Domain attribute in /etc/idmapd.conf file. $ egrep ^Domain /etc/idmapd.conf Domain = test.com [root at rh73node2 2018_05_07-13:31:11 ~]$ $ service nfs-idmap restart Please refer following link for the details: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/b1ladm_authconsidfornfsv4access.htm Thanks, Chetan. From: "Yaron Daniel" To: gpfsug main discussion list Date: 05/07/2018 10:46 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi If you want to use NFSv3 , define only NFSv3 on the export. In case you work with NFSv4 - you should have "DOMAIN\user" all the way - so this way you will not get any user mismatch errors, and see permissions like nobody. Regards Yaron 94 Em Daniel Ha'Moshavot Rd Storage Petach Tiqva, Architect 49527 IBM Israel Global Markets, Systems HW Sales Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel IBM Storage Strategy and Solutions v1IBM Storage Management and Data Protection v1 Related image From: Jagga Soorma To: gpfsug-discuss at spectrumscale.org Date: 05/07/2018 06:05 AM Subject: Re: [gpfsug-discuss] CES NFS export Sent by: gpfsug-discuss-bounces at spectrumscale.org Looks like this is due to nfs v4 and idmapd domain not being configured correctly. I am going to test further and reach out if more assistance is needed. Thanks! On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > Hi Guys, > > We are new to gpfs and have a few client that will be mounting gpfs > via nfs. We have configured the exports but all user/group > permissions are showing up as nobody. The gateway/protocol nodes can > query the uid/gid's via centrify without any issues as well as the > clients and the perms look good on a client that natively accesses the > gpfs filesystem. Is there some specific config that we might be > missing? 
> > -- > # mmnfs export list --nfsdefs /gpfs/datafs1 > Path Delegations Clients > Access_Type Protocols Transports Squash Anonymous_uid > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > NFS_Commit > ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE NONE > TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > ROOT_SQUASH -2 -2 SYS FALSE > NONE TRUE FALSE > -- > > On the nfs clients I see this though: > > -- > # ls -l > total 0 > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > -- > > Here is our mmnfs config: > > -- > # mmnfs config list > > NFS Ganesha Configuration: > ========================== > NFS_PROTOCOLS: 3,4 > NFS_PORT: 2049 > MNT_PORT: 0 > NLM_PORT: 0 > RQUOTA_PORT: 0 > NB_WORKER: 256 > LEASE_LIFETIME: 60 > DOMAINNAME: VIRTUAL1.COM > DELEGATIONS: Disabled > ========================== > > STATD Configuration > ========================== > STATD_PORT: 0 > ========================== > > CacheInode Configuration > ========================== > ENTRIES_HWMARK: 1500000 > ========================== > > Export Defaults > ========================== > ACCESS_TYPE: NONE > PROTOCOLS: 3,4 > TRANSPORTS: TCP > ANONYMOUS_UID: -2 > ANONYMOUS_GID: -2 > SECTYPE: SYS > PRIVILEGEDPORT: FALSE > MANAGE_GIDS: TRUE > SQUASH: ROOT_SQUASH > NFS_COMMIT: FALSE > ========================== > > Log Configuration > ========================== > LOG_LEVEL: EVENT > ========================== > > Idmapd Configuration > ========================== > LOCAL-REALMS: LOCALDOMAIN > DOMAIN: LOCALDOMAIN > ========================== > -- > > Thanks! _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=uic-29lyJ5TCiTRi0FyznYhKJx5I7Vzu80WyYuZ4_iM&m=3k9qWcL7UfySpNVW2J8S1XsIekUHTHBBYQhN7cPVg3Q&s=844KFrfpsN6nT-DKV6HdfS8EEejdwHuQxbNR8cX2cyc&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15633834.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15657152.gif Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15750750.gif Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15967392.gif Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From Kevin.Buterbaugh at Vanderbilt.Edu Mon May 7 16:05:36 2018
From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L)
Date: Mon, 7 May 2018 15:05:36 +0000
Subject: Re: [gpfsug-discuss] Not recommended, but why not?
In-Reply-To: References: Message-ID: <4E0D4232-14FC-4229-BFBC-B61242473456@vanderbilt.edu>

Hi All,

I want to thank all of you who took the time to respond to this question - your thoughts / suggestions are much appreciated. What I'm taking away from all of this is that it is OK to run CES on NSD servers as long as you are very careful in how you set things up. This would include:

1. Making sure you have enough CPU horsepower and using cgroups to limit how much CPU SMB and NFS can utilize.
2. Making sure you have enough RAM - 256 GB sounds like it should be "enough" when using SMB.
3. Making sure you have your network config properly set up. We would be able to provide three separate, dedicated 10 GbE links for GPFS daemon communication, GPFS multi-cluster link to our HPC cluster, and SMB / NFS communication.
4. Making sure you have good monitoring of all of the above in place.

Have I missed anything or does anyone have any additional thoughts? Thanks...

Kevin

On May 4, 2018, at 11:26 AM, Sven Oehme > wrote:

there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues.

sven

On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L < Kevin.Buterbaugh at vanderbilt.edu> wrote:

Hi All,

In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers - but I've not found any detailed explanation of why not.

I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately - say, late model boxes with 2 x 8 core CPU's, 256 GB RAM, 10 GbE networking - is there any reason why I still should not combine the two?

To answer the question of why I would want to - simple, server licenses.

Thanks...

Kevin
--
Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C6ec06d262ea84752b1d408d5b1dbe2cc%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610480314880560&sdata=J5%2F9X4dNeLrGKH%2BwmhIObVK%2BQ4oyoIa1vZ9F2yTU854%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Mon May 7 17:53:19 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 7 May 2018 16:53:19 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> References: <7dfa04aceca74a5eb530ce10f5dd57f3@jumptrading.com> <426d4185-a163-2eb0-954d-7c1947fea607@strath.ac.uk> Message-ID: <9b83806da68c4afe85a048ac736e0d5c@jumptrading.com> Sure, many ways to solve the same problem, just depends on where you want to have the controls. Having a separate VLAN doesn't give you as fine grained controls over each network workload you are using, such as metrics collection, monitoring, GPFS, SSH, NFS vs SMB, vs Object, etc. But it doesn't matter how it's done as long as you ensure GPFS has enough bandwidth to function, cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Jonathan Buzzard Sent: Saturday, May 05, 2018 3:57 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Not recommended, but why not? Note: External Email ------------------------------------------------- On 04/05/18 18:30, Bryan Banister wrote: > You also have to be careful with network utilization? we have some very > hungry NFS clients in our environment and the NFS traffic can actually > DOS other services that need to use the network links. If you configure > GPFS admin/daemon traffic over the same link as the SMB/NFS traffic then > this could lead to GPFS node evictions if disk leases cannot get > renewed. You could limit the amount that SMV/NFS use on the network > with something like the tc facility if you?re sharing the network > interfaces for GPFS and CES services. > The right answer to that IMHO is a separate VLAN for the GPFS command/control traffic that is prioritized above all other VLAN's. Do something like mark it as a voice VLAN. Basically don't rely on some OS layer to do the right thing at layer three, enforce it at layer two in the switches. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. From jfosburg at mdanderson.org Tue May 8 14:32:54 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Tue, 8 May 2018 13:32:54 +0000 Subject: [gpfsug-discuss] Snapshots for backups Message-ID: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From LloydDean at us.ibm.com Tue May 8 15:59:37 2018 From: LloydDean at us.ibm.com (Lloyd Dean) Date: Tue, 8 May 2018 14:59:37 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: Jonathan, First it must be understood the snap is either at the filesystems or fileset, and more importantly is not an application level backup. This is a huge difference to say Protects many application integrations like exchange, databases, etc. With that understood the approach is similar to what others are doing. Just understand the restrictions. Lloyd Dean IBM Software Storage Architect/Specialist Communication & CSI Heartland Email: LloydDean at us.ibm.com Phone: (720) 395-1246 > On May 8, 2018, at 8:44 AM, Fosburgh,Jonathan wrote: > > We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: > > Replicate to a remote filesystem (I assume this is best done via AFM). 
> Take periodic (probably daily) snapshots at the remote site. > > The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? > The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From UWEFALKE at de.ibm.com Tue May 8 18:20:49 2018 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Tue, 8 May 2018 19:20:49 +0200 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: One thought: file A is created and synched out. it is changed bit later (say a few days). You have the original version in one snapshot, and the modified in the eternal fs (unless changed again). At some day you will need to delete the snapshot with the initial version since you can keep only a finite number. The initial version is gone then forever. Mit freundlichen Gr??en / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Gesch?ftsf?hrung: Thomas Wolter, Sven Schoo? Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From: "Fosburgh,Jonathan" To: gpfsug main discussion list Date: 08/05/2018 15:44 Subject: [gpfsug-discuss] Snapshots for backups Sent by: gpfsug-discuss-bounces at spectrumscale.org We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? 
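As an illustration of the kind of scripting involved, a bare-bones daily rotation on the remote cluster could look like the following; the file system name, snapshot naming scheme and 14-day retention are assumptions, and error handling is omitted:

#!/bin/bash
# rotate daily snapshots of the DR copy; intended to be run from cron on the remote cluster
FS=remotefs
TODAY=$(date +%Y%m%d)
OLD=$(date -d "14 days ago" +%Y%m%d)
/usr/lpp/mmfs/bin/mmcrsnapshot $FS daily-$TODAY
/usr/lpp/mmfs/bin/mmdelsnapshot $FS daily-$OLD   # errors harmlessly until 14 snapshots exist

# example crontab entry:
# 30 1 * * * /usr/local/sbin/rotate-gpfs-snapshots.sh >> /var/log/gpfs-snapshots.log 2>&1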
I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From valdis.kletnieks at vt.edu Tue May 8 18:24:37 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Tue, 08 May 2018 13:24:37 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: Message-ID: <29277.1525800277@turing-police.cc.vt.edu> On Tue, 08 May 2018 14:59:37 -0000, "Lloyd Dean" said: > First it must be understood the snap is either at the filesystems or fileset, > and more importantly is not an application level backup. This is a huge > difference to say Protects many application integrations like exchange, > databases, etc. And remember that a GPFS snapshot will only capture the disk as GPFS knows about it - any memory-cached data held by databases etc will *not* be captured (leading to the possibility of an inconsistent version being snapped). You'll need to do some sort of handshaking with any databases to get them to do a "flush everything to disk" to ensure on-disk consistency. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Tue May 8 19:23:35 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Tue, 8 May 2018 18:23:35 +0000 Subject: [gpfsug-discuss] Node list error Message-ID: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 8 21:51:02 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 8 May 2018 20:51:02 +0000 Subject: [gpfsug-discuss] Node list error In-Reply-To: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> Message-ID: <342034e96e1f409b889b0e9aa4036098@jumptrading.com> What does `mmlsnodeclass -N ` give you? -B From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Node list error Note: External Email ________________________________ Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 
2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeattie at au1.ibm.com Tue May 8 22:38:09 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 8 May 2018 21:38:09 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed May 9 13:16:03 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 12:16:03 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? (obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. 
The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Wed May 9 13:50:20 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 9 May 2018 12:50:20 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 9 14:13:04 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2018 14:13:04 +0100 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: <1525871584.27337.200.camel@strath.ac.uk> On Wed, 2018-05-09 at 12:50 +0000, Andrew Beattie wrote: > ? > From my perspective the difference / benefits of using something like > Protect and using backup policies over snapshot policies - even if > its disk based rather than tape based,? is that with a backup you get > far better control over your Disaster Recovery process. The policy > integration with Scale and Protect is very comprehensive.? 
If the > issue is Tape time for recovery - simply change from tape medium to a > Disk storage pool as your repository for Protect, you get all the > benefits of Spectrum Protect and the restore speeds of disk, (you > might even - subject to type of data start to see some benefits of > duplication and compression for your backups as you will be able to > take advantage of Protect's dedupe and compression for the disk based > storage pool, something that's not available on your tape > environment. The way I see it is that snapshots are not backup. They are handy for quick recovery from file deletion mistakes. They are utterly useless when your disaster recovery is needed because for example all your NSD descriptors have been overwritten (not my mistake I hasten to add). AT that point your snapshots are for jack. > ? > If your looking for a way to further reduce your disk costs then > potentially the benefits of Object Storage erasure coding might be > worth looking at although for a 1 or 2 site scenario the overheads > are pretty much the same if you use some variant of distributed raid > or if you use erasure coding. > ? At scale tape is a lot cheaper than disk. Also sorry your data is going to take a couple of weeks to recover goes down a lot better than sorry your data is gone for ever. Finally it's also hard for a hacker or disgruntled admin to wipe your tapes in a short period of time. The robot don't go that fast. Your disks/file systems on the other hand effectively be gone in seconds. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jfosburg at mdanderson.org Wed May 9 14:29:23 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 13:29:23 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: I agree with your points. The thought here, is that if we had a complete loss of the primary site, we could bring up the secondary in relatively short order (hours or days instead of weeks or months). Maybe this is true, and maybe this isn?t, though I do see (and have advocated for) a DR setup much like that. My concern is that the use of snapshots as a substitute for traditional backups for a Scale environment is that that is an inappropriate use of the technology, particularly when we have a tool designed for that and that works. Let me take a moment to reiterate something that may be getting lost. The snapshots will be taken against the remote copy and recovered from there. We will not be relying on the primary site for this function. We were starting to look at ESS as a destination for these backups. I have also considered that a multisite ICOS implementation might work to satisfy some of our general backup requirements. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, May 9, 2018 at 7:51 AM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups From my perspective the difference / benefits of using something like Protect and using backup policies over snapshot policies - even if its disk based rather than tape based, is that with a backup you get far better control over your Disaster Recovery process. The policy integration with Scale and Protect is very comprehensive. 
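As a rough illustration of the kind of cron scripting this plan would involve, a daily snapshot rotation on the remote file system could look something like the sketch below. It is an untested outline rather than a recommended procedure: the file system name gpfs1, the snapshot prefix, the retention count and the GNU awk/head usage are all assumptions, and the mmcrsnapshot, mmlssnapshot and mmdelsnapshot invocations should be checked against the command reference for the Scale release in use (independent fileset snapshots would add the -j option).

#!/bin/bash
# Sketch only: keep the 14 most recent daily global snapshots of file system gpfs1.
FS=gpfs1
PREFIX=daily
KEEP=14
# Create today's snapshot, e.g. daily-20180509.
/usr/lpp/mmfs/bin/mmcrsnapshot "$FS" "${PREFIX}-$(date +%Y%m%d)" || exit 1
# List snapshots with that prefix (first column of mmlssnapshot output), oldest
# first, and delete everything except the newest $KEEP of them.
/usr/lpp/mmfs/bin/mmlssnapshot "$FS" | awk -v p="$PREFIX" '$1 ~ "^"p"-" {print $1}' | sort | head -n -"$KEEP" | while read -r snap; do
    /usr/lpp/mmfs/bin/mmdelsnapshot "$FS" "$snap"
done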
If the issue is Tape time for recovery - simply change from tape medium to a Disk storage pool as your repository for Protect, you get all the benefits of Spectrum Protect and the restore speeds of disk, (you might even - subject to type of data start to see some benefits of duplication and compression for your backups as you will be able to take advantage of Protect's dedupe and compression for the disk based storage pool, something that's not available on your tape environment. If your looking for a way to further reduce your disk costs then potentially the benefits of Object Storage erasure coding might be worth looking at although for a 1 or 2 site scenario the overheads are pretty much the same if you use some variant of distributed raid or if you use erasure coding. Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: Re: [gpfsug-discuss] Snapshots for backups Date: Wed, May 9, 2018 10:28 PM Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? 
(obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. 
If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jfosburg at mdanderson.org Wed May 9 14:31:36 2018 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 9 May 2018 13:31:36 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: <81738C1C-FAFC-416A-9937-B99E86809EE4@mdanderson.org> That is the use case for snapshots, taken at the remote site: recovery from accidental deletion. On 5/9/18, 8:13 AM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Jonathan Buzzard" wrote: The way I see it is that snapshots are not backup. They are handy for quick recovery from file deletion mistakes. They are utterly useless when your disaster recovery is needed because for example all your NSD descriptors have been overwritten (not my mistake I hasten to add). AT that point your snapshots are for jack. The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. From MKEIGO at jp.ibm.com Wed May 9 14:36:37 2018 From: MKEIGO at jp.ibm.com (Keigo Matsubara) Date: Wed, 9 May 2018 22:36:37 +0900 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: , <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: Not sure if the topic is appropriate, but I know of an installation that employs IBM Spectrum Scale's snapshot function along with IBM Spectrum Protect to save the backup data onto LTO7 tape media. Both software components are running on Linux on Power (RHEL 7.3 BE), if that matters. Of course, snapshots are taken per independent fileset. --- Keigo Matsubara, Storage Solutions Client Technical Specialist, IBM Japan TEL: +81-50-3150-0595, T/L: 6205-0595 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Wed May 9 14:37:43 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Wed, 9 May 2018 13:37:43 +0000 Subject: [gpfsug-discuss] mmlsnsd -m or -M Message-ID: <6f1760ea2d1244959d25763442ba96c0@SMXRF105.msg.hukrf.de> Hallo All, we are experiencing some difficulties with mmlsnsd -m on 4.2.3.8 and 5.0.0.2. Are there any known bugs or changes here that would stop this option doing what it should? The output for the suboptions -m and -M is now the same! Regards Renar Renar Grunenberg Abteilung Informatik
Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 9 15:23:59 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 9 May 2018 14:23:59 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> Message-ID: <08326DC0-30CF-4A63-A111-1EDBDC19E3F0@bham.ac.uk> For DR, what about making your secondary site mostly an object store, use TCT to pre-migrate the data out and then use SOBAR to dump the catalogue. You then restore the SOBAR dump to the DR site and have pretty much instant most of your data available. You could do the DR with tape/pre-migration as well, it?s just slower. OFC with SOBAR, you are just restoring the data that is being accessed or you target to migrate back in. Equally Protect can also backup/migrate to an object pool (note you can?t currently migrate in the Protect sense from a TSM object pool to a TSM disk/tape pool). And put snapshots in at home for the instant ?need to restore a file?. If this is appropriate depends on what you agree your RPO to be. Scale/Protect for us allows us to recover data N months after the user deleted the file and didn?t notice. Simon From: on behalf of "jfosburg at mdanderson.org" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Wednesday, 9 May 2018 at 14:30 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups I agree with your points. The thought here, is that if we had a complete loss of the primary site, we could bring up the secondary in relatively short order (hours or days instead of weeks or months). Maybe this is true, and maybe this isn?t, though I do see (and have advocated for) a DR setup much like that. My concern is that the use of snapshots as a substitute for traditional backups for a Scale environment is that that is an inappropriate use of the technology, particularly when we have a tool designed for that and that works. Let me take a moment to reiterate something that may be getting lost. The snapshots will be taken against the remote copy and recovered from there. 
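To make the SOBAR flow above a little more concrete, the core of it is an image backup of the file system metadata at home and an image restore at the recovery site once the data itself has been premigrated to the HSM/TCT pool. The two commands below are only a sketch: the file system name gpfs1 and the image path are placeholders, the invocations are assumed to be close to the defaults, and the real options, work directories and Spectrum Protect setup need to come from the SOBAR documentation rather than from this outline.

# Home site, after file data has been premigrated to the space-management pool:
# capture the file system configuration and metadata image (assumed invocation).
/usr/lpp/mmfs/bin/mmimgbackup gpfs1

# Recovery site, after recreating an empty file system with a compatible layout:
# restore the saved image, then recall or migrate file data back on demand (assumed invocation).
/usr/lpp/mmfs/bin/mmimgrestore gpfs1 /path/to/saved/image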
We will not be relying on the primary site for this function. We were starting to look at ESS as a destination for these backups. I have also considered that a multisite ICOS implementation might work to satisfy some of our general backup requirements. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Wednesday, May 9, 2018 at 7:51 AM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups From my perspective the difference / benefits of using something like Protect and using backup policies over snapshot policies - even if its disk based rather than tape based, is that with a backup you get far better control over your Disaster Recovery process. The policy integration with Scale and Protect is very comprehensive. If the issue is Tape time for recovery - simply change from tape medium to a Disk storage pool as your repository for Protect, you get all the benefits of Spectrum Protect and the restore speeds of disk, (you might even - subject to type of data start to see some benefits of duplication and compression for your backups as you will be able to take advantage of Protect's dedupe and compression for the disk based storage pool, something that's not available on your tape environment. If your looking for a way to further reduce your disk costs then potentially the benefits of Object Storage erasure coding might be worth looking at although for a 1 or 2 site scenario the overheads are pretty much the same if you use some variant of distributed raid or if you use erasure coding. Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: Re: [gpfsug-discuss] Snapshots for backups Date: Wed, May 9, 2018 10:28 PM Our existing environments are using Scale+Protect with tape. Management wants us to move away from tape where possible. We do one filesystem per cluster. So, there will be two new clusters. We are still finalizing the sizing, but the expectation is both of them will be somewhere in the3-5PB range. We understand that if we replicate corrupted data, the corruption will go with it. But the same would be true for a backup (unless I am not quite following you). The thought is that not using Protect and simply doing replication with snapshots will enable faster recovery from a catastrophic failure of the production environment, whereas with Protect we would have to restore petabytes of data. FWIW, this is the same method we are using in our NAS (Isilon), but those utilities are designed for that type of use, and there is no equivalent to mmbackup. Our largest Scale environment is 7+PB, and we can complete a backup of it in one night with mmbackup. We abandoned tape backups on our NAS at around 600TB. From: on behalf of Andrew Beattie Reply-To: gpfsug main discussion list Date: Tuesday, May 8, 2018 at 4:38 PM To: "gpfsug-discuss at spectrumscale.org" Cc: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] Snapshots for backups Hi Jonathan, First off a couple of questions: 1) your using Scale+Protect with Tape today? 2) your new filesystems will be within the same cluster ? 
3) What capacity are the new filesystems Based on the above then: AFM-DR will give you the Replication that you are talking about -- please talk to your local IBM people about the limitations of AFM-DR to ensure it will work for your use case Scale supports snapshots - but as mentioned snapshots are not a backup of your filesystem - if you snapshot corrupt data you will replicate that to the DR location If you are going to spin up new infrastructure in a DR location have you considered looking at an object store and using your existing Protect environment to allow you to Protect environment to HSM out to a Disk basked object storage pool distributed over disparate geographic locations? (obviously capacity dependent) Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: "Fosburgh,Jonathan" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Snapshots for backups Date: Tue, May 8, 2018 11:43 PM We are looking at standing up some new filesystems and management would like us to investigate alternative options to Scale+Protect. In particular, they are interested in the following: Replicate to a remote filesystem (I assume this is best done via AFM). Take periodic (probably daily) snapshots at the remote site. The thought here is that this gives us the ability to restore data more quickly than we could with tape and also gives us a DR system in the event of a failure at the primary site. Does anyone have experience with this kind of setup? I know this is a solution that will require a fair amount of scripting and some cron jobs, both of which will introduce a level of human error. Are there any other gotchas we should be aware of? The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkr at lbl.gov Wed May 9 17:01:30 2018 From: kkr at lbl.gov (Kristy Kallback-Rose) Date: Wed, 9 May 2018 09:01:30 -0700 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <1525871584.27337.200.camel@strath.ac.uk> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> Message-ID: +1 for benefits of tape and also power consumption/heat production (may help a case to management) is obviously better with things that don?t have to be spinning all the time. > > At scale tape is a lot cheaper than disk. Also sorry your data is going > to take a couple of weeks to recover goes down a lot better than sorry > your data is gone for ever. > > Finally it's also hard for a hacker or disgruntled admin to wipe your > tapes in a short period of time. The robot don't go that fast. Your > disks/file systems on the other hand effectively be gone in seconds. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Wed May 9 20:01:55 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 9 May 2018 15:01:55 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org><1525871584.27337.200.camel@strath.ac.uk> Message-ID: I see there are also low-power / zero-power disk archive/arrays available. Any experience with those? From: Kristy Kallback-Rose To: gpfsug main discussion list Date: 05/09/2018 12:20 PM Subject: Re: [gpfsug-discuss] Snapshots for backups Sent by: gpfsug-discuss-bounces at spectrumscale.org +1 for benefits of tape and also power consumption/heat production (may help a case to management) is obviously better with things that don?t have to be spinning all the time. > > At scale tape is a lot cheaper than disk. Also sorry your data is going > to take a couple of weeks to recover goes down a lot better than sorry > your data is gone for ever. > > Finally it's also hard for a hacker or disgruntled admin to wipe your > tapes in a short period of time. The robot don't go that fast. Your > disks/file systems on the other hand effectively be gone in seconds. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Wed May 9 21:33:26 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Wed, 09 May 2018 16:33:26 -0400 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org><1525871584.27337.200.camel@strath.ac.uk> Message-ID: <31428.1525898006@turing-police.cc.vt.edu> On Wed, 09 May 2018 15:01:55 -0400, "Marc A Kaplan" said: > I see there are also low-power / zero-power disk archive/arrays available. > Any experience with those? The last time I looked at those (which was a few years ago) they were competitive with tape for power consumption, but not on cost per terabyte - it takes a lot less cable and hardware to hook up a dozen tape drives and a robot arm that can reach 10,000 volumes than it does to wire up 10,000 disks of which only 500 are actually spinning at any given time... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From skylar2 at uw.edu Wed May 9 21:46:45 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Wed, 9 May 2018 20:46:45 +0000 Subject: [gpfsug-discuss] Snapshots for backups In-Reply-To: <31428.1525898006@turing-police.cc.vt.edu> References: <1B035EFE-EF8E-43A7-B78D-083C97A36392@mdanderson.org> <1525871584.27337.200.camel@strath.ac.uk> <31428.1525898006@turing-police.cc.vt.edu> Message-ID: <20180509204645.fy5js7kjxslihjjr@utumno.gs.washington.edu> On Wed, May 09, 2018 at 04:33:26PM -0400, valdis.kletnieks at vt.edu wrote: > On Wed, 09 May 2018 15:01:55 -0400, "Marc A Kaplan" said: > > > I see there are also low-power / zero-power disk archive/arrays available. > > Any experience with those? > > The last time I looked at those (which was a few years ago) they were competitive > with tape for power consumption, but not on cost per terabyte - it takes a lot less > cable and hardware to hook up a dozen tape drives and a robot arm that can > reach 10,000 volumes than it does to wire up 10,000 disks of which only 500 are > actually spinning at any given time... I also wonder what the lifespan of cold-storage hard drives are relative to tape. With BaFe universal for LTO now, our failure rate for tapes has gone way down (not that it was very high relative to HDDs anyways). FWIW, the operating+capital costs we recharge our grants for tape storage is ~50% of what we recharge them for bulk disk storage. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From daniel.kidger at uk.ibm.com Thu May 10 11:19:49 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Thu, 10 May 2018 10:19:49 +0000 Subject: [gpfsug-discuss] Not recommended, but why not? In-Reply-To: <4E0D4232-14FC-4229-BFBC-B61242473456@vanderbilt.edu> Message-ID: One additional point to consider is what happens on a hardware failure. eg. 
If you have two NSD servers that are both CES servers and one fails, then there is a double-failure at exactly the same point in time. Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 7 May 2018, at 16:39, Buterbaugh, Kevin L wrote: > > Hi All, > > I want to thank all of you who took the time to respond to this question ? your thoughts / suggestions are much appreciated. > > What I?m taking away from all of this is that it is OK to run CES on NSD servers as long as you are very careful in how you set things up. This would include: > > 1. Making sure you have enough CPU horsepower and using cgroups to limit how much CPU SMB and NFS can utilize. > 2. Making sure you have enough RAM ? 256 GB sounds like it should be ?enough? when using SMB. > 3. Making sure you have your network config properly set up. We would be able to provide three separate, dedicated 10 GbE links for GPFS daemon communication, GPFS multi-cluster link to our HPC cluster, and SMB / NFS communication. > 4. Making sure you have good monitoring of all of the above in place. > > Have I missed anything or does anyone have any additional thoughts? Thanks? > > Kevin > >> On May 4, 2018, at 11:26 AM, Sven Oehme wrote: >> >> there is nothing wrong with running CES on NSD Servers, in fact if all CES nodes have access to all LUN's of the filesystem thats the fastest possible configuration as you eliminate 1 network hop. >> the challenge is always to do the proper sizing, so you don't run out of CPU and memory on the nodes as you overlay functions. as long as you have good monitoring in place you are good. if you want to do the extra precaution, you could 'jail' the SMB and NFS daemons into a c-group on the node, i probably wouldn't limit memory but CPU as this is the more critical resource to prevent expels and other time sensitive issues. >> >> sven >> >>> On Fri, May 4, 2018 at 8:39 AM Buterbaugh, Kevin L wrote: >>> Hi All, >>> >>> In doing some research, I have come across numerous places (IBM docs, DeveloperWorks posts, etc.) where it is stated that it is not recommended to run CES on NSD servers ? but I?ve not found any detailed explanation of why not. >>> >>> I understand that CES, especially if you enable SMB, can be a resource hog. But if I size the servers appropriately ? say, late model boxes with 2 x 8 core CPU?s, 256 GB RAM, 10 GbE networking ? is there any reason why I still should not combine the two? >>> >>> To answer the question of why I would want to ? simple, server licenses. >>> >>> Thanks? >>> >>> Kevin >>> >>> ? 
>>> Kevin Buterbaugh - Senior System Administrator >>> Vanderbilt University - Advanced Computing Center for Research and Education >>> Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 >>> >>> >>> >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C6ec06d262ea84752b1d408d5b1dbe2cc%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636610480314880560&sdata=J5%2F9X4dNeLrGKH%2BwmhIObVK%2BQ4oyoIa1vZ9F2yTU854%3D&reserved=0 > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Thu May 10 13:51:45 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Thu, 10 May 2018 15:51:45 +0300 Subject: [gpfsug-discuss] Node list error In-Reply-To: <342034e96e1f409b889b0e9aa4036098@jumptrading.com> References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> <342034e96e1f409b889b0e9aa4036098@jumptrading.com> Message-ID: Hi Just to verify - there is no Firewalld running or Selinux ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Bryan Banister To: gpfsug main discussion list Date: 05/08/2018 11:51 PM Subject: Re: [gpfsug-discuss] Node list error Sent by: gpfsug-discuss-bounces at spectrumscale.org What does `mmlsnodeclass -N ` give you? -B From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Node list error Note: External Email Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633 Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From Kevin.Buterbaugh at Vanderbilt.Edu Thu May 10 14:37:05 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Thu, 10 May 2018 13:37:05 +0000 Subject: [gpfsug-discuss] Node list error In-Reply-To: References: <661E011D-9A85-4321-AD54-FB7771DED649@vanderbilt.edu> <342034e96e1f409b889b0e9aa4036098@jumptrading.com> Message-ID: Hi Yaron, Thanks for the response ? no firewalld nor SELinux. I went ahead and opened up a PMR and it turns out this is a known defect (at least in GPFS 5, I may have been the first to report it in GPFS 4.2.3.x) and IBM is working on a fix. Thanks? Kevin On May 10, 2018, at 7:51 AM, Yaron Daniel > wrote: Hi Just to verify - there is no Firewalld running or Selinux ? Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Bryan Banister > To: gpfsug main discussion list > Date: 05/08/2018 11:51 PM Subject: Re: [gpfsug-discuss] Node list error Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ What does `mmlsnodeclass -N ` give you? -B From:gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Buterbaugh, Kevin L Sent: Tuesday, May 08, 2018 1:24 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Node list error Note: External Email ________________________________ Hi All, I can open a PMR for this if necessary, but does anyone know offhand what the following messages mean: 2018-05-08_12:16:39.567-0500: [I] Calling user exit script mmNodeRoleChange: event ccrFileChange, Async command /usr/lpp/mmfs/bin/mmsysmonc. 2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: event ccrFileChange, Async command /usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh. 2018-05-08_12:16:46.325-0500: [E] Node list error. 
Can not find all nodes in list 1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823, 2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event is registered This is GPFS 4.2.3-8. We have not done any addition or deletion of nodes and have not had a bunch of nodes go offline, either. Thanks? Kevin ? Kevin Buterbaugh - Senior System Administrator Vanderbilt University - Advanced Computing Center for Research and Education Kevin.Buterbaugh at vanderbilt.edu- (615)875-9633 ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C58826c68a116427f5c2d08d5b674e2b2%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C1%7C636615535509439494&sdata=eB3wc4PtGINXs0pAA9GYowE6ERimMahPBWzejHuOexQ%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From JRLang at uwyo.edu Thu May 10 20:32:00 2018 From: JRLang at uwyo.edu (Jeffrey R. Lang) Date: Thu, 10 May 2018 19:32:00 +0000 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? In-Reply-To: References: Message-ID: Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From luis.bolinches at fi.ibm.com Thu May 10 23:22:01 2018 From: luis.bolinches at fi.ibm.com (Luis Bolinches) Date: Fri, 11 May 2018 00:22:01 +0200 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? In-Reply-To: References: Message-ID: https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point Thanks -- Yst?v?llisin terveisin / Kind regards / Saludos cordiales / Salutations Luis Bolinches Consultant IT Specialist Mobile Phone: +358503112585 https://www.youracclaim.com/user/luis-bolinches "If you always give you will always have" -- Anonymous From: "Jeffrey R. Lang" To: gpfsug main discussion list Date: 11/05/2018 00:05 Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? 
Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Ellei edell? ole toisin mainittu: / Unless stated otherwise above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Fri May 11 04:32:42 2018 From: knop at us.ibm.com (Felipe Knop) Date: Thu, 10 May 2018 23:32:42 -0400 Subject: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x orabove? In-Reply-To: References: Message-ID: Luis, Correct. Jeff: The Spectrum Scale team has been actively working on the support for RHEL 7.5 . Since code changes will be required, the support will require upcoming 4.2.3 and 5.0 PTFs. The FAQ will be updated when support for 7.5 becomes available. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Luis Bolinches To: gpfsug main discussion list Date: 05/10/2018 06:22 PM Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org https://www.ibm.com/support/knowledgecenter/SSFKCN/com.ibm.cluster.gpfs.doc/gpfs_faqs/gpfsclustersfaq.html#linuxrest By reading table 30, none at this point Thanks -- Yst?v?llisin terveisin / Kind regards / Saludos cordiales / Salutations Luis Bolinches Consultant IT Specialist Mobile Phone: +358503112585 https://www.youracclaim.com/user/luis-bolinches "If you always give you will always have" -- Anonymous From: "Jeffrey R. Lang" To: gpfsug main discussion list Date: 11/05/2018 00:05 Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Just a quick check. I upgraded my test GPFS system to RHEL 7.5 today and now GPFS 4.2.3-6 and 4.2.3-8 no longer compile properly. What version of GPFS (Spectrum Scale) is support on RHEL 7.5? 
Thanks Jeff -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support) Sent: Monday, December 4, 2017 4:29 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Can gpfs 4.2.3-4.2 work for kernel 3.12.x or above? The FAQ at: https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html#linux Lists support with (e.g. Ubutu 16.04.2) with kernel 4.4.0-62, so likely it would work with a build your own kernel, but that doesn?t mean it is **supported** Simon On 04/12/2017, 09:52, "gpfsug-discuss-bounces at spectrumscale.org on behalf of z.han at imperial.ac.uk" wrote: Hi All, Any one is using a Linux kernel 3.12.x or above to run gpfs 4.2.3-4.2? I mean you've compiled your own kernel without paying for a professional service. We're stuck by CentOS/RHEL's distributed kernel as the PCI passthrough is required for VMs. Your comments or suggestions are much appreciated. Kind regards, Zong-Pei _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Ellei edell? ole toisin mainittu: / Unless stated otherwise above: Oy IBM Finland Ab PL 265, 00101 Helsinki, Finland Business ID, Y-tunnus: 0195876-3 Registered in Finland_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From bbanister at jumptrading.com Fri May 11 17:25:06 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 11 May 2018 16:25:06 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out Message-ID: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> It's on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Paul.Sanchez at deshaw.com Fri May 11 18:11:12 2018 From: Paul.Sanchez at deshaw.com (Sanchez, Paul) Date: Fri, 11 May 2018 17:11:12 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> Message-ID: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> I'd normally be excited by this, since we do aggressively apply GPFS upgrades. But it's worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you're also in the habit of aggressively upgrading RedHat then you're going to have to wait for 5.0.1-1 before you can resume that practice. From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It's on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Fri May 11 18:56:49 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Fri, 11 May 2018 17:56:49 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> Message-ID: On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network corruption of file data that the client reads from or writes to the NSD server. For more information, see the nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. Finally! Thanks, IBM (seriously)? Kevin On May 11, 2018, at 12:11 PM, Sanchez, Paul > wrote: I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. 
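For those of us who do track RHEL aggressively, one stopgap until the supporting PTF ships is simply to hold the kernel packages back while still taking the rest of the errata - roughly along these lines (plain yum assumed, adjust for your config management):

    # one-off update that leaves the running kernel line alone
    yum update --exclude='kernel*'

    # or make the hold sticky until the PTF arrives
    echo 'exclude=kernel*' >> /etc/yum.conf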
From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It?s on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Fri May 11 19:34:30 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 11 May 2018 18:34:30 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum Message-ID: <30E7142C-3D77-4A97-834B-D54FFF06564B@nuance.com> Ah be careful! looking at the man page for mmchconfig ?nsdCksumTraditional: https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adm_mmchconfig.htm * Enabling this feature can result in significant I/O performance degradation and a considerable increase in CPU usage. Bob Oesterlin Sr Principal Storage Engineer, Nuance From: on behalf of "Buterbaugh, Kevin L" Reply-To: gpfsug main discussion list Date: Friday, May 11, 2018 at 1:29 PM To: gpfsug main discussion list Subject: [EXTERNAL] Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out On the other hand, we are very excited by this (from the README): File systems: Traditional NSD nodes and servers can use checksums NSD clients and servers that are configured with IBM Spectrum Scale can use checksums to verify data integrity and detect network corruption of file data that the client reads from or writes to the NSD server. For more information, see the nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. Finally! Thanks, IBM (seriously)? Kevin On May 11, 2018, at 12:11 PM, Sanchez, Paul > wrote: I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. 
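Given the warning above, this looks like something to trial on a test cluster and benchmark before touching production. A minimal sketch of switching it on and confirming the setting (attribute name as per the 5.0.1 README; whether you want it cluster-wide at all is a separate question):

    # check whether the option is currently set
    mmlsconfig nsdCksumTraditional

    # enable it - measure the I/O and CPU impact before rolling this out for real
    mmchconfig nsdCksumTraditional=yes
    mmlsconfig nsdCksumTraditional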
From: gpfsug-discuss-bounces at spectrumscale.org > On Behalf Of Bryan Banister Sent: Friday, May 11, 2018 12:25 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out It?s on fix central, https://www-945.ibm.com/support/fixcentral Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Fri May 11 20:02:30 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Fri, 11 May 2018 19:02:30 +0000 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum In-Reply-To: <30E7142C-3D77-4A97-834B-D54FFF06564B@nuance.com> Message-ID: >From some graphs I have seen the overhead varies a lot depending on the I/O size and if read or write and if random IO or not. So definitely YMMV. Remember too that ESS uses powerful processors in order to do the erasure coding and hence has performance to do checksums too. Traditionally ordinary NSD servers are merely ?routers? and as such are often using low spec cpus which may not be fast enough for the extra load? Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales + 44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 11 May 2018, at 19:34, Oesterlin, Robert wrote: > > Ah be careful! looking at the man page for mmchconfig ?nsdCksumTraditional: > > https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.1/com.ibm.spectrum.scale.v5r01.doc/bl1adm_mmchconfig.htm > > Enabling this feature can result in significant I/O performance degradation and a considerable increase in CPU usage. > > > Bob Oesterlin > Sr Principal Storage Engineer, Nuance > > > From: on behalf of "Buterbaugh, Kevin L" > Reply-To: gpfsug main discussion list > Date: Friday, May 11, 2018 at 1:29 PM > To: gpfsug main discussion list > Subject: [EXTERNAL] Re: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out > > On the other hand, we are very excited by this (from the README): > File systems: Traditional NSD nodes and servers can use checksums > > NSD clients and servers that are configured with IBM Spectrum Scale can use checksums > > to verify data integrity and detect network corruption of file data that the client > > reads from or writes to the NSD server. 
For more information, see the > > nsdCksumTraditional and nsdDumpBuffersOnCksumError attributes in the topic mmchconfig command. > > Finally! Thanks, IBM (seriously)? > > Kevin > > > On May 11, 2018, at 12:11 PM, Sanchez, Paul wrote: > > I?d normally be excited by this, since we do aggressively apply GPFS upgrades. But it?s worth noting that no released version of Scale works with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re also in the habit of aggressively upgrading RedHat then you?re going to have to wait for 5.0.1-1 before you can resume that practice. > > From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Bryan Banister > Sent: Friday, May 11, 2018 12:25 PM > To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) > Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out > > It?s on fix central, https://www-945.ibm.com/support/fixcentral > > Cheers, > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7Cfba17a5bf8c54359d5a308d5b7636fc4%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636616560077181684&sdata=ymNFnAFOsfzWoFLXWiQMgaHdUKn9sAC8WMv4%2FNjCP%2B0%3D&reserved=0 > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From valdis.kletnieks at vt.edu Fri May 11 20:35:40 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Fri, 11 May 2018 15:35:40 -0400 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out -NSD Checksum In-Reply-To: References: Message-ID: <112843.1526067340@turing-police.cc.vt.edu> On Fri, 11 May 2018 19:02:30 -0000, "Daniel Kidger" said: > Remember too that ESS uses powerful processors in order to do the erasure > coding and hence has performance to do checksums too. Traditionally ordinary > NSD servers are merely ???routers??? and as such are often using low spec cpus > which may not be fast enough for the extra load? More to the point - if you're at all clever, you can do the erasure encoding in such a way that a perfectly usable checksum just drops out the bottom free of charge, so no additional performance is needed to checksum stuff.... -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From jonathan at buzzard.me.uk Fri May 11 21:38:03 2018 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 11 May 2018 21:38:03 +0100 Subject: [gpfsug-discuss] FYI, Spectrum Scale 5.0.1 is out In-Reply-To: <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> References: <82e122b8989243dabf9c025335dad1d3@jumptrading.com> <8ee8f740061e4451a3f0d8d8351fa244@mbxtoa1.winmail.deshaw.com> Message-ID: <7a6eeed3-134f-620a-b49b-ed79ade90733@buzzard.me.uk> On 11/05/18 18:11, Sanchez, Paul wrote: > I?d normally be excited by this, since we do aggressively apply GPFS > upgrades.? But it?s worth noting that no released version of Scale works > with the latest RHEL7 kernel yet (anything >= 3.10.0-780). So if you?re > also in the habit of aggressively upgrading RedHat then you?re going to > have to wait for 5.0.1-1 before you can resume that practice. > You can upgrade to RHEL 7.5 and then just boot the last of the 7.4 kernels. I have done that in the past with early RHEL 5. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From goncalves.erika at gene.com Fri May 11 22:55:42 2018 From: goncalves.erika at gene.com (Erika Goncalves) Date: Fri, 11 May 2018 14:55:42 -0700 Subject: [gpfsug-discuss] CES NFS export In-Reply-To: References: Message-ID: I'm new on the Forum (hello to everyone!!) Quick question related to Chetan mail, How is the procedure when you have more than one domain? Make sure NFSv4 ID Mapping value matches on client and server. On server side (i.e. CES nodes); you can set as below: $ mmnfs config change IDMAPD_DOMAIN=test.com On client side (e.g. RHEL NFS client); one can set it using Domain attribute in /etc/idmapd.conf file. $ egrep ^Domain /etc/idmapd.conf Domain = test.com [root at rh73node2 2018_05_07-13:31:11 ~]$ $ service nfs-idmap restart It is possible to configure the IDMAPD_DOMAIN to support more than one? Thanks! -- *E**rika Goncalves* SSF Agile Operations Global IT Infrastructure & Solutions (GIS) Genentech - A member of the Roche Group +1 (650) 529 5458 goncalves.erika at gene.com *Confidentiality Note: *This message is intended only for the use of the named recipient(s) and may contain confidential and/or proprietary information. If you are not the intended recipient, please contact the sender and delete this message. Any unauthorized use of the information contained in this message is prohibited. On Mon, May 7, 2018 at 1:08 AM, Chetan R Kulkarni wrote: > Make sure NFSv4 ID Mapping value matches on client and server. > > On server side (i.e. CES nodes); you can set as below: > > $ mmnfs config change IDMAPD_DOMAIN=test.com > > On client side (e.g. RHEL NFS client); one can set it using Domain > attribute in /etc/idmapd.conf file. > > $ egrep ^Domain /etc/idmapd.conf > Domain = test.com > [root at rh73node2 2018_05_07-13:31:11 ~]$ > $ service nfs-idmap restart > > Please refer following link for the details: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0. > 0/com.ibm.spectrum.scale.v5r00.doc/b1ladm_authconsidfornfsv4access.htm > > Thanks, > Chetan. > > [image: Inactive hide details for "Yaron Daniel" ---05/07/2018 10:46:32 > AM---Hi If you want to use NFSv3 , define only NFSv3 on the exp]"Yaron > Daniel" ---05/07/2018 10:46:32 AM---Hi If you want to use NFSv3 , define > only NFSv3 on the export. 
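While waiting on the multi-domain question (I don't know of a documented way to give IDMAPD_DOMAIN more than one value), a quick way to compare what the CES nodes and a client currently agree on is roughly:

    # on a CES node - what Ganesha is using
    mmnfs config list | grep -i domain

    # on the NFS client
    egrep '^Domain' /etc/idmapd.conf
    nfsidmap -c     # clear cached id mappings after any change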
> > From: "Yaron Daniel" > To: gpfsug main discussion list > Date: 05/07/2018 10:46 AM > > Subject: Re: [gpfsug-discuss] CES NFS export > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hi > > If you want to use NFSv3 , define only NFSv3 on the export. > In case you work with NFSv4 - you should have "DOMAIN\user" all the way - > so this way you will not get any user mismatch errors, and see permissions > like nobody. > > > > Regards > ------------------------------ > > *Yaron Daniel* 94 Em Ha'Moshavot Rd > *Storage Architect* Petach Tiqva, 49527 > *IBM Global Markets, Systems HW Sales* Israel > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > *IBM Israel* > > [image: IBM Storage Strategy and Solutions v1][image: IBM Storage > Management and Data Protection v1] [image: Related image] > > > > From: Jagga Soorma > To: gpfsug-discuss at spectrumscale.org > Date: 05/07/2018 06:05 AM > Subject: Re: [gpfsug-discuss] CES NFS export > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Looks like this is due to nfs v4 and idmapd domain not being > configured correctly. I am going to test further and reach out if > more assistance is needed. > > Thanks! > > On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma wrote: > > Hi Guys, > > > > We are new to gpfs and have a few client that will be mounting gpfs > > via nfs. We have configured the exports but all user/group > > permissions are showing up as nobody. The gateway/protocol nodes can > > query the uid/gid's via centrify without any issues as well as the > > clients and the perms look good on a client that natively accesses the > > gpfs filesystem. Is there some specific config that we might be > > missing? 
> > > > -- > > # mmnfs export list --nfsdefs /gpfs/datafs1 > > Path Delegations Clients > > Access_Type Protocols Transports Squash Anonymous_uid > > Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids > > NFS_Commit > > ------------------------------------------------------------ > ------------------------------------------------------------ > ------------------------------------------------------------ > ----------------------- > > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > > ROOT_SQUASH -2 -2 SYS FALSE NONE > > TRUE FALSE > > /gpfs/datafs1 NONE {nodenames} RW 3,4 > > TCP NO_ROOT_SQUASH -2 -2 SYS FALSE > > NONE TRUE FALSE > > /gpfs/datafs1 NONE {nodenames} RW 3,4 TCP > > ROOT_SQUASH -2 -2 SYS FALSE > > NONE TRUE FALSE > > -- > > > > On the nfs clients I see this though: > > > > -- > > # ls -l > > total 0 > > drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1 > > drwxr-xr-x 4 nobody nobody 4096 Feb 9 17:57 dir2 > > -- > > > > Here is our mmnfs config: > > > > -- > > # mmnfs config list > > > > NFS Ganesha Configuration: > > ========================== > > NFS_PROTOCOLS: 3,4 > > NFS_PORT: 2049 > > MNT_PORT: 0 > > NLM_PORT: 0 > > RQUOTA_PORT: 0 > > NB_WORKER: 256 > > LEASE_LIFETIME: 60 > > DOMAINNAME: VIRTUAL1.COM > > DELEGATIONS: Disabled > > ========================== > > > > STATD Configuration > > ========================== > > STATD_PORT: 0 > > ========================== > > > > CacheInode Configuration > > ========================== > > ENTRIES_HWMARK: 1500000 > > ========================== > > > > Export Defaults > > ========================== > > ACCESS_TYPE: NONE > > PROTOCOLS: 3,4 > > TRANSPORTS: TCP > > ANONYMOUS_UID: -2 > > ANONYMOUS_GID: -2 > > SECTYPE: SYS > > PRIVILEGEDPORT: FALSE > > MANAGE_GIDS: TRUE > > SQUASH: ROOT_SQUASH > > NFS_COMMIT: FALSE > > ========================== > > > > Log Configuration > > ========================== > > LOG_LEVEL: EVENT > > ========================== > > > > Idmapd Configuration > > ========================== > > LOCAL-REALMS: LOCALDOMAIN > > DOMAIN: LOCALDOMAIN > > ========================== > > -- > > > > Thanks! > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > *http://gpfsug.org/mailman/listinfo/gpfsug-discuss* > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug. > org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_ > iaSHvJObTbx-siA1ZOg&r=uic-29lyJ5TCiTRi0FyznYhKJx5I7Vzu80WyYuZ4_iM&m= > 3k9qWcL7UfySpNVW2J8S1XsIekUHTHBBYQhN7cPVg3Q&s=844KFrfpsN6nT- > DKV6HdfS8EEejdwHuQxbNR8cX2cyc&e= > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15633834.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15884206.jpg Type: image/jpeg Size: 11294 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 15750750.gif Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15967392.gif Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15858665.gif Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 15657152.gif Type: image/gif Size: 4376 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Mon May 14 11:09:10 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Mon, 14 May 2018 10:09:10 +0000 Subject: [gpfsug-discuss] SMB quotas query Message-ID: Hi all, I want to run this past the group to see if I?m going mad or not. We do have an open PMR about the issue which is currently being escalated. We have 400 independent filesets all linked to a path in the filesystem. The root of that path is then exported via SMB, e.g.: Fileset1: /gpfs/rootsmb/fileset1 Fileset2: /gpfs/rootsmb/fileset2 The CES export is /gpfs/rootsmb and the name of the share is (for example) ?share?. All our filesets have block quotas applied to them with the hard and soft limit being the same. Customers then map drives to these filesets using the following path: \\ces-cluster\share\fileset1 \\ces-cluster\share\fileset2 ?fileset400 Some customers have one drive mapping only, others have two or more. For the customers that map two or more drives, the quota that Windows reports is identical for each fileset, and is usually for the last fileset that gets mapped. I do not believe this has always been the case: our customers have only recently (since the New Year at least) started complaining in the three+ years we?ve been running GPFS. In my test cluster I?ve tried rolling back to 4.2.3-2 which we were running last Summer and I can easily reproduce the problem. So a couple of questions: 1. Am I right to think that since GPFS is actually exposing the quota of a fileset over SMB then each fileset mapped as a drive in the manner above *should* each report the correct quota? 2. Does anyone else see the same behaviour? 3. There is suspicion this could be recent changes from a Microsoft Update and I?m not ruling that out just yet. Ok so that?s not a question ? I am worried that IBM may tell us we?re doing it wrong (humm) and to create individual exports for each fileset but this will quickly become tiresome! Thanks Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From z.han at imperial.ac.uk Mon May 14 11:33:07 2018 From: z.han at imperial.ac.uk (z.han at imperial.ac.uk) Date: Mon, 14 May 2018 11:33:07 +0100 (BST) Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Message-ID: Dear All, Any one has the same problem? /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? 
-ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); ^ ...... From jonathan.buzzard at strath.ac.uk Mon May 14 11:44:51 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 14 May 2018 11:44:51 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: Message-ID: <1526294691.17680.18.camel@strath.ac.uk> On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From spectrumscale at kiranghag.com Mon May 14 11:56:37 2018 From: spectrumscale at kiranghag.com (KG) Date: Mon, 14 May 2018 16:26:37 +0530 Subject: [gpfsug-discuss] pool-metadata_high_error Message-ID: Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rohwedder at de.ibm.com Mon May 14 12:18:55 2018 From: rohwedder at de.ibm.com (Markus Rohwedder) Date: Mon, 14 May 2018 13:18:55 +0200 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: Hello, the pool metadata high error reports issues with the free blocks in the metadataOnly and/or dataAndMetadata NSDs in the system pool. mmlspool and subsequently the GPFSPool sensor is the source of the information that is used be the threshold that reports this error. So please compare with mmlspool and mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1 Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " Mit freundlichen Gr??en / Kind regards Dr. Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 1A908817.gif Type: image/gif Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stockf at us.ibm.com Mon May 14 12:28:58 2018 From: stockf at us.ibm.com (Frederick Stock) Date: Mon, 14 May 2018 07:28:58 -0400 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: The difference in your inode information is presumably because the fileset you reference is an independent fileset and it has its own inode space distinct from the indoe space used for the "root" fileset (file system). 
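A quick way to see both halves of this - the per-fileset inode spaces and the metadata pool fullness that the GUI threshold is actually watching - is something like the following (gpfs0 is a placeholder for your file system):

    # max/allocated inodes are tracked per independent fileset
    mmlsfileset gpfs0 -L

    # metadata capacity and free space in the system pool, which is what
    # pool-metadata_high_error is derived from
    mmdf gpfs0
    mmlspool gpfs0 all -L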
Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com From: "Markus Rohwedder" To: gpfsug main discussion list Date: 05/14/2018 07:19 AM Subject: Re: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello, the pool metadata high error reports issues with the free blocks in the metadataOnly and/or dataAndMetadata NSDs in the system pool. mmlspool and subsequently the GPFSPool sensor is the source of the information that is used be the threshold that reports this error. So please compare with mmlspool and mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1 Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " Mit freundlichen Gr??en / Kind regards Dr. Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany KG ---14.05.2018 12:57:33---Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From arc at b4restore.com Mon May 14 12:10:18 2018 From: arc at b4restore.com (Andi Rhod Christiansen) Date: Mon, 14 May 2018 11:10:18 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: References: Message-ID: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Hi, Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 and latest support is 7.4. You have to revert back to 3.10.0-693 ? I just had the same issue Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. Best regards Andi R. Christiansen -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 12:33 Til: gpfsug main discussion list Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Dear All, Any one has the same problem? /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); ^ ...... From spectrumscale at kiranghag.com Mon May 14 12:35:47 2018 From: spectrumscale at kiranghag.com (KG) Date: Mon, 14 May 2018 17:05:47 +0530 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder wrote: > Once inodes are allocated I am not aware of a method to de-allocate them. > This is what the Knowledge Center says: > > *"Inodes are allocated when they are used. When a file is deleted, the > inode is reused, but inodes are never deallocated. When setting the maximum > number of inodes in a file system, there is the option to preallocate > inodes. However, in most cases there is no need to preallocate inodes > because, by default, inodes are allocated in sets as needed. 
If you do > decide to preallocate inodes, be careful not to preallocate more inodes > than will be used; otherwise, the allocated inodes will unnecessarily > consume metadata space that cannot be reclaimed. "* > > > I believe the Maximum number of inodes cannot be reduced but allocated number of inodes can be. Not sure why the GUI isnt allowing to reduce it. ? > > From: KG > To: gpfsug main discussion list > Date: 14.05.2018 12:57 > Subject: [gpfsug-discuss] pool-metadata_high_error > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > Hi Folks > > IHAC who is reporting pool-metadata_high_error on GUI. > > The inode utilisation on filesystem is as below > Used inodes - 92922895 > free inodes - 1684812529 > allocated - 1777735424 > max inodes - 1911363520 > > the inode utilization on one fileset (it is only one being used) is below > Used inodes - 93252664 > allocated - 1776624128 > max inodes 1876624064 > > is this because the difference in allocated and max inodes is very less? > > Customer tried reducing allocated inodes on fileset (between max and used > inode) and GUI complains that it is out of range. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 26124 bytes Desc: not available URL: From rohwedder at de.ibm.com Mon May 14 12:50:49 2018 From: rohwedder at de.ibm.com (Markus Rohwedder) Date: Mon, 14 May 2018 13:50:49 +0200 Subject: [gpfsug-discuss] pool-metadata_high_error In-Reply-To: References: Message-ID: Hi, The GUI behavior is correct. You can reduce the maximum number of inodes of an inode space, but not below the allocated inodes level. See below: # Setting inode levels to 300000 max/ 200000 preallocated [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:200000 Set maxInodes for inode space 0 to 300000 Fileset root changed. # The actually allocated values may be sloightly different: [root at cache-11 ~]# mmlsfileset gpfs0 -L Filesets in file system 'gpfs0': Name Id RootInode ParentId Created InodeSpace MaxInodes AllocInodes Comment root 0 3 -- Mon Feb 26 11:34:06 2018 0 300000 200032 root fileset # Lowering the allocated values is not allowed [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 300000:150000 The number of inodes to preallocate cannot be lower than the 200032 inodes already allocated. Input parameter value for inode limit out of range. mmchfileset: Command failed. Examine previous error messages to determine cause. # However, you can change the max inodes up to the allocated value [root at cache-11 ~]# mmchfileset gpfs0 root --inode-limit 200032:200032 Set maxInodes for inode space 0 to 200032 Fileset root changed. [root at cache-11 ~]# mmlsfileset gpfs0 -L Filesets in file system 'gpfs0': Name Id RootInode ParentId Created InodeSpace MaxInodes AllocInodes Comment root 0 3 -- Mon Feb 26 11:34:06 2018 0 200032 200032 root fileset Mit freundlichen Gr??en / Kind regards Dr. 
Markus Rohwedder Spectrum Scale GUI Development Phone: +49 7034 6430190 IBM Deutschland Research & Development E-Mail: rohwedder at de.ibm.com Am Weiher 24 65451 Kelsterbach Germany From: KG To: gpfsug main discussion list Date: 14.05.2018 13:37 Subject: Re: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, May 14, 2018 at 4:48 PM, Markus Rohwedder wrote: Once inodes are allocated I am not aware of a method to de-allocate them. This is what the Knowledge Center says: "Inodes are allocated when they are used. When a file is deleted, the inode is reused, but inodes are never deallocated. When setting the maximum number of inodes in a file system, there is the option to preallocate inodes. However, in most cases there is no need to preallocate inodes because, by default, inodes are allocated in sets as needed. If you do decide to preallocate inodes, be careful not to preallocate more inodes than will be used; otherwise, the allocated inodes will unnecessarily consume metadata space that cannot be reclaimed. " I believe the Maximum number of inodes cannot be reduced but allocated number of inodes can be. Not sure why the GUI isnt allowing to reduce it. ? From: KG To: gpfsug main discussion list Date: 14.05.2018 12:57 Subject: [gpfsug-discuss] pool-metadata_high_error Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Folks IHAC who is reporting pool-metadata_high_error on GUI. The inode utilisation on filesystem is as below Used inodes - 92922895 free inodes - 1684812529 allocated - 1777735424 max inodes - 1911363520 the inode utilization on one fileset (it is only one being used) is below Used inodes - 93252664 allocated - 1776624128 max inodes 1876624064 is this because the difference in allocated and max inodes is very less? Customer tried reducing allocated inodes on fileset (between max and used inode) and GUI complains that it is out of range. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 18426749.gif Type: image/gif Size: 4659 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 18361734.gif Type: image/gif Size: 26124 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Mon May 14 12:54:17 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Mon, 14 May 2018 11:54:17 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526294691.17680.18.camel@strath.ac.uk> References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: Thanks Jonathan. What I failed to mention in my OP was that MacOS clients DO report the correct size of each mounted folder. 
Not sure how that changes anything except to reinforce the idea that it's Windows at fault. Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 14 May 2018 11:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From z.han at imperial.ac.uk Mon May 14 12:59:25 2018 From: z.han at imperial.ac.uk (z.han at imperial.ac.uk) Date: Mon, 14 May 2018 12:59:25 +0100 (BST) Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Message-ID: Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? 
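One middle ground, as Jonathan suggested for 7.5 the other day, is to take the 7.5 userland errata but keep booting the last 693-series kernel that the GPL layer still builds against until the PTF is out - roughly (the index below is only an example, pick the right entry from the grubby output):

    # list the installed kernels and their boot entries
    grubby --info=ALL | egrep '^(index|title)'

    # make the last 3.10.0-693.* entry the default ("1" is just an example index)
    grub2-set-default 1
    grub2-editenv list    # confirm saved_entry, then reboot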
> > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From arc at b4restore.com Mon May 14 13:13:21 2018 From: arc at b4restore.com (Andi Rhod Christiansen) Date: Mon, 14 May 2018 12:13:21 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> Message-ID: <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" Best regards. -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 13:59 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... 
On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af > z.han at imperial.ac.uk > Sendt: 14. maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From jonathan.buzzard at strath.ac.uk Mon May 14 13:19:43 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Mon, 14 May 2018 13:19:43 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: <1526300383.17680.20.camel@strath.ac.uk> On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. 
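A minimal sketch of that dfree debugging approach, assuming a helper script at /usr/local/bin/dfree-trace.sh and a scratch log path (both names are assumptions for illustration). Samba passes the queried directory as the argument and expects total and available blocks (optionally a block size) back on stdout, so while debugging the script can simply record its inputs and fall back to plain df figures:

#!/bin/bash
# Hypothetical dfree helper: log what Samba hands us from each client type,
# then answer with ordinary df numbers (1K blocks) while the logs are compared.
LOG=/tmp/dfree-trace.log
echo "$(date '+%F %T') uid=$(id -u) cwd=$(pwd) args=$*" >> "$LOG"
df -k -P "${1:-.}" | awk 'NR==2 {print $2, $4}'

Wired in per share with a line like "dfree command = /usr/local/bin/dfree-trace.sh" in smb.conf; once the logs show what each client actually asks for, the df call can be swapped for one that reports the fileset quota instead.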
-- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From knop at us.ibm.com Mon May 14 14:30:41 2018 From: knop at us.ibm.com (Felipe Knop) Date: Mon, 14 May 2018 09:30:41 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: All, Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Andi Rhod Christiansen To: gpfsug main discussion list Date: 05/14/2018 08:15 AM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" Best regards. -----Oprindelig meddelelse----- Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk Sendt: 14. maj 2018 13:59 Til: gpfsug main discussion list Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh https://access.redhat.com/errata/RHSA-2018:1318 Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) Kernel: error in exception handling leads to DoS (CVE-2018-8897) Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) ... On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > Date: Mon, 14 May 2018 11:10:18 +0000 > From: Andi Rhod Christiansen > Reply-To: gpfsug main discussion list > > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Hi, > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > I just had the same issue > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > Best regards > Andi R. Christiansen > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af > z.han at imperial.ac.uk > Sendt: 14. 
maj 2018 12:33 > Til: gpfsug main discussion list > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 > > Dear All, > > Any one has the same problem? > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > exit 1;\ > fi > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > ^ ...... > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From bbanister at jumptrading.com Mon May 14 21:29:02 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 14 May 2018 20:29:02 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas Message-ID: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Hi all, I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? Can't find anything in man pages, thanks! -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
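One way to see whether an entry is still explicit or is tracking the fileset default (the distinction the replies below turn on) is the entryType column of a verbose quota report; the device and fileset names here are placeholders:

# "e" marks an explicitly set entry; entries governed by the defaults
# show up as "d_fset" / "default on" instead.
mmrepquota -v <device>:<fileset> --block-size G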
-------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Mon May 14 22:26:44 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Tue, 15 May 2018 00:26:44 +0300 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526300383.17680.20.camel@strath.ac.uk> References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi What is the output of mmlsfs - does you have --filesetdf enabled ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jonathan Buzzard To: gpfsug main discussion list Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From peserocka at gmail.com Mon May 14 22:51:36 2018 From: peserocka at gmail.com (Peter Serocka) Date: Mon, 14 May 2018 23:51:36 +0200 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Message-ID: <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From kywang at us.ibm.com Mon May 14 23:12:48 2018 From: kywang at us.ibm.com (Kuei-Yu Wang-Knop) Date: Mon, 14 May 2018 18:12:48 -0400 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> Message-ID: Try disabling and re-enabling default quotas withthe -d option for that fileset. mmdefquotaon command Activates default quota limit usage. Synopsis mmdefquotaon [?u] [?g] [?j] [?v] [?d] {Device [Device... ] | ?a} or mmdefquotaon [?u] [?g] [?v] [?d] {Device:Fileset ... | ?a} ... ?d Assigns default quota limits to existing users, groups, or filesets when the mmdefedquota command is issued. When ??perfileset?quota is not in effect for the file system, this option will only affect existing users, groups, or filesets with no established quota limits. When ??perfileset?quota is in effect for the file system, this option will affect existing users, groups, or filesets with no established quota limits, and it will also change existing users or groups that refer to default quotas at the file system level into users or groups that refer to fileset?level default quota. For more information about default quota priorities, see the following IBM Spectrum Scale: Administration and Programming Reference topic: Default quotas. 
If this option is not chosen, existing quota entries remain in effect and are not governed by the default quota rules. Kuei-Yu Wang-Knop IBM Scalable I/O development From: Bryan Banister To: "gpfsug main discussion list (gpfsug-discuss at spectrumscale.org)" Date: 05/14/2018 04:29 PM Subject: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi all, I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? Can?t find anything in man pages, thanks! -Bryan Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From christof.schmitt at us.ibm.com Mon May 14 23:17:45 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Mon, 14 May 2018 22:17:45 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: , <1526294691.17680.18.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Tue May 15 06:59:38 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Tue, 15 May 2018 05:59:38 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> Message-ID: <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
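For anyone hitting the same traceback, a quick check that the interpreter is really picking up the IBM-shipped pyOpenSSL rather than the Red Hat one might look like this (diagnostic only; the module path mirrors the traceback above, and removing or downgrading the package is best left to the PMR):

# Which OpenSSL module does python import, and which RPM owns it?
python -c 'import OpenSSL; print(OpenSSL.__file__); print(OpenSSL.__version__)'
rpm -qf /usr/lib/python2.7/site-packages/OpenSSL/SSL.py
# Who depends on it (should match the failed-dependencies list above)?
rpm -q --whatrequires pyOpenSSL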
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Tue May 15 08:10:32 2018 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Tue, 15 May 2018 09:10:32 +0200 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Message-ID: An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Tue May 15 09:10:21 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Tue, 15 May 2018 08:10:21 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi Yaron It's currently set to no. Thanks Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Yaron Daniel Sent: 14 May 2018 22:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Hi What is the output of mmlsfs - does you have --filesetdfenabled ? Regards ________________________________ Yaron Daniel 94 Em Ha'Moshavot Rd [cid:image001.gif at 01D3EC2C.8ACE5310] Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel [IBM Storage Strategy and Solutions v1][IBM Storage Management and Data Protection v1][cid:image004.gif at 01D3EC2C.8ACE5310][cid:image005.gif at 01D3EC2C.8ACE5310] [Related image] From: Jonathan Buzzard > To: gpfsug main discussion list > Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. 
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 1851 bytes Desc: image001.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.gif Type: image/gif Size: 4376 bytes Desc: image002.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.gif Type: image/gif Size: 5093 bytes Desc: image003.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.gif Type: image/gif Size: 4746 bytes Desc: image004.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.gif Type: image/gif Size: 4557 bytes Desc: image005.gif URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 11294 bytes Desc: image006.jpg URL: From YARD at il.ibm.com Tue May 15 11:10:45 2018 From: YARD at il.ibm.com (Yaron Daniel) Date: Tue, 15 May 2018 13:10:45 +0300 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: Hi So - u want to get quota report per fileset quota - right ? We use this param when we want to monitor the NFS exports with df , i think this should also affect the SMB filesets. Can u try to enable it and see if it works ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: "Sobey, Richard A" To: gpfsug main discussion list Date: 05/15/2018 11:11 AM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Yaron It?s currently set to no. Thanks Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Yaron Daniel Sent: 14 May 2018 22:27 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Hi What is the output of mmlsfs - does you have --filesetdfenabled ? Regards Yaron Daniel 94 Em Ha'Moshavot Rd Storage Architect Petach Tiqva, 49527 IBM Global Markets, Systems HW Sales Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel From: Jonathan Buzzard To: gpfsug main discussion list Date: 05/14/2018 03:22 PM Subject: Re: [gpfsug-discuss] SMB quotas query Sent by: gpfsug-discuss-bounces at spectrumscale.org On Mon, 2018-05-14 at 11:54 +0000, Sobey, Richard A wrote: > Thanks Jonathan. What I failed to mention in my OP was that MacOS > clients DO report the correct size of each mounted folder. Not sure > how that changes anything except to reinforce the idea that it's > Windows at fault. > In which case I would try using the dfree option in the smb.conf and then having it call a shell script that wrote it's inputs to a log file and see if there are any differences between macOS and Windows. If they are the same you could fall back to my old hack and investigate what the changes where to vfs_gpfs. 
If they are different then the assumptions that vfs_gpfs is making are obviously incorrect. Finally you should test it against an actual Windows server. From memory if you have a quota it reports the quota size as the disk size. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4376 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 5093 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4746 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 4557 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 11294 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Tue May 15 11:23:49 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 15 May 2018 11:23:49 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: <1526379829.17680.27.camel@strath.ac.uk> On Tue, 2018-05-15 at 13:10 +0300, Yaron Daniel wrote: > Hi > > So - u want to get quota report per fileset quota - right ? > We use this param when we want to monitor the NFS exports with df , i > think this should also affect the SMB filesets. > > Can u try to enable it and see if it works ? > It is irrelevant to Samba, this is or should be handled in vfs_gpfs as Christof said earlier. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. 
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jonathan.buzzard at strath.ac.uk Tue May 15 11:28:00 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 15 May 2018 11:28:00 +0100 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: <1526380080.17680.29.camel@strath.ac.uk> On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote: > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are > needed in Scale to support this kernel level, upgrading to one of > those upcoming PTFs will be required in order to run with that > kernel. > One wonders what the mmfs26/mmfslinux does that you can't achieve with fuse these days? Sure I understand back in the day fuse didn't exist and it could be a significant rewrite of code to use fuse instead. On the plus side though it would make all these sorts of security issues, can't upgrade your distro etc. disappear. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From valdis.kletnieks at vt.edu Tue May 15 13:51:07 2018 From: valdis.kletnieks at vt.edu (valdis.kletnieks at vt.edu) Date: Tue, 15 May 2018 08:51:07 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <1526380080.17680.29.camel@strath.ac.uk> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <1526380080.17680.29.camel@strath.ac.uk> Message-ID: <201401.1526388667@turing-police.cc.vt.edu> On Tue, 15 May 2018 11:28:00 +0100, Jonathan Buzzard said: > One wonders what the mmfs26/mmfslinux does that you can't achieve with > fuse these days? Handling each disk I/O request without several transitions to/from userspace comes to mind... -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 486 bytes Desc: not available URL: From ulmer at ulmer.org Tue May 15 16:09:01 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 10:09:01 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <1526380080.17680.29.camel@strath.ac.uk> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <1526380080.17680.29.camel@strath.ac.uk> Message-ID: <26DF1F4F-BC66-40C8-89F1-3A64E94CE5B4@ulmer.org> > On May 15, 2018, at 5:28 AM, Jonathan Buzzard wrote: > > On Mon, 2018-05-14 at 09:30 -0400, Felipe Knop wrote: >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is >> planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are >> needed in Scale to support this kernel level, upgrading to one of >> those upcoming PTFs will be required in order to run with that >> kernel. >> > > One wonders what the mmfs26/mmfslinux does that you can't achieve with > fuse these days? Sure I understand back in the day fuse didn't exist > and it could be a significant rewrite of code to use fuse instead. 
On > the plus side though it would make all these sorts of security issues, > can't upgrade your distro etc. disappear. > > JAB. > > -- > Jonathan A. Buzzard Tel: +44141-5483420 > HPC System Administrator, ARCHIE-WeSt. > University of Strathclyde, John Anderson Building, Glasgow. G4 0NG > > More lines of code. More code is bad. :) Liberty, -- Stephen From bbanister at jumptrading.com Tue May 15 16:35:51 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 15:35:51 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> Message-ID: <723293fee7214938ae20cdfdbaf99149@jumptrading.com> That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 15 16:59:56 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 15:59:56 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <723293fee7214938ae20cdfdbaf99149@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> Message-ID: <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! 
# mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. 
Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Tue May 15 16:13:15 2018 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Tue, 15 May 2018 15:13:15 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> Message-ID: <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> I know these dates can move, but any vague idea of a timeframe target for release (this quarter, next quarter, etc.)? Thanks! -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' > On May 14, 2018, at 9:30 AM, Felipe Knop wrote: > > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that > > From: Andi Rhod Christiansen > To: gpfsug main discussion list > Date: 05/14/2018 08:15 AM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > You are welcome. 
> > I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. > > they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" > > Best regards. > > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 13:59 > Til: gpfsug main discussion list > Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... sigh > > > https://access.redhat.com/errata/RHSA-2018:1318 > > Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) > > Kernel: error in exception handling leads to DoS (CVE-2018-8897) > Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) > > kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) > > ... > > > On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > > Date: Mon, 14 May 2018 11:10:18 +0000 > > From: Andi Rhod Christiansen > > Reply-To: gpfsug main discussion list > > > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Hi, > > > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > > > I just had the same issue > > > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > > > > Best regards > > Andi R. Christiansen > > > > -----Oprindelig meddelelse----- > > Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af > > z.han at imperial.ac.uk > > Sendt: 14. maj 2018 12:33 > > Til: gpfsug main discussion list > > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Dear All, > > > > Any one has the same problem? > > > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > > exit 1;\ > > fi > > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? 
has no member named ?i_wb_list? > > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP->i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > > ^ ...... > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 195 bytes Desc: Message signed with OpenPGP URL: From bbanister at jumptrading.com Tue May 15 19:04:40 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 18:04:40 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Message-ID: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> I am now trying to get our system automation to play with the new Spectrum Scale Protocols 5.0.1-0 release and have found that the nfs-ganesha.service can no longer start: # systemctl status nfs-ganesha ? nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2018-05-15 12:43:23 CDT; 8s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 8398 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=203/EXEC) May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[8398]: Failed at step EXEC spawning /usr/bin/ganesha.nfsd: No such file or directory May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service: control process exited, code=exited status=203 May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Failed to start NFS-Ganesha file server. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Unit nfs-ganesha.service entered failed state. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service failed. 
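The "Failed at step EXEC ... No such file or directory" message simply means systemd could not find the binary named in ExecStart (that is what exit status 203/EXEC indicates). Once the right daemon binary is known, a systemd drop-in override is a cleaner way to repoint the service than editing the packaged unit file or symlinking binaries, because the drop-in survives package updates. This is only a rough, untested sketch: it assumes the packaged unit is /usr/lib/systemd/system/nfs-ganesha.service (as shown in the status output above) and that the gpfs.ganesha.nfsd binary turned up below is the intended daemon.

# mkdir -p /etc/systemd/system/nfs-ganesha.service.d
# cat > /etc/systemd/system/nfs-ganesha.service.d/override.conf <<'EOF'
[Service]
# Clear the packaged ExecStart, then point it at the binary that is actually installed
ExecStart=
ExecStart=/usr/bin/gpfs.ganesha.nfsd $OPTIONS
EOF
# systemctl daemon-reload
# systemctl restart nfs-ganesha.service

Whether CES itself expects to manage this service (and would overwrite such an override) is a separate question.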
Sure enough, it?s not there anymore: # ls /usr/bin/*ganesha* /usr/bin/ganesha_conf /usr/bin/ganesha_mgr /usr/bin/ganesha_stats /usr/bin/gpfs.ganesha.nfsd /usr/bin/sm_notify.ganesha So I wondered what does provide it: # yum whatprovides /usr/bin/ganesha.nfsd Loaded plugins: etckeeper, priorities 2490 packages excluded due to repository priority protections [snip] nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 : NFS-Ganesha is a NFS Server running in user space Repo : @rhel7-universal-linux-production Matched from: Filename : /usr/bin/ganesha.nfsd Confirmed again just for sanity sake: # rpm -ql nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" /usr/bin/ganesha.nfsd But it?s not in the latest release: # rpm -ql nfs-ganesha-2.5.3-ibm020.00.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" # I also looked in every RPM package that was provided in the Spectrum Scale 5.0.1-0 download. So should it be provided? Or should the service really try to start `/usr/bin/gpfs.ganesha.nfsd`?? Or should there be a symlink between the two??? Is this something the magical Spectrum Scale Install Toolkit would do under the covers???? Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Tue May 15 19:08:08 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Tue, 15 May 2018 18:08:08 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> Message-ID: <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> BTW, I just tried the symlink option and it seems to work: # ln -s gpfs.ganesha.nfsd ganesha.nfsd # ls -ld ganesha.nfsd lrwxrwxrwx 1 root root 17 May 15 13:05 ganesha.nfsd -> gpfs.ganesha.nfsd # # systemctl restart nfs-ganesha.service # systemctl status nfs-ganesha.service ? 
nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: active (running) since Tue 2018-05-15 13:06:10 CDT; 5s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 62888 ExecStop=/bin/dbus-send --system --dest=org.ganesha.nfsd --type=method_call /org/ganesha/nfsd/admin org.ganesha.nfsd.admin.shutdown (code=exited, status=0/SUCCESS) Process: 63091 ExecStartPost=/bin/bash -c prlimit --pid $MAINPID --nofile=$NOFILE:$NOFILE (code=exited, status=0/SUCCESS) Process: 63089 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=0/SUCCESS) Main PID: 63090 (ganesha.nfsd) Memory: 6.1M CGroup: /system.slice/nfs-ganesha.service ??63090 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT May 15 13:06:10 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 13:06:10 fpia-gpfs-testing-cnfs01 systemd[1]: Started NFS-Ganesha file server. [root at fpia-gpfs-testing-cnfs01 bin]# Cheers, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 1:05 PM To: gpfsug main discussion list (gpfsug-discuss at spectrumscale.org) Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ I am now trying to get our system automation to play with the new Spectrum Scale Protocols 5.0.1-0 release and have found that the nfs-ganesha.service can no longer start: # systemctl status nfs-ganesha ? nfs-ganesha.service - NFS-Ganesha file server Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2018-05-15 12:43:23 CDT; 8s ago Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki Process: 8398 ExecStart=/usr/bin/ganesha.nfsd $OPTIONS (code=exited, status=203/EXEC) May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Starting NFS-Ganesha file server... May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[8398]: Failed at step EXEC spawning /usr/bin/ganesha.nfsd: No such file or directory May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service: control process exited, code=exited status=203 May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Failed to start NFS-Ganesha file server. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: Unit nfs-ganesha.service entered failed state. May 15 12:43:23 fpia-gpfs-testing-cnfs01 systemd[1]: nfs-ganesha.service failed. Sure enough, it?s not there anymore: # ls /usr/bin/*ganesha* /usr/bin/ganesha_conf /usr/bin/ganesha_mgr /usr/bin/ganesha_stats /usr/bin/gpfs.ganesha.nfsd /usr/bin/sm_notify.ganesha So I wondered what does provide it: # yum whatprovides /usr/bin/ganesha.nfsd Loaded plugins: etckeeper, priorities 2490 packages excluded due to repository priority protections [snip] nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 : NFS-Ganesha is a NFS Server running in user space Repo : @rhel7-universal-linux-production Matched from: Filename : /usr/bin/ganesha.nfsd Confirmed again just for sanity sake: # rpm -ql nfs-ganesha-2.3.2-0.ibm55.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" /usr/bin/ganesha.nfsd But it?s not in the latest release: # rpm -ql nfs-ganesha-2.5.3-ibm020.00.el7.x86_64 | grep "/usr/bin/ganesha.nfsd" # I also looked in every RPM package that was provided in the Spectrum Scale 5.0.1-0 download. So should it be provided? 
Or should the service really try to start `/usr/bin/gpfs.ganesha.nfsd`?? Or should there be a symlink between the two??? Is this something the magical Spectrum Scale Install Toolkit would do under the covers???? Cheers, -Bryan ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue May 15 19:31:13 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 15 May 2018 19:31:13 +0100 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com> <6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> Message-ID: <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From christof.schmitt at us.ibm.com Tue May 15 19:49:44 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 15 May 2018 18:49:44 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526379829.17680.27.camel@strath.ac.uk> References: <1526379829.17680.27.camel@strath.ac.uk>, <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... 
URL: From knop at us.ibm.com Tue May 15 20:02:53 2018 From: knop at us.ibm.com (Felipe Knop) Date: Tue, 15 May 2018 15:02:53 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: All, Validation of RHEL 7.5 on Scale is currently under way, and we are currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which will include the corresponding fix. Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: Ryan Novosielski To: gpfsug main discussion list Date: 05/15/2018 12:56 PM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org I know these dates can move, but any vague idea of a timeframe target for release (this quarter, next quarter, etc.)? Thanks! -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' > On May 14, 2018, at 9:30 AM, Felipe Knop wrote: > > All, > > Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed in Scale to support this kernel level, upgrading to one of those upcoming PTFs will be required in order to run with that kernel. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are welcome. I see your concern but as long as IBM has not released spectrum scale for 7.5 that > > From: Andi Rhod Christiansen > To: gpfsug main discussion list > Date: 05/14/2018 08:15 AM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > You are welcome. > > I see your concern but as long as IBM has not released spectrum scale for 7.5 that is their only solution, in regards to them caring about security I would say yes they do care, but from their point of view either they tell the customer to upgrade as soon as red hat releases new versions and forcing the customer to be down until they have a new release or they tell them to stay on supported level to a new release is ready. > > they should release a version supporting the new kernel soon, IBM told me when I asked that they are "currently testing and have a support date soon" > > Best regards. > > > -----Oprindelig meddelelse----- > Fra: gpfsug-discuss-bounces at spectrumscale.org P? vegne af z.han at imperial.ac.uk > Sendt: 14. maj 2018 13:59 > Til: gpfsug main discussion list > Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel 3.10.0-862.2.3.el7 > > Thanks. Does IBM care about security, one would ask? In this case I'd choose to use the new kernel for my virtualization over gpfs ... 
sigh > > > https://access.redhat.com/errata/RHSA-2018:1318 > > Kernel: KVM: error in exception handling leads to wrong debug stack value (CVE-2018-1087) > > Kernel: error in exception handling leads to DoS (CVE-2018-8897) > Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation (CVE-2017-16939) > > kernel: Out-of-bounds write via userland offsets in ebt_entry struct in netfilter/ebtables.c (CVE-2018-1068) > > ... > > > On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > > Date: Mon, 14 May 2018 11:10:18 +0000 > > From: Andi Rhod Christiansen > > Reply-To: gpfsug main discussion list > > > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Hi, > > > > Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > > and latest support is 7.4. You have to revert back to 3.10.0-693 ? > > > > I just had the same issue > > > > Revert to previous working kernel at redhat 7.4 release which is 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this level. > > > > > > Best regards > > Andi R. Christiansen > > > > -----Oprindelig meddelelse----- > > Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af > > z.han at imperial.ac.uk > > Sendt: 14. maj 2018 12:33 > > Til: gpfsug main discussion list > > Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > > > > Dear All, > > > > Any one has the same problem? > > > > /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ > > exit 1;\ > > fi > > make[2]: Entering directory `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > > LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > > LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > > CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > > In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > > from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > > /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > > /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has no member named ?i_wb_list? > > _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > > ^ ...... 
> > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From stijn.deweirdt at ugent.be Tue May 15 20:25:31 2018 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Tue, 15 May 2018 21:25:31 +0200 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > To: gpfsug main discussion list > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. 
>> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen >> To: gpfsug main discussion list >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. >> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen >>> Reply-To: gpfsug main discussion list >>> >>> To: gpfsug main discussion list >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. maj 2018 12:33 >>> Til: gpfsug main discussion list >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? 
-ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From abeattie at au1.ibm.com Tue May 15 22:45:47 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 15 May 2018 21:45:47 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: , <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com><4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 15 23:00:48 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 18:00:48 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks Message-ID: Hello All, Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? I understand that i will not need a redundant SMB server configuration. I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. 
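For reference, on a plain (non-CES) Samba server this behaviour is governed by the "follow symlinks" and "wide links" share options: "wide links" is what lets a symlink resolve to a target outside the exported tree, and current Samba releases silently ignore it unless "unix extensions" is disabled (or "allow insecure wide links" is set). A minimal, untested smb.conf sketch, where the share name and the /gpfs/fs1 mount point are made-up examples:

[global]
   security = user
   # "wide links" is forced off while unix extensions are enabled
   unix extensions = no

[projects]
   path = /gpfs/fs1/projects
   read only = no
   # follow symlinks inside the share, and let them point outside it,
   # e.g. at directories on the old NFS file system
   follow symlinks = yes
   wide links = yes

Whether re-exporting a GPFS mount from a client node this way is acceptable from a licensing and support point of view is a separate question.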
Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Buterbaugh at Vanderbilt.Edu Tue May 15 22:57:12 2018 From: Kevin.Buterbaugh at Vanderbilt.Edu (Buterbaugh, Kevin L) Date: Tue, 15 May 2018 21:57:12 +0000 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: All, I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? Discuss. Thanks! Kevin On May 15, 2018, at 4:45 PM, Andrew Beattie > wrote: this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux that they "just can't move off" Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: Stijn De Weirdt > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Date: Wed, May 16, 2018 5:35 AM so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > > To: gpfsug main discussion list > > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop > wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. >> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen > >> To: gpfsug main discussion list > >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. >> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen > >>> Reply-To: gpfsug main discussion list >>> > >>> To: gpfsug main discussion list > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> > P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. 
maj 2018 12:33 >>> Til: gpfsug main discussion list > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? -ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From leslie.james.elliott at gmail.com Tue May 15 23:18:45 2018 From: leslie.james.elliott at gmail.com (leslie elliott) Date: Wed, 16 May 2018 08:18:45 +1000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: you might want to read the license details of gpfs before you try do this :) pretty sure you need a server license to re-export the files from a GPFS mount On 16 May 2018 at 08:00, wrote: > Hello All, > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on > GPFS client? Is it supported and does it lead to any issues? > I understand that i will not need a redundant SMB server configuration. > > I could use CES, but CES does not support follow-symlinks outside > respective SMB export. Follow-symlinks is a however a hard-requirement for > to follow links outside GPFS filesystems. > > Thanks, > Lohit > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Tue May 15 23:32:02 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 15 May 2018 22:32:02 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 15 23:46:18 2018 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 15 May 2018 18:46:18 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com><83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com><4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: Kevin, that seems to be a good point. IF you have dedicated hardware to acting only as a storage and/or file server, THEN neither meltdown nor spectre should not be a worry. BECAUSE meltdown and spectre are just about an adversarial process spying on another process or kernel memory. IF we're not letting any potential adversary run her code on our file server, what's the exposure? NOW, let the security experts tell us where the flaw is in this argument... From: "Buterbaugh, Kevin L" To: gpfsug main discussion list Date: 05/15/2018 06:12 PM Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Sent by: gpfsug-discuss-bounces at spectrumscale.org All, I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? Discuss. Thanks! 
Kevin On May 15, 2018, at 4:45 PM, Andrew Beattie wrote: this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux that they "just can't move off" Andrew Beattie Software Defined Storage - IT Specialist Phone: 614-2133-7927 E-mail: abeattie at au1.ibm.com ----- Original message ----- From: Stijn De Weirdt Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 Date: Wed, May 16, 2018 5:35 AM so this means running out-of-date kernels for at least another month? oh boy... i hope this is not some new trend in gpfs support. othwerwise all RHEL based sites will have to start adding EUS as default cost to run gpfs with basic security compliance. stijn On 05/15/2018 09:02 PM, Felipe Knop wrote: > All, > > Validation of RHEL 7.5 on Scale is currently under way, and we are > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > will include the corresponding fix. > > Regards, > > Felipe > > ---- > Felipe Knop knop at us.ibm.com > GPFS Development and Security > IBM Systems > IBM Building 008 > 2455 South Rd, Poughkeepsie, NY 12601 > (845) 433-9314 T/L 293-9314 > > > > > > From: Ryan Novosielski > To: gpfsug main discussion list > Date: 05/15/2018 12:56 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > I know these dates can move, but any vague idea of a timeframe target for > release (this quarter, next quarter, etc.)? > > Thanks! > > -- > ____ > || \\UTGERS, > |---------------------------*O*--------------------------- > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > || \\ of NJ | Office of Advanced Research Computing - MSB > C630, Newark > `' > >> On May 14, 2018, at 9:30 AM, Felipe Knop wrote: >> >> All, >> >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > in Scale to support this kernel level, upgrading to one of those upcoming > PTFs will be required in order to run with that kernel. >> >> Regards, >> >> Felipe >> >> ---- >> Felipe Knop knop at us.ibm.com >> GPFS Development and Security >> IBM Systems >> IBM Building 008 >> 2455 South Rd, Poughkeepsie, NY 12601 >> (845) 433-9314 T/L 293-9314 >> >> >> >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > welcome. I see your concern but as long as IBM has not released spectrum > scale for 7.5 that >> >> From: Andi Rhod Christiansen >> To: gpfsug main discussion list >> Date: 05/14/2018 08:15 AM >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> >> >> >> You are welcome. >> >> I see your concern but as long as IBM has not released spectrum scale for > 7.5 that is their only solution, in regards to them caring about security I > would say yes they do care, but from their point of view either they tell > the customer to upgrade as soon as red hat releases new versions and > forcing the customer to be down until they have a new release or they tell > them to stay on supported level to a new release is ready. 
>> >> they should release a version supporting the new kernel soon, IBM told me > when I asked that they are "currently testing and have a support date soon" >> >> Best regards. >> >> >> -----Oprindelig meddelelse----- >> Fra: gpfsug-discuss-bounces at spectrumscale.org > P? vegne af z.han at imperial.ac.uk >> Sendt: 14. maj 2018 13:59 >> Til: gpfsug main discussion list >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > 3.10.0-862.2.3.el7 >> >> Thanks. Does IBM care about security, one would ask? In this case I'd > choose to use the new kernel for my virtualization over gpfs ... sigh >> >> >> https://access.redhat.com/errata/RHSA-2018:1318 >> >> Kernel: KVM: error in exception handling leads to wrong debug stack value > (CVE-2018-1087) >> >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > escalation (CVE-2017-16939) >> >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > netfilter/ebtables.c (CVE-2018-1068) >> >> ... >> >> >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: >>> Date: Mon, 14 May 2018 11:10:18 +0000 >>> From: Andi Rhod Christiansen >>> Reply-To: gpfsug main discussion list >>> >>> To: gpfsug main discussion list >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Hi, >>> >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? >>> >>> I just had the same issue >>> >>> Revert to previous working kernel at redhat 7.4 release which is > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > level. >>> >>> >>> Best regards >>> Andi R. Christiansen >>> >>> -----Oprindelig meddelelse----- >>> Fra: gpfsug-discuss-bounces at spectrumscale.org >>> P? vegne af >>> z.han at imperial.ac.uk >>> Sendt: 14. maj 2018 12:33 >>> Til: gpfsug main discussion list >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel >>> 3.10.0-862.2.3.el7 >>> >>> Dear All, >>> >>> Any one has the same problem? >>> >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > [ $? -ne 0 ]; then \ >>> exit 1;\ >>> fi >>> make[2]: Entering directory > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > no member named ?i_wb_list? >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); >>> ^ ...... 
>>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 00:48:40 2018 From: valleru at cbio.mskcc.org (Lohit Valleru) Date: Tue, 15 May 2018 19:48:40 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: <7aef4353-058f-4741-9760-319bcd037213@Spark> Thanks Christof. The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. Now we are migrating most of the data to GPFS keeping the symlinks as they are. Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? Regards, Lohit On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. 
You can always open a RFE and ask that we support this option in a future release. > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > Regards, > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > ----- Original message ----- > > From: valleru at cbio.mskcc.org > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > To: gpfsug main discussion list > > Cc: > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > Date: Tue, May 15, 2018 3:04 PM > > > > Hello All, > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > I understand that i will not need a redundant SMB server configuration. > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > Thanks, > > Lohit > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.s.knister at nasa.gov Wed May 16 02:03:36 2018 From: aaron.s.knister at nasa.gov (Aaron Knister) Date: Tue, 15 May 2018 21:03:36 -0400 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: The one thing that comes to mind is if you're able to affect some unprivileged process on the NSD servers. Let's say there's a daemon that listens on a port but runs as an unprivileged user in which a vulnerability appears (lets say a 0-day remote code execution bug). One might be tempted to ignore that vulnerability for one reason or another but you couple that with something like meltdown/spectre and in *theory* you could do something like sniff ssh key material and get yourself on the box. In principle I agree with your argument but I've find that when one accepts and justifies a particular risk it can become easy to remember which vulnerability risks you've accepted and end up more exposed than one may realize. Still, the above scenario is low risk (but potentially very high impact), though :) -Aaron On 5/15/18 6:46 PM, Marc A Kaplan wrote: > Kevin, that seems to be a good point. > > IF you have dedicated hardware to acting only as a storage and/or file > server, THEN neither meltdown nor spectre should not be a worry. > > BECAUSE meltdown and spectre are just about an adversarial process > spying on another process or kernel memory. ?IF we're not letting any > potential adversary run her code on our file server, what's the exposure? > > NOW, let the security experts tell us where the flaw is in this argument... 
> > > > From: "Buterbaugh, Kevin L" > To: gpfsug main discussion list > Date: 05/15/2018 06:12 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working > ?withkernel ? ? ? ?3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------------------------------------------------ > > > > All, > > I have to kind of agree with Andrew ? it seems that there is a broad > range of takes on kernel upgrades ? everything from ?install the latest > kernel the day it comes out? to ?stick with this kernel, we know it works.? > > Related to that, let me throw out this question ? what about those who > haven?t upgraded their kernel in a while at least because they?re > concerned with the negative performance impacts of the meltdown / > spectre patches??? ?So let?s just say a customer has upgraded the > non-GPFS servers in their cluster, but they?ve left their NSD servers > unpatched (I?m talking about the kernel only here; all other updates are > applied) due to the aforementioned performance concerns ? as long as > they restrict access (i.e. who can log in) and use appropriate > host-based firewall rules, is their some risk that they should be aware of? > > Discuss. ?Thanks! > > Kevin > > On May 15, 2018, at 4:45 PM, Andrew Beattie <_abeattie at au1.ibm.com_ > > wrote: > > this thread is mildly amusing, given we regularly get customers asking > why we are dropping support for versions of linux > that they "just can't move off" > > > *Andrew Beattie* > *Software Defined Storage ?- IT Specialist* > *Phone: *614-2133-7927 > *E-mail: *_abeattie at au1.ibm.com_ > > > ----- Original message ----- > From: Stijn De Weirdt <_stijn.deweirdt at ugent.be_ > > > Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > To: _gpfsug-discuss at spectrumscale.org_ > > Cc: > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > 3.10.0-862.2.3.el7 > Date: Wed, May 16, 2018 5:35 AM > > so this means running out-of-date kernels for at least another month? oh > boy... > > i hope this is not some new trend in gpfs support. othwerwise all RHEL > based sites will have to start adding EUS as default cost to run gpfs > with basic security compliance. > > stijn > > > On 05/15/2018 09:02 PM, Felipe Knop wrote: > > All, > > > > Validation of RHEL 7.5 on Scale is currently under way, and we are > > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > > will include the corresponding fix. > > > > Regards, > > > > ? Felipe > > > > ---- > > Felipe Knop _knop at us.ibm.com_ > > GPFS Development and Security > > IBM Systems > > IBM Building 008 > > 2455 South Rd, Poughkeepsie, NY 12601 > > (845) 433-9314 ?T/L 293-9314 > > > > > > > > > > > > From: Ryan Novosielski <_novosirj at rutgers.edu_ > > > > To: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > > Date: 05/15/2018 12:56 PM > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > > ? ? ? ? ? ? 3.10.0-862.2.3.el7 > > Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > > > > > > > > I know these dates can move, but any vague idea of a timeframe target for > > release (this quarter, next quarter, etc.)? > > > > Thanks! > > > > -- > > ____ > > || \\UTGERS, > > |---------------------------*O*--------------------------- > > ||_// the State ?| ? ? ? ? Ryan Novosielski - _novosirj at rutgers.edu_ > > > || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS > Campus > > || ?\\ ? 
?of NJ ?| Office of Advanced Research Computing - MSB > > C630, Newark > > ? ? ?`' > > > >> On May 14, 2018, at 9:30 AM, Felipe Knop <_knop at us.ibm.com_ > > wrote: > >> > >> All, > >> > >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > > in Scale to support this kernel level, upgrading to one of those upcoming > > PTFs will be required in order to run with that kernel. > >> > >> Regards, > >> > >> Felipe > >> > >> ---- > >> Felipe Knop _knop at us.ibm.com_ > >> GPFS Development and Security > >> IBM Systems > >> IBM Building 008 > >> 2455 South Rd, Poughkeepsie, NY 12601 > >> (845) 433-9314 T/L 293-9314 > >> > >> > >> > >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > > welcome. I see your concern but as long as IBM has not released spectrum > > scale for 7.5 that > >> > >> From: ?Andi Rhod Christiansen <_arc at b4restore.com_ > > > >> To: ?gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >> Date: ?05/14/2018 08:15 AM > >> Subject: ?Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> Sent by: _gpfsug-discuss-bounces at spectrumscale.org_ > > >> > >> > >> > >> > >> You are welcome. > >> > >> I see your concern but as long as IBM has not released spectrum > scale for > > 7.5 that is their only solution, in regards to them caring about > security I > > would say yes they do care, but from their point of view either they tell > > the customer to upgrade as soon as red hat releases new versions and > > forcing the customer to be down until they have a new release or they > tell > > them to stay on supported level to a new release is ready. > >> > >> they should release a version supporting the new kernel soon, IBM > told me > > when I asked that they are "currently testing and have a support date > soon" > >> > >> Best regards. > >> > >> > >> -----Oprindelig meddelelse----- > >> Fra: _gpfsug-discuss-bounces at spectrumscale.org_ > > > <_gpfsug-discuss-bounces at spectrumscale.org_ > > P? vegne af > _z.han at imperial.ac.uk_ > >> Sendt: 14. maj 2018 13:59 > >> Til: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> > >> Thanks. Does IBM care about security, one would ask? In this case I'd > > choose to use the new kernel for my virtualization over gpfs ... sigh > >> > >> > >> _https://access.redhat.com/errata/RHSA-2018:1318_ > > >> > >> Kernel: KVM: error in exception handling leads to wrong debug stack > value > > (CVE-2018-1087) > >> > >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) > >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > > escalation (CVE-2017-16939) > >> > >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > > netfilter/ebtables.c (CVE-2018-1068) > >> > >> ... > >> > >> > >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > >>> Date: Mon, 14 May 2018 11:10:18 +0000 > >>> From: Andi Rhod Christiansen <_arc at b4restore.com_ > > > >>> Reply-To: gpfsug main discussion list > >>> <_gpfsug-discuss at spectrumscale.org_ > > > >>> To: gpfsug main discussion list <_gpfsug-discuss at spectrumscale.org_ > > > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> ? ? 
3.10.0-862.2.3.el7 > >>> > >>> Hi, > >>> > >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? > >>> > >>> I just had the same issue > >>> > >>> Revert to previous working kernel at redhat 7.4 release which is > > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > > level. > >>> > >>> > >>> Best regards > >>> Andi R. Christiansen > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: _gpfsug-discuss-bounces at spectrumscale.org_ > > >>> <_gpfsug-discuss-bounces at spectrumscale.org_ > > P? vegne af > >>> _z.han at imperial.ac.uk_ > >>> Sendt: 14. maj 2018 12:33 > >>> Til: gpfsug main discussion list > <_gpfsug-discuss at spectrumscale.org_ > > > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Dear All, > >>> > >>> Any one has the same problem? > >>> > >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ?; \ if > > [ $? -ne 0 ]; then \ > >>> exit 1;\ > >>> fi > >>> make[2]: Entering directory > > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > >>> ? LD ? ? ?/usr/lpp/mmfs/src/gpl-linux/built-in.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/tracelin.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/relaytrc.o > >>> ? LD [M] ?/usr/lpp/mmfs/src/gpl-linux/tracedev.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > >>> ? LD [M] ?/usr/lpp/mmfs/src/gpl-linux/mmfs26.o > >>> ? CC [M] ?/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > >>> ? ? ? ? ? ? ? ? ?from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > >>> ? ? ? ? ? ? ? ? ?from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > > no member named ?i_wb_list? > >>> ? ? ?_TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > >>> ? ? ? ? ? ? ? ? ?^ ...... 
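The compile failure above is the 4.2.3.6 portability layer being built against a 3.10.0-862 kernel whose struct inode no longer has the i_wb_list member the GPL source expects. Until a PTF with RHEL 7.5 support ships, the practical check is that the running kernel, kernel-devel and kernel-headers all sit at the same supported 3.10.0-693.* level, and then to rebuild the portability layer against that. A rough sketch, nothing more:

    uname -r                                   # running kernel
    rpm -q kernel kernel-devel kernel-headers  # all three should report the same 3.10.0-693.* build
    /usr/lpp/mmfs/bin/mmbuildgpl               # rebuild the GPL layer against that kernel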
> >>> _______________________________________________ > >>> gpfsug-discuss mailing list > >>> gpfsug-discuss at _spectrumscale.org_ > >>> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at _spectrumscale.org_ > >> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at _spectrumscale.org_ > >> > > > _https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0_ > > > > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at _spectrumscale.org_ > > _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at _spectrumscale.org_ > > _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at _spectrumscale.org_ _ > __http://gpfsug.org/mailman/listinfo/gpfsug-discuss_ > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at _spectrumscale.org_ _ > __https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0_ > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 From ulmer at ulmer.org Wed May 16 03:19:47 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 21:19:47 -0500 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: <7aef4353-058f-4741-9760-319bcd037213@Spark> References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Lohit, Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. :) -- Stephen > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > Thanks Christof. > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. 
> The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > Regards, > > Lohit > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: >> > I could use CES, but CES does not support follow-symlinks outside respective SMB export. >> >> Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. >> >> > Follow-symlinks is a however a hard-requirement for to follow links outside GPFS filesystems. >> >> I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? >> >> Regards, >> >> Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ >> christof.schmitt at us.ibm.com || +1-520-799-2469 (T/L: 321-2469 ) >> >> >> ----- Original message ----- >> From: valleru at cbio.mskcc.org >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> To: gpfsug main discussion list >> Cc: >> Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks >> Date: Tue, May 15, 2018 3:04 PM >> >> Hello All, >> >> Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? >> I understand that i will not need a redundant SMB server configuration. >> >> I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement for to follow links outside GPFS filesystems. >> >> Thanks, >> Lohit >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed May 16 03:22:48 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Tue, 15 May 2018 21:22:48 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> Message-ID: <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> There isn?t a flaw in that argument, but where the security experts are concerned there is no argument. Apparently this time Red Hat just told all of their RHEL 7.4 customers to upgrade to RHEL 7.5, rather than back-porting the security patches. So this time the retirement to upgrade distributions is much worse than normal. -- Stephen > On May 15, 2018, at 5:46 PM, Marc A Kaplan wrote: > > Kevin, that seems to be a good point. 
> > IF you have dedicated hardware to acting only as a storage and/or file server, THEN neither meltdown nor spectre should not be a worry. > > BECAUSE meltdown and spectre are just about an adversarial process spying on another process or kernel memory. IF we're not letting any potential adversary run her code on our file server, what's the exposure? > > NOW, let the security experts tell us where the flaw is in this argument... > > > > From: "Buterbaugh, Kevin L" > To: gpfsug main discussion list > Date: 05/15/2018 06:12 PM > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > All, > > I have to kind of agree with Andrew ? it seems that there is a broad range of takes on kernel upgrades ? everything from ?install the latest kernel the day it comes out? to ?stick with this kernel, we know it works.? > > Related to that, let me throw out this question ? what about those who haven?t upgraded their kernel in a while at least because they?re concerned with the negative performance impacts of the meltdown / spectre patches??? So let?s just say a customer has upgraded the non-GPFS servers in their cluster, but they?ve left their NSD servers unpatched (I?m talking about the kernel only here; all other updates are applied) due to the aforementioned performance concerns ? as long as they restrict access (i.e. who can log in) and use appropriate host-based firewall rules, is their some risk that they should be aware of? > > Discuss. Thanks! > > Kevin > > On May 15, 2018, at 4:45 PM, Andrew Beattie > wrote: > > this thread is mildly amusing, given we regularly get customers asking why we are dropping support for versions of linux > that they "just can't move off" > > > Andrew Beattie > Software Defined Storage - IT Specialist > Phone: 614-2133-7927 > E-mail: abeattie at au1.ibm.com > > > ----- Original message ----- > From: Stijn De Weirdt > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Cc: > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel 3.10.0-862.2.3.el7 > Date: Wed, May 16, 2018 5:35 AM > > so this means running out-of-date kernels for at least another month? oh > boy... > > i hope this is not some new trend in gpfs support. othwerwise all RHEL > based sites will have to start adding EUS as default cost to run gpfs > with basic security compliance. > > stijn > > > On 05/15/2018 09:02 PM, Felipe Knop wrote: > > All, > > > > Validation of RHEL 7.5 on Scale is currently under way, and we are > > currently targeting mid June to release the PTFs on 4.2.3 and 5.0 which > > will include the corresponding fix. > > > > Regards, > > > > Felipe > > > > ---- > > Felipe Knop knop at us.ibm.com > > GPFS Development and Security > > IBM Systems > > IBM Building 008 > > 2455 South Rd, Poughkeepsie, NY 12601 > > (845) 433-9314 T/L 293-9314 > > > > > > > > > > > > From: Ryan Novosielski > > > To: gpfsug main discussion list > > > Date: 05/15/2018 12:56 PM > > Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working withkernel > > 3.10.0-862.2.3.el7 > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > > > I know these dates can move, but any vague idea of a timeframe target for > > release (this quarter, next quarter, etc.)? > > > > Thanks! > > > > -- > > ____ > > || \\UTGERS, > > |---------------------------*O*--------------------------- > > ||_// the State | Ryan Novosielski - novosirj at rutgers.edu > > || \\ University | Sr. 
Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus > > || \\ of NJ | Office of Advanced Research Computing - MSB > > C630, Newark > > `' > > > >> On May 14, 2018, at 9:30 AM, Felipe Knop > wrote: > >> > >> All, > >> > >> Support for RHEL 7.5 and kernel level 3.10.0-862 in Spectrum Scale is > > planned for upcoming PTFs on 4.2.3 and 5.0. Since code changes are needed > > in Scale to support this kernel level, upgrading to one of those upcoming > > PTFs will be required in order to run with that kernel. > >> > >> Regards, > >> > >> Felipe > >> > >> ---- > >> Felipe Knop knop at us.ibm.com > >> GPFS Development and Security > >> IBM Systems > >> IBM Building 008 > >> 2455 South Rd, Poughkeepsie, NY 12601 > >> (845) 433-9314 T/L 293-9314 > >> > >> > >> > >> Andi Rhod Christiansen ---05/14/2018 08:15:25 AM---You are > > welcome. I see your concern but as long as IBM has not released spectrum > > scale for 7.5 that > >> > >> From: Andi Rhod Christiansen > > >> To: gpfsug main discussion list > > >> Date: 05/14/2018 08:15 AM > >> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> Sent by: gpfsug-discuss-bounces at spectrumscale.org > >> > >> > >> > >> > >> You are welcome. > >> > >> I see your concern but as long as IBM has not released spectrum scale for > > 7.5 that is their only solution, in regards to them caring about security I > > would say yes they do care, but from their point of view either they tell > > the customer to upgrade as soon as red hat releases new versions and > > forcing the customer to be down until they have a new release or they tell > > them to stay on supported level to a new release is ready. > >> > >> they should release a version supporting the new kernel soon, IBM told me > > when I asked that they are "currently testing and have a support date soon" > >> > >> Best regards. > >> > >> > >> -----Oprindelig meddelelse----- > >> Fra: gpfsug-discuss-bounces at spectrumscale.org > > > P? vegne af z.han at imperial.ac.uk > >> Sendt: 14. maj 2018 13:59 > >> Til: gpfsug main discussion list > > >> Emne: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > > 3.10.0-862.2.3.el7 > >> > >> Thanks. Does IBM care about security, one would ask? In this case I'd > > choose to use the new kernel for my virtualization over gpfs ... sigh > >> > >> > >> https://access.redhat.com/errata/RHSA-2018:1318 > >> > >> Kernel: KVM: error in exception handling leads to wrong debug stack value > > (CVE-2018-1087) > >> > >> Kernel: error in exception handling leads to DoS (CVE-2018-8897) > >> Kernel: ipsec: xfrm: use-after-free leading to potential privilege > > escalation (CVE-2017-16939) > >> > >> kernel: Out-of-bounds write via userland offsets in ebt_entry struct in > > netfilter/ebtables.c (CVE-2018-1068) > >> > >> ... > >> > >> > >> On Mon, 14 May 2018, Andi Rhod Christiansen wrote: > >>> Date: Mon, 14 May 2018 11:10:18 +0000 > >>> From: Andi Rhod Christiansen > > >>> Reply-To: gpfsug main discussion list > >>> > > >>> To: gpfsug main discussion list > > >>> Subject: Re: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Hi, > >>> > >>> Yes, kernel 3.10.0-862.2.3.el7 is not supported yet as it is RHEL 7.5 > >>> and latest support is 7.4. You have to revert back to 3.10.0-693 ? > >>> > >>> I just had the same issue > >>> > >>> Revert to previous working kernel at redhat 7.4 release which is > > 3.10.9.693. Make sure kernel-headers and kernel-devel are also at this > > level. 
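A hedged sketch of that revert on a RHEL 7 node follows. The 693-series build number is only an example, and the downgrade line assumes the older packages are still available in the configured repositories:

    rpm -q kernel                               # confirm a 3.10.0-693.* kernel is still installed
    yum downgrade kernel-headers kernel-devel   # bring the build packages back to the 693 level
    grubby --set-default=/boot/vmlinuz-3.10.0-693.21.1.el7.x86_64   # example build, adjust to yours
    echo "exclude=kernel*" >> /etc/yum.conf     # stop yum pulling 862-series kernels back in
    reboot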
> >>> > >>> > >>> Best regards > >>> Andi R. Christiansen > >>> > >>> -----Oprindelig meddelelse----- > >>> Fra: gpfsug-discuss-bounces at spectrumscale.org > >>> > P? vegne af > >>> z.han at imperial.ac.uk > >>> Sendt: 14. maj 2018 12:33 > >>> Til: gpfsug main discussion list > > >>> Emne: [gpfsug-discuss] gpfs 4.2.3.6 stops working with kernel > >>> 3.10.0-862.2.3.el7 > >>> > >>> Dear All, > >>> > >>> Any one has the same problem? > >>> > >>> /usr/bin/make -C /usr/src/kernels/3.10.0-862.2.3.el7.x86_64 ARCH=x86_64 > > M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if > > [ $? -ne 0 ]; then \ > >>> exit 1;\ > >>> fi > >>> make[2]: Entering directory > > `/usr/src/kernels/3.10.0-862.2.3.el7.x86_64' > >>> LD /usr/lpp/mmfs/src/gpl-linux/built-in.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o > >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o > >>> LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o > >>> CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o > >>> In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, > >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:58, > >>> from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:55: > >>> /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: > >>> /usr/lpp/mmfs/src/gpl-linux/trcid.h:1208:57: error: ?struct inode? has > > no member named ?i_wb_list? > >>> _TRACE6D(_HOOKWORD(TRCID_PRINTINODE_8), (Int64)(&(iP->i_wb_list)), > > (Int64)(iP->i_wb_list.next), (Int64)(iP->i_wb_list.prev), (Int64)(&(iP-> > > i_lru)), (Int64)(iP->i_lru.next), (Int64)(iP->i_lru.prev)); > >>> ^ ...... 
> >>> _______________________________________________ > >>> gpfsug-discuss mailing list > >>> gpfsug-discuss at spectrumscale.org > >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> > > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C78d95c4d4db84a37453408d5b99eeb7d%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636619014583822500&sdata=MDYseJ9NFu1C1UVFKHpQIfcwuhM5qJrVYzpJkB70yCM%3D&reserved=0 > > > > > > [attachment "signature.asc" deleted by Felipe Knop/Poughkeepsie/IBM] > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7CKevin.Buterbaugh%40vanderbilt.edu%7C9de921b6a0484477f7bd08d5baad3f4e%7Cba5a7f39e3be4ab3b45067fa80faecad%7C0%7C0%7C636620175613553935&sdata=qyLoxKzFv5mUr9XEGMcsEZIhqXjyKu0YzlQ6yiDSslw%3D&reserved=0 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 03:21:22 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 22:21:22 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Thanks Stephen, Yes i do acknowledge, that it will need a SERVER license and thank you for reminding me. I just wanted to make sure, from the technical point of view that we won?t face any issues by exporting a GPFS mount as a SMB export. I remember, i had seen in documentation about few years ago that it is not recommended to export a GPFS mount via Third party SMB services (not CES). But i don?t exactly remember why. Regards, Lohit On May 15, 2018, 10:19 PM -0400, Stephen Ulmer , wrote: > Lohit, > > Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. :) > > -- > Stephen > > > > > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > > > Thanks Christof. 
> > > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. > > The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > > > Regards, > > > > Lohit > > > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > > > > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. > > > > > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > > > > > Regards, > > > > > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > > > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > > > > > > > ----- Original message ----- > > > > From: valleru at cbio.mskcc.org > > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > To: gpfsug main discussion list > > > > Cc: > > > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > > > Date: Tue, May 15, 2018 3:04 PM > > > > > > > > Hello All, > > > > > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > > > I understand that i will not need a redundant SMB server configuration. > > > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > > > Thanks, > > > > Lohit > > > > > > > > > > > > _______________________________________________ > > > > gpfsug-discuss mailing list > > > > gpfsug-discuss at spectrumscale.org > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From abeattie at au1.ibm.com Wed May 16 03:38:59 2018 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Wed, 16 May 2018 02:38:59 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: , <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Wed May 16 04:05:50 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 15 May 2018 23:05:50 -0400 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <7aef4353-058f-4741-9760-319bcd037213@Spark> Message-ID: Thank you for the detailed answer Andrew. I do understand that anything above the posix level will not be supported by IBM and might lead to scaling/other issues. We will start small, and discuss with IBM representative on any other possible efforts. Regards, Lohit On May 15, 2018, 10:39 PM -0400, Andrew Beattie , wrote: > Lohit, > > There is no technical reason why if you use the correct licensing that you can't publish a Posix fileystem using external Protocol tool rather than CES > the key thing to note is that if its not the IBM certified solution that IBM support stops at the Posix level and the protocol issues are your own to resolve. > > The reason we provide the CES environment is to provide a supported architecture to deliver protocol access,? does it have some limitations - certainly > but it is a supported environment.? Moving away from this moves the risk onto the customer to resolve and maintain. > > The other part of this, and potentially the reason why you might have been warned off using an external solution is that not all systems provide scalability and resiliency > so you may end up bumping into scaling issues by building your own environment --- and from the sound of things this is a large complex environment.? These issues are clearly defined in the CES stack and are well understood.? moving away from this will move you into the realm of the unknown -- again the risk becomes yours. > > it may well be worth putting a request in with your local IBM representative to have IBM Scale protocol development team involved in your design and see what we can support for your requirements. > > > Regards, > Andrew Beattie > Software Defined Storage? - IT Specialist > Phone: 614-2133-7927 > E-mail: abeattie at au1.ibm.com > > > > ----- Original message ----- > > From: valleru at cbio.mskcc.org > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > To: gpfsug main discussion list > > Cc: > > Subject: Re: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > Date: Wed, May 16, 2018 12:25 PM > > > > Thanks Stephen, > > > > Yes i do acknowledge, that it will need a SERVER license and thank you for reminding me. > > > > I just wanted to make sure, from the technical point of view that we won?t face any issues by exporting a GPFS mount as a SMB export. > > > > I remember, i had seen in documentation about few years ago that it is not recommended to export a GPFS mount via Third party SMB services (not CES). But i don?t exactly remember why. > > > > Regards, > > Lohit > > > > On May 15, 2018, 10:19 PM -0400, Stephen Ulmer , wrote: > > > Lohit, > > > > > > Just be aware that exporting the data from GPFS via SMB requires a SERVER license for the node in question. You?ve mentioned client a few times now. 
:) > > > > > > -- > > > Stephen > > > > > > > > > > > > > On May 15, 2018, at 6:48 PM, Lohit Valleru wrote: > > > > > > > > Thanks Christof. > > > > > > > > The usecase is just that : it is easier to have symlinks of files/dirs from various locations/filesystems rather than copying or duplicating that data. > > > > > > > > The design from many years was maintaining about 8 PB of NFS filesystem with thousands of symlinks to various locations and the same directories being exported on SMB. > > > > > > > > Now we are migrating most of the data to GPFS keeping the symlinks as they are. > > > > Thus the need to follow symlinks from the GPFS filesystem to the NFS Filesystem. > > > > The client wants to effectively use the symlinks design that works when used on Linux but is not happy to hear that he will have to redo years of work just because GPFS does not support the same. > > > > > > > > I understand that there might be a reason on why CES might not support this, but is it an issue if we run SMB server on the GPFS clients to expose a read only or read write GPFS mounts? > > > > > > > > Regards, > > > > > > > > Lohit > > > > > > > > On May 15, 2018, 6:32 PM -0400, Christof Schmitt , wrote: > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. > > > > > > > > > > Samba has the 'wide links' option, that we currently do not test and support as part of the mmsmb integration. You can always open a RFE and ask that we support this option in a future release. > > > > > > > > > > > Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. > > > > > > > > > > I might be reading this wrong, but do you actually want symlinks that point to a file or directory outside of the GPFS file system? Could you outline a usecase for that? > > > > > > > > > > Regards, > > > > > > > > > > Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ > > > > > christof.schmitt at us.ibm.com? ||? +1-520-799-2469??? (T/L: 321-2469) > > > > > > > > > > > > > > > > ----- Original message ----- > > > > > > From: valleru at cbio.mskcc.org > > > > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > To: gpfsug main discussion list > > > > > > Cc: > > > > > > Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks > > > > > > Date: Tue, May 15, 2018 3:04 PM > > > > > > > > > > > > Hello All, > > > > > > > > > > > > Has anyone tried serving SMB export of GPFS mounts from a SMB server on GPFS client? Is it supported and does it lead to any issues? > > > > > > I understand that i will not need a redundant SMB server configuration. > > > > > > > > > > > > I could use CES, but CES does not support follow-symlinks outside respective SMB export. Follow-symlinks is a however a hard-requirement ?for to follow links outside GPFS filesystems. 
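For reference, on a self-managed (non-CES) Samba server the behaviour being asked for maps onto a handful of smb.conf options. The share name and path below are invented, and the usual caveat applies: wide links let SMB clients reach whatever the symlinks point at, which is exactly why it is off by default.

    [global]
        # "unix extensions" is a global setting; Samba ignores wide links while it is on,
        # unless "allow insecure wide links = yes" is also set. Turning it off is the safer route.
        unix extensions = no

    [projects]
        # example share on a GPFS path whose symlinks point at the old NFS filesystem
        path = /gpfs/fs0/projects
        follow symlinks = yes
        wide links = yes

Check the result with testparm before reloading smbd.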
> > > > > > > > > > > > Thanks, > > > > > > Lohit > > > > > > > > > > > > > > > > > > _______________________________________________ > > > > > > gpfsug-discuss mailing list > > > > > > gpfsug-discuss at spectrumscale.org > > > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > > > > _______________________________________________ > > > > > gpfsug-discuss mailing list > > > > > gpfsug-discuss at spectrumscale.org > > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > > > > gpfsug-discuss mailing list > > > > gpfsug-discuss at spectrumscale.org > > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From stijn.deweirdt at ugent.be Wed May 16 05:55:24 2018 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Wed, 16 May 2018 06:55:24 +0200 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> Message-ID: <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> hi stephen, > There isn?t a flaw in that argument, but where the security experts > are concerned there is no argument. we have gpfs clients hosts where users can login, we can't update those. that is a certain worry. > > Apparently this time Red Hat just told all of their RHEL 7.4 > customers to upgrade to RHEL 7.5, rather than back-porting the > security patches. So this time the retirement to upgrade > distributions is much worse than normal. there's no 'this time', this is the default rhel support model. only with EUS you get patches for non-latest minor releases. stijn > > > > _______________________________________________ gpfsug-discuss > mailing list gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From mnaineni at in.ibm.com Wed May 16 06:18:30 2018 From: mnaineni at in.ibm.com (Malahal R Naineni) Date: Wed, 16 May 2018 10:48:30 +0530 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> Message-ID: The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). 
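A quick, non-authoritative way to check both of those possibilities on the affected node; package and unit names are deliberately discovered rather than assumed, since they differ between releases:

    rpm -qa | grep -i ganesha                    # find the exact ganesha package names installed
    rpm -qV <ganesha-package-from-above>         # "5"/"M" flags on the unit file mean it was edited locally
    systemctl list-unit-files | grep -i ganesha  # find the unit name actually installed
    systemctl cat <unit-from-above> | grep -i exec   # after 5.0.1 this should point at gpfs.ganesha.nfsd
    systemctl daemon-reload                      # rules out a stale systemd unit cache either way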
Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! From: Jonathan Buzzard To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 16 09:14:14 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 16 May 2018 08:14:14 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de> <803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> Message-ID: <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of "olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Wed May 16 09:51:25 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Wed, 16 May 2018 08:51:25 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526379829.17680.27.camel@strath.ac.uk>, <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: For us the only one that matters is the fileset quota. With or without ?perfileset-quota set, we simply see a quota value for one of the filesets that is mapped to a drive, and every other mapped drives inherits the same value. whether it?s true or not. Just about to do some SMB tracing for my PMR. Richard From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Christof Schmitt Sent: 15 May 2018 19:50 To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] SMB quotas query To maybe clarify a few points: There are three quotas: user, group and fileset. User and group quota can be applied on the fileset level or the file system level. Samba with the vfs_gpfs module, only queries the user and group quotas on the requested path. If the fileset quota should also be applied to the reported free space, that has to be done through the --filesetdf parameter. We had the fileset quota query from Samba in the past, but that was a very problematic codepath, and it was removed as --filesetdf is the more reliabel way to achieve the same result. So another part of the question is which quotas should be applied to the reported free space. Regards, Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ christof.schmitt at us.ibm.com || +1-520-799-2469 (T/L: 321-2469) ----- Original message ----- From: Jonathan Buzzard > Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list > Cc: Subject: Re: [gpfsug-discuss] SMB quotas query Date: Tue, May 15, 2018 3:24 AM On Tue, 2018-05-15 at 13:10 +0300, Yaron Daniel wrote: > Hi > > So - u want to get quota report per fileset quota - right ? > We use this param when we want to monitor the NFS exports with df , i > think this should also affect the SMB filesets. > > Can u try to enable it and see if it works ? > It is irrelevant to Samba, this is or should be handled in vfs_gpfs as Christof said earlier. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 16 10:02:06 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 16 May 2018 10:02:06 +0100 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> Message-ID: <1526461326.17680.48.camel@strath.ac.uk> On Wed, 2018-05-16 at 08:51 +0000, Sobey, Richard A wrote: > For us the only one that matters is the fileset quota. With or > without ?perfileset-quota set, we simply see a quota value for one of > the filesets that is mapped to a drive, and every other mapped drives > inherits the same value. whether it?s true or not. > ? > Just about to do some SMB tracing for my PMR. > ? I have a fully working solution that uses the dfree option in Samba if you want. I am with you here in that a lot of places will be carving a GPFS file system up with file sets with a quota that are then shared to a group of users and you want the disk size, and amount free to show up on the clients based on the quota for the fileset not the whole file system. I am really not sure what the issue with the code path for this as it is 35 lines of C including comments to get the fileset if one exists for a given path on a GPFS file system. You open a random file on the path, call gpfs_fcntl and then gpfs_getfilesetid. It's then a simple call to gpfs_quotactl. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From r.sobey at imperial.ac.uk Wed May 16 10:08:09 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Wed, 16 May 2018 09:08:09 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526461326.17680.48.camel@strath.ac.uk> References: <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk> <1526300383.17680.20.camel@strath.ac.uk> <1526461326.17680.48.camel@strath.ac.uk> Message-ID: Thanks Jonathan for the offer, but I'd prefer to have this working without implementing unsupported options in production. I'd be willing to give it a go in my test cluster though, which is exhibiting the same symptoms, so if you wouldn't mind getting in touch off list I can see how it works? I am almost certain that this used to work properly in the past though. My customers would surely have noticed a problem like this - they like to say when things are wrong ? Cheers Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 16 May 2018 10:02 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Wed, 2018-05-16 at 08:51 +0000, Sobey, Richard A wrote: > For us the only one that matters is the fileset quota. With or without > ?perfileset-quota set, we simply see a quota value for one of the > filesets that is mapped to a drive, and every other mapped drives > inherits the same value. whether it?s true or not. > ? > Just about to do some SMB tracing for my PMR. > ? I have a fully working solution that uses the dfree option in Samba if you want. 
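To make the shape of that dfree approach concrete, here is a rough shell sketch of the same idea; the working solution referred to above is the 35 lines of C calling gpfs_fcntl, gpfs_getfilesetid and gpfs_quotactl, not this script. Samba hands the dfree command a directory inside the share and expects "total blocks" and "free blocks" (1K blocks by default) on stdout. The device name, the use of mmlsattr on a directory and the mmlsquota column positions are all assumptions to verify against your own cluster, and the supported route for df-style reporting remains enabling --filesetdf on the file system with mmchfs.

    #!/bin/bash
    # gpfs_dfree.sh - report the fileset quota as the share's size to Samba
    # smb.conf (per share):   dfree command = /usr/local/bin/gpfs_dfree.sh
    dir="$1"                      # Samba passes a directory within the share
    dev="gpfs0"                   # assumption: the file system device backing the share
    # assumption: mmlsattr -L reports "fileset name:" for a directory; if it does not,
    # open a file inside the directory instead, as described above
    fileset=$(/usr/lpp/mmfs/bin/mmlsattr -L "$dir" | awk '/fileset name:/ {print $NF}')
    # default mmlsquota output is in 1K blocks; columns assumed to be
    # <fs> FILESET <usage> <quota> <limit> ...  (verify on your release)
    /usr/lpp/mmfs/bin/mmlsquota -j "$fileset" "$dev" | awk '
        $2 == "FILESET" { total = $5; free = $5 - $3; if (free < 0) free = 0;
                          print total, free; exit }'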
I am with you here in that a lot of places will be carving a GPFS file system up with file sets with a quota that are then shared to a group of users and you want the disk size, and amount free to show up on the clients based on the quota for the fileset not the whole file system. I am really not sure what the issue with the code path for this as it is 35 lines of C including comments to get the fileset if one exists for a given path on a GPFS file system. You open a random file on the path, call gpfs_fcntl and then gpfs_getfilesetid. It's then a simple call to gpfs_quotactl. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From smita.raut at in.ibm.com Wed May 16 11:23:05 2018 From: smita.raut at in.ibm.com (Smita J Raut) Date: Wed, 16 May 2018 15:53:05 +0530 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm >From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" To: gpfsug main discussion list Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of "olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. 
let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? 
Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 16 13:23:41 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 16 May 2018 13:23:41 +0100 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: Message-ID: <1526473421.17680.57.camel@strath.ac.uk> On Tue, 2018-05-15 at 22:32 +0000, Christof Schmitt wrote: > > I could use CES, but CES does not support follow-symlinks outside > respective SMB export. > ? > Samba has the 'wide links' option, that we currently do not test and > support as part of the mmsmb integration. You can always open a RFE > and ask that we support this option in a future release. > ? Note?that if unix extensions are on then you also need the "allow insecure wide links" option, which is a pretty good hint as to why one should steer several parsecs wide of using symlinks on SMB exports. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From daniel.kidger at uk.ibm.com Wed May 16 13:37:27 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Wed, 16 May 2018 12:37:27 +0000 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: <1526473421.17680.57.camel@strath.ac.uk> References: <1526473421.17680.57.camel@strath.ac.uk>, Message-ID: An HTML attachment was scrubbed... 
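For anyone following along, the Samba knobs being discussed look roughly like this in a hand-rolled smbd setup (illustrative only: mmsmb does not expose these as supported options today, and the share name and path below are made up):

    [global]
       unix extensions = yes              ; the default
       ; "wide links" is ignored while unix extensions are on,
       ; unless you also set the aptly named:
       allow insecure wide links = yes

    [data]
       path = /gpfs/fs0/data              ; hypothetical GPFS path
       follow symlinks = yes              ; smbd default anyway
       wide links = yes                   ; follow symlinks that point outside the share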
URL: From Renar.Grunenberg at huk-coburg.de Wed May 16 14:31:30 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Wed, 16 May 2018 13:31:30 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: <5ef78d14aa0c4a23b2979b13deeecab7@SMXRF108.msg.hukrf.de> Hallo Smita, i will search in wich rhel-release is the 0.15 release available. If we found one I want to install, and give feedback. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 +++ Bitte beachten Sie die neuen Telefonnummern +++ +++ Siehe auch: https://www.huk.de/presse/pressekontakt/ansprechpartner.html +++ E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? 
I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File 
"/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ulmer at ulmer.org Wed May 16 15:05:19 2018 From: ulmer at ulmer.org (Stephen Ulmer) Date: Wed, 16 May 2018 09:05:19 -0500 Subject: [gpfsug-discuss] gpfs 4.2.3.6 stops workingwithkernel3.10.0-862.2.3.el7 In-Reply-To: <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> References: <19ce555cc8284b27b0b6c7ba4d31d9eb@B4RWEX01.internal.b4restore.com> <83eac6605b844a949af977681b5f509e@B4RWEX01.internal.b4restore.com> <4E109526-69D7-416E-A467-ABBB6C581F4C@rutgers.edu> <3B3F266E-ADAE-43CE-8E81-938A9EFC0174@ulmer.org> <3cab44ce-42c0-c8e4-01f7-3876541d2511@ugent.be> Message-ID: <20485D89-2F0F-4905-A5C7-FCACAAAB1FCC@ulmer.org> > On May 15, 2018, at 11:55 PM, Stijn De Weirdt wrote: > > hi stephen, > >> There isn?t a flaw in that argument, but where the security experts >> are concerned there is no argument. > we have gpfs clients hosts where users can login, we can't update those. > that is a certain worry. The original statement from Marc was about dedicated hardware for storage and/or file serving. 
If that?s not the use case, then neither his logic nor my support of it apply. >> >> Apparently this time Red Hat just told all of their RHEL 7.4 >> customers to upgrade to RHEL 7.5, rather than back-porting the >> security patches. So this time the retirement to upgrade >> distributions is much worse than normal. > there's no 'this time', this is the default rhel support model. only > with EUS you get patches for non-latest minor releases. > > stijn > You are correct! I did a quick check and most of my customers are enterprise-y, and many of them seem to have EUS. I thought it was standard, but it is not. I could be mixing Red Hat up with another Linux vendor at this point? Liberty, -- Stephen From bbanister at jumptrading.com Wed May 16 16:30:14 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 16 May 2018 15:30:14 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> Message-ID: <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> Malahal is correct, we did modify our version of the systemd unit and the update is being overwritten. My bad. We seemed to have issues with the original version, but will try to use the new version and will open a ticket if we have issues. Definitely do not want to modify the IBM provided configs as this is an obvious example of how that can come back to bite you!! Not symlink is needed as Malahal states. Sorry for the confusion and false alarms. Thanks Malahal!! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Malahal R Naineni Sent: Wednesday, May 16, 2018 12:19 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! From: Jonathan Buzzard > To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From novosirj at rutgers.edu Wed May 16 17:01:18 2018 From: novosirj at rutgers.edu (Ryan Novosielski) Date: Wed, 16 May 2018 16:01:18 +0000 Subject: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? In-Reply-To: <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> References: <4815e679f9e5486aa75cb2e85ee3c296@jumptrading.com><6d9432b3bb6c41d087c308a8cba31246@jumptrading.com> <0492449e-03a7-13fc-48c1-7c7733c59694@strath.ac.uk> , <5b7aacf8e9c246b4ae06b2a0fa706ed6@jumptrading.com> Message-ID: <3D5B04DE-3BC4-478D-A32F-C4417358A003@rutgers.edu> Thing to do here ought to be using overrides in /etc/systemd, not modifying the vendor scripts. I can?t think of a case where one would want to do otherwise, but it may be out there. -- ____ || \\UTGERS, |---------------------------*O*--------------------------- ||_// the State | Ryan Novosielski - novosirj at rutgers.edu || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus || \\ of NJ | Office of Advanced Research Computing - MSB C630, Newark `' On May 16, 2018, at 11:30, Bryan Banister > wrote: Malahal is correct, we did modify our version of the systemd unit and the update is being overwritten. My bad. We seemed to have issues with the original version, but will try to use the new version and will open a ticket if we have issues. Definitely do not want to modify the IBM provided configs as this is an obvious example of how that can come back to bite you!! Not symlink is needed as Malahal states. Sorry for the confusion and false alarms. Thanks Malahal!! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Malahal R Naineni Sent: Wednesday, May 16, 2018 12:19 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Note: External Email ________________________________ The systemd service file also was updated to account for the daemon binary rename (the rename itself was done to avoid SELinux issues). It is possible that the systemd was using an old cache (unlikely as I didn't see daemon-reload message here) or the rpm update couldn't update the file as user changed the systemd unit service file (most likely case here). Please provide "rpm -qV ", the RPM shipped unit file should NOT have any reference to ganesha.nfsd (it should have gpfs.ganesha.nfsd). Regards, Malahal. PS: No symlink magic is necessary with usual cases! 
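On the overrides point: a systemd drop-in keeps local changes out of the packaged unit file, so an RPM update (like the 5.0.1 one that renamed the daemon) cannot silently overwrite them. A sketch, assuming the CES NFS unit is named nfs-ganesha.service on the protocol nodes; check the actual unit name with systemctl list-units '*ganesha*':

    # opens an editor on /etc/systemd/system/nfs-ganesha.service.d/override.conf
    systemctl edit nfs-ganesha.service

    # put only the directives you want to change into the drop-in, e.g.
    #   [Service]
    #   ExecStart=
    #   ExecStart=/usr/bin/gpfs.ganesha.nfsd <options copied from the packaged unit>

    # "systemctl edit" reloads for you; only needed if you create the file by hand
    systemctl daemon-reload

    # shows the packaged unit plus all drop-ins, merged
    systemctl cat nfs-ganesha.service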
From: Jonathan Buzzard > To: gpfsug-discuss at spectrumscale.org Date: 05/16/2018 12:01 AM Subject: Re: [gpfsug-discuss] What happened to /usr/bin/ganesha.nfsd in 5.0.1-0?? Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ On 15/05/18 19:08, Bryan Banister wrote: > BTW, I just tried the symlink option and it seems to work: > > # ln -s gpfs.ganesha.nfsd ganesha.nfsd > > # ls -ld ganesha.nfsd > Looks more like to me that the systemd service file needs updating so that it exec's a file that exists. One wonders how this got through QA mind you. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fgpfsug.org%2Fmailman%2Flistinfo%2Fgpfsug-discuss&data=02%7C01%7Cnovosirj%40rutgers.edu%7C333d1c944c464856be7008d5bb41f07f%7Cb92d2b234d35447093ff69aca6632ffe%7C1%7C1%7C636620814253162614&sdata=ihaClVwGs9Cp69UflH7eYp%2F0q7%2FR29AY%2FbM1IzbZrsI%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Wed May 16 18:01:52 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Wed, 16 May 2018 17:01:52 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: <1526461326.17680.48.camel@strath.ac.uk> References: <1526461326.17680.48.camel@strath.ac.uk>, <1526379829.17680.27.camel@strath.ac.uk> , <1526294691.17680.18.camel@strath.ac.uk><1526300383.17680.20.camel@strath.ac.uk> Message-ID: An HTML attachment was scrubbed... URL: From bevans at pixitmedia.com Thu May 17 14:41:57 2018 From: bevans at pixitmedia.com (Barry Evans) Date: Thu, 17 May 2018 14:41:57 +0100 Subject: [gpfsug-discuss] =?utf-8?Q?=E2=80=94subblocks-per-full-block_?=in 5.0.1 Message-ID: Slight wonkiness in mmcrfs script that spits this out ?subblocks-per-full-block as an invalid option. No worky: ? ? 777 ? ? ? ? subblocks-per-full-block ) ? ? 778 ? ? ? ? ? if [[ -z $optArg ]] ? ? 779 ? ? ? ? ? then ? ? 780 ? ? ? ? ? ? # The expected argument is not in the same string as its ? ? 781 ? ? ? ? ? ? # option name. ?Get it from the next token. ? ? 782 ? ? ? ? ? ? eval optArg="\${$OPTIND}" ? ? 783 ? ? ? ? ? ? [[ -z $optArg ]] && ?\ ? ? 784 ? ? ? ? ? ? ? syntaxError "missingValue" $noUsageMsg "--$optName_lc" ? ? 785 ? ? ? ? ? ? shift 1 ? ? 786 ? ? ? ? ? fi ? ? 787 ? ? ? ? ? 
[[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? 788 ? ? ? ? ? ? syntaxError "multiple" $noUsageMsg "--$optName_lc" ? ? 789 ? ? ? ? ? subblocksPerFullBlockOpt="--$optName_lc" ? ? 790 ? ? 791 ? ? ? ? ? nSubblocksArg=$(checkIntRange --subblocks-per-full-block $optArg 32 8192) ? ? 792 ? ? ? ? ? [[ $? -ne 0 ]] && syntaxError nomsg $noUsageMsg ? ? 793 ? ? ? ? ? tscrfsParms="$tscrfsParms --subblocks-per-full-block $nSubblocksArg" ? ? 794 ? ? ? ? ? ;; Worky: ? ? 777 ? ? ? ? subblocks-per-full-block ) ? ? 778 ? ? ? ? ? if [[ -z $optArg ]] ? ? 779 ? ? ? ? ? then ? ? 780 ? ? ? ? ? ? # The expected argument is not in the same string as its ? ? 781 ? ? ? ? ? ? # option name. ?Get it from the next token. ? ? 782 ? ? ? ? ? ? eval optArg="\${$OPTIND}" ? ? 783 ? ? ? ? ? ? [[ -z $optArg ]] && ?\ ? ? 784 ? ? ? ? ? ? ? syntaxError "missingValue" $noUsageMsg "--$optName_lc" ? ? 785 ? ? ? ? ? ? shift 1 ? ? 786 ? ? ? ? ? fi ? ? 787 ? ? ? ? ? #[[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? 788 ? ? ? ? ? [[ -n $nSubblocksArg ?]] && ?\ ? ? 789 ? ? ? ? ? ? syntaxError "multiple" $noUsageMsg "--$optName_lc" ? ? 790 ? ? ? ? ? #subblocksPerFullBlockOpt="--$optName_lc" ? ? 791 ? ? ? ? ? nSubblocksArg="--$optName_lc" ? ? 792 ? ? 793 ? ? ? ? ? nSubblocksArg=$(checkIntRange --subblocks-per-full-block $optArg 32 8192) ? ? 794 ? ? ? ? ? [[ $? -ne 0 ]] && syntaxError nomsg $noUsageMsg ? ? 795 ? ? ? ? ? tscrfsParms="$tscrfsParms --subblocks-per-full-block $nSubblocksArg" ? ? 796 ? ? ? ? ? ;; Looks like someone got halfway through the variable change ?subblocksPerFullBlockOpt"?is referenced elsewhere in the script: if [[ -z $forceOption ]] then ? [[ -n $fflag ]] && ?\ ? ? syntaxError "invalidOption" $usageMsg "$fflag" ? [[ -n $subblocksPerFullBlockOpt ]] && ?\ ? ? syntaxError "invalidOption" $usageMsg "$subblocksPerFullBlockOpt" fi ...so this is probably naughty on my behalf. Kind Regards, Barry Evans CTO/Co-Founder Pixit Media Ltd +44 7950 666 248 bevans at pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Thu May 17 16:31:47 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 17 May 2018 16:31:47 +0100 Subject: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks In-Reply-To: References: <1526473421.17680.57.camel@strath.ac.uk> , Message-ID: <1526571107.17680.81.camel@strath.ac.uk> On Wed, 2018-05-16 at 12:37 +0000, Daniel Kidger wrote: > Jonathan, > ? > Are you suggesting that a SMB?exported symlink to /etc/shadow is > somehow a bad thing ??:-) > The irony is that people are busy complaining about not being able to update their kernels for security reasons while someone else is complaining about not being able to do what can only be described in 2018 as very bad practice. 
The right answer IMHO is to forget about symlinks being followed server side and take the opportunity that migrating it all to GPFS gives you to re-architect your storage so they are no longer needed. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From Renar.Grunenberg at huk-coburg.de Thu May 17 17:13:30 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Thu, 17 May 2018 16:13:30 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de> <6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> Message-ID: <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. 
If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
________________________________
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet.
This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden.
________________________________
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bzhang at ca.ibm.com  Fri May 18 16:25:52 2018
From: bzhang at ca.ibm.com (Bohai Zhang)
Date: Fri, 18 May 2018 11:25:52 -0400
Subject: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery
Message-ID:

IBM Spectrum Scale Support Webinar
Spectrum Scale Disk Lease, Expel & Recovery

About this Webinar

IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to share expertise and knowledge of the Spectrum Scale product, as well as product updates and best practices based on various use cases. This webinar introduces various concepts and features related to disk lease, node expel, and node recovery. It explains the mechanism of disk lease, the common scenarios and causes for node expel, and the different phases of node recovery. It also explains the DMS (Deadman Switch) timer, which can trigger a kernel panic as a result of lease expiry and hung I/O. This webinar also covers best-practice tuning, recent improvements to mitigate node expels, and RAS improvements for expel debug data collection. Recent critical defects related to node expel will also be discussed in this webinar.

Please note that our webinars are free of charge and will be held online via WebEx.

Agenda:
- Disk lease concept and mechanism
- Node expel concept, causes and use cases
- Node recovery concept and explanation
- Parameter explanation and tuning
- Recent improvements and critical issues
- Q&A

NA/EU Session
Date: June 6, 2018
Time: 10 AM - 11 AM EDT (2 PM - 3 PM GMT)
Registration: https://ibm.biz/BdZLgY
Audience: Spectrum Scale Administrators

AP/JP Session
Date: June 6, 2018
Time: 10 AM - 11 AM Beijing Time (11 AM - 12 AM Tokyo Time)
Registration: https://ibm.biz/BdZLgi
Audience: Spectrum Scale Administrators

If you have any questions, please contact IBM Spectrum Scale support.

Regards,

IBM Spectrum Computing

Bohai Zhang, Senior Technical Leader (Critical Situation Resolver, Expert Badge), IBM Systems
Tel: 1-905-316-2727
Mobile: 1-416-897-7488
Email: bzhang at ca.ibm.com
3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada
Live Chat at IBMStorageSuptMobile Apps

Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA
We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73794593.gif Type: image/gif Size: 2665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73540552.gif Type: image/gif Size: 275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73219387.gif Type: image/gif Size: 305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73169142.gif Type: image/gif Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73563875.gif Type: image/gif Size: 3621 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 73474166.gif Type: image/gif Size: 1243 bytes Desc: not available URL: From skylar2 at uw.edu Fri May 18 16:32:05 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Fri, 18 May 2018 15:32:05 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery In-Reply-To: References: Message-ID: <20180518153205.beb5brsgadpnf7y3@utumno.gs.washington.edu> Hi Bohai, Will this be recorded? I'll be on vacation but am interested to learn about the topics under discussion. On Fri, May 18, 2018 at 11:25:52AM -0400, Bohai Zhang wrote: > > > > > > IBM Spectrum Scale Support Webinar > Spectrum Scale Disk Lease, Expel & Recovery > > > > > > > About this Webinar > IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to > share expertise and knowledge of the Spectrum Scale product, as well as > product updates and best practices based on various use cases. This webinar > introduces various concepts and features related to disk lease, node expel, > and node recovery. It explains the mechanism of disk lease, the common > scenarios and causes for node expel, and different phases of node recovery. > It also explains DMS (Deadman Switch) timer which could trigger kernel > panic as a result of lease expiry and hang I/O. This webinar also talks > about best practice tuning, recent improvements to mitigate node expels and > RAS improvements for expel debug data collection. Recent critical defects > about node expel will also be discussed in this webinar. > > > > > Please note that our webinars are free of charge and will be held online > via WebEx. > > Agenda: > > ? Disk lease concept and mechanism > > ? Node expel concept, causes and use cases > > ? Node recover concept and explanation > > > ? Parameter explanation and tuning > > > ? Recent improvement and critical issues > > > ? Q&A > > NA/EU Session > Date: June 6, 2018 > Time: 10 AM ??? 11AM EDT (2 PM ??? 3PM GMT) > Registration: https://ibm.biz/BdZLgY > Audience: Spectrum Scale Administrators > > AP/JP Session > Date: June 6, 2018 > Time: 10 AM ??? 11 AM Beijing Time (11 AM ??? 12 AM Tokyo Time) > Registration: https://ibm.biz/BdZLgi > Audience: Spectrum Scale Administrators > > > If you have any questions, please contact IBM Spectrum Scale support. 
> > Regards, > > > > > > > IBM > Spectrum > Computing > > Bohai Zhang Critical > Senior Technical Leader, IBM Systems Situation > Tel: 1-905-316-2727 Resolver > Mobile: 1-416-897-7488 Expert Badge > Email: bzhang at ca.ibm.com > 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada > Live Chat at IBMStorageSuptMobile Apps > > > > Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM > | dWA > We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to > recommend IBM. > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From Robert.Oesterlin at nuance.com Fri May 18 16:37:48 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 18 May 2018 15:37:48 +0000 Subject: [gpfsug-discuss] Presentations from the May 16-17 User Group meeting in Cambridge Message-ID: Thanks to all the presenters and attendees, it was a great get-together. I?ll be posting these soon to spectrumscale.org, but I need to sort out the size restrictions with Simon, so it may be a few more days. Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... URL: From smita.raut at in.ibm.com Fri May 18 17:10:11 2018 From: smita.raut at in.ibm.com (Smita J Raut) Date: Fri, 18 May 2018 21:40:11 +0530 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de><6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Message-ID: Hi Renar, Yes we plan to include newer pyOpenSSL in 5.0.1.1 Thanks, Smita From: "Grunenberg, Renar" To: 'gpfsug main discussion list' Date: 05/17/2018 09:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. Von: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm >From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? Thanks, Smita From: "Simon Thompson (IT Research Support)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: on behalf of " olaf.weiser at de.ibm.com" Reply-To: "gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" To: "'gpfsug-discuss at spectrumscale.org'" < gpfsug-discuss at spectrumscale.org> Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Renar.Grunenberg at huk-coburg.de Fri May 18 18:07:56 2018 From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar) Date: Fri, 18 May 2018 17:07:56 +0000 Subject: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies In-Reply-To: References: <7f57fd74e271437f800c602ce3a3f266@SMXRF105.msg.hukrf.de><803ee71fb4ef4460aab1727631806781@SMXRF108.msg.hukrf.de><6D9C4270-783D-4A81-BA78-5DB7C3DEFDBB@bham.ac.uk> <74ede00fe6af40ddb6fcc38bd2d2cf62@SMXRF105.msg.hukrf.de> Message-ID: Hallo Smita, thanks that sounds good. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Freitag, 18. 
Mai 2018 18:10 An: gpfsug main discussion list Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Hi Renar, Yes we plan to include newer pyOpenSSL in 5.0.1.1 Thanks, Smita From: "Grunenberg, Renar" > To: 'gpfsug main discussion list' > Date: 05/17/2018 09:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo Smita, I checks these now, today there are no real way to get these package from a rhel channel. All are on 0.13.1. I checked the pike repository and see that following packages are available: python2-pyOpenSSL-16.2.0-3.el7.noarch.rpm python2-cryptography-1.7.2-1.el7.x86_64.rpm python2-urllib3-1.21.1-1.el7.noarch.rpm My Request and question here. Why are these packages are not in the pike-release that IBM shipped. Is it possible to implement and test these package for the next ptf 5.0.1.1. Regards Renar. Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. ________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Smita J Raut Gesendet: Mittwoch, 16. Mai 2018 12:23 An: gpfsug main discussion list > Betreff: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies You are right Simon, that rpm comes from object. Below two are the new dependencies that were added with Pike support in 5.0.1 pyOpenSSL-0.14-1.ibm.el7.noarch.rpm python2-urllib3-1.21.1-1.ibm.el7.noarch.rpm From RHEL 7.0 to 7.5 the pyOpenSSL package included in the ISO is pyOpenSSL-0.13.1-3.el7.x86_64.rpm , but Pike needs >=0.14, hence pyOpenSSL-0.14 was packaged since it was not available. One possible cause of the problem could be that the yum certs may have Unicode characters. If so, then the SSL code may be rendering the cert as chars instead of bytes. And there seem to be issues in pyOpenSSL-0.14 related to unicode handling that are fixed in 0.15. Renar, could you try upgrading this package to 0.15? 
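A quick way to confirm which pyOpenSSL build python2 actually loads before attempting that upgrade (an illustrative sketch only; the 0.15 threshold comes from the unicode-handling fixes mentioned above, and the upgrade path itself is left open):

    rpm -qa | grep -i -e pyopenssl -e urllib3
    # shows whether the 0.14-1.ibm build or a distro build is installed

    python -c 'import OpenSSL; print(OpenSSL.__version__)'
    # anything below 0.15 is a candidate for the cafile/unicode TypeError
    # shown in the traceback further down this thread

How the newer package is then brought in (the python2-pyOpenSSL build Renar found in the Pike repository, or a fix shipped with 5.0.1.1) should be tested on a non-production protocol node first.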
Thanks, Smita From: "Simon Thompson (IT Research Support)" > To: gpfsug main discussion list > Date: 05/16/2018 01:44 PM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I wondered if it came from the object RPMs maybe? I haven?t actually checked, but I recall that it was mentioned 5.0.1 was bumping to Pike swift stack (I think!) and that typically requires newer RPMs if using RDO packages so maybe it came that route? Simon From: > on behalf of "olaf.weiser at de.ibm.com" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 15 May 2018 at 08:10 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Renar, can you share , what gpfs packages you tried to install I just did a fresh 5.0.1 install and it works fine for me... even though, I don't see this ibm python rpm [root at tlinc04 ~]# rpm -qa | grep -i openssl openssl-1.0.2k-12.el7.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pyOpenSSL-0.13.1-3.el7.x86_64 openssl-devel-1.0.2k-12.el7.x86_64 xmlsec1-openssl-1.2.20-7.el7_4.x86_64 So I assume, you installed GUI, or scale mgmt .. let us know - thx From: "Grunenberg, Renar" > To: "'gpfsug-discuss at spectrumscale.org'" > Date: 05/15/2018 08:00 AM Subject: Re: [gpfsug-discuss] 5.0.1.0 Update issue with python dependencies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hallo All, follow some experiences with the update to 5.0.1.0 (from 5.0.0.2) on rhel7.4. After the complete yum update to this version, we had a non-function yum cmd. The reason for this is following packet pyOpenSSL-0.14-1.ibm.el7.noarch This package break the yum cmds. 
The error are: Loaded plugins: langpacks, product-id, rhnplugin, search-disabled-repos Traceback (most recent call last): File "/bin/yum", line 29, in yummain.user_main(sys.argv[1:], exit_code=True) File "/usr/share/yum-cli/yummain.py", line 370, in user_main errcode = main(args) File "/usr/share/yum-cli/yummain.py", line 165, in main base.getOptionsConfig(args) File "/usr/share/yum-cli/cli.py", line 261, in getOptionsConfig self.conf File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1078, in conf = property(fget=lambda self: self._getConfig(), File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 420, in _getConfig self.plugins.run('init') File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run func(conduitcls(self, self.base, conf, **kwargs)) File "/usr/share/yum-plugins/rhnplugin.py", line 141, in init_hook svrChannels = rhnChannel.getChannelDetails(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 71, in getChannelDetails sourceChannels = getChannels(timeout=timeout) File "/usr/share/rhn/up2date_client/rhnChannel.py", line 98, in getChannels up2dateChannels = s.up2date.listChannels(up2dateAuth.getSystemId()) File "/usr/share/rhn/up2date_client/rhnserver.py", line 63, in __call__ return rpcServer.doCall(method, *args, **kwargs) File "/usr/share/rhn/up2date_client/rpcServer.py", line 204, in doCall ret = method(*args, **kwargs) File "/usr/lib64/python2.7/xmlrpclib.py", line 1233, in __call__ return self.__send(self.__name, args) File "/usr/share/rhn/up2date_client/rpcServer.py", line 38, in _request1 ret = self._request(methodname, params) File "/usr/lib/python2.7/site-packages/rhn/rpclib.py", line 384, in _request self._handler, request, verbose=self._verbose) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 171, in request headers, fd = req.send_http(host, handler) File "/usr/lib/python2.7/site-packages/rhn/transports.py", line 721, in send_http self._connection.connect() File "/usr/lib/python2.7/site-packages/rhn/connections.py", line 187, in connect self.sock.init_ssl() File "/usr/lib/python2.7/site-packages/rhn/SSL.py", line 90, in init_ssl self._ctx.load_verify_locations(f) File "/usr/lib/python2.7/site-packages/OpenSSL/SSL.py", line 303, in load_verify_locations raise TypeError("cafile must be None or a byte string") TypeError: cafile must be None or a byte string My questions now: why does IBM patch here rhel python-libaries. This goes to a update nirvana. The Dependencies does looks like this!! rpm -e pyOpenSSL-0.14-1.ibm.el7.noarch error: Failed dependencies: pyOpenSSL is needed by (installed) redhat-access-insights-0:1.0.13-2.el7_3.noarch pyOpenSSL is needed by (installed) rhnlib-2.5.65-4.el7.noarch pyOpenSSL >= 0.14 is needed by (installed) python2-urllib3-1.21.1-1.ibm.el7.noarch Its PMR time. Regards Renar Renar Grunenberg Abteilung Informatik ? Betrieb HUK-COBURG Bahnhofsplatz 96444 Coburg Telefon: 09561 96-44110 Telefax: 09561 96-44104 E-Mail: Renar.Grunenberg at huk-coburg.de Internet: www.huk.de ________________________________ HUK-COBURG Haftpflicht-Unterst?tzungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021 Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin. Vorstand: Klaus-J?rgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Her?y, Dr. J?rg Rheinl?nder (stv.), Sarah R?ssler, Daniel Thomas. 
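Two low-risk checks on a node in this state (a sketch only; neither command changes anything, and the --noplugins run is just an assumption-check that the failure really is confined to the RHN plugin shown in the traceback):

    rpm -q --whatrequires pyOpenSSL
    # lists the installed packages that pull in the pyOpenSSL capability,
    # matching the failed-dependency output above

    yum --noplugins repolist
    # if this works while plain 'yum' does not, the breakage sits in the
    # rhnplugin/up2date path rather than in yum itself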
________________________________ Diese Nachricht enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet. This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. ________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzhang at ca.ibm.com Fri May 18 19:19:24 2018 From: bzhang at ca.ibm.com (Bohai Zhang) Date: Fri, 18 May 2018 14:19:24 -0400 Subject: [gpfsug-discuss] Fw: IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Message-ID: Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. ----- Forwarded by Bohai Zhang/Ontario/IBM on 2018/05/18 02:18 PM ----- From: Bohai Zhang/Ontario/IBM To: Skylar Thompson Date: 2018/05/18 11:40 AM Subject: Re: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Hi Skylar, Thanks for your interesting. It will be recorded. If you register, we will send you a following up email after the webinar which will contain the link to the recording. Have a nice weekend. Regards, IBM Spectrum Computing Bohai Zhang Critical Senior Technical Leader, IBM Systems Situation Tel: 1-905-316-2727 Resolver Mobile: 1-416-897-7488 Expert Badge Email: bzhang at ca.ibm.com 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada Live Chat at IBMStorageSuptMobile Apps Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM | dWA We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to recommend IBM. From: Skylar Thompson To: bzhang at ca.ibm.com Cc: gpfsug-discuss at spectrumscale.org Date: 2018/05/18 11:34 AM Subject: Re: [gpfsug-discuss] IBM Spectrum Scale Support Webinar - Spectrum Scale Disk Lease, Expel & Recovery Hi Bohai, Will this be recorded? I'll be on vacation but am interested to learn about the topics under discussion. 
On Fri, May 18, 2018 at 11:25:52AM -0400, Bohai Zhang wrote: > > > > > > IBM Spectrum Scale Support Webinar > Spectrum Scale Disk Lease, Expel & Recovery > > > > > > > About this Webinar > IBM Spectrum Scale webinars are hosted by IBM Spectrum Scale support to > share expertise and knowledge of the Spectrum Scale product, as well as > product updates and best practices based on various use cases. This webinar > introduces various concepts and features related to disk lease, node expel, > and node recovery. It explains the mechanism of disk lease, the common > scenarios and causes for node expel, and different phases of node recovery. > It also explains DMS (Deadman Switch) timer which could trigger kernel > panic as a result of lease expiry and hang I/O. This webinar also talks > about best practice tuning, recent improvements to mitigate node expels and > RAS improvements for expel debug data collection. Recent critical defects > about node expel will also be discussed in this webinar. > > > > > Please note that our webinars are free of charge and will be held online > via WebEx. > > Agenda: > > ? Disk lease concept and mechanism > > ? Node expel concept, causes and use cases > > ? Node recover concept and explanation > > > ? Parameter explanation and tuning > > > ? Recent improvement and critical issues > > > ? Q&A > > NA/EU Session > Date: June 6, 2018 > Time: 10 AM ??? 11AM EDT (2 PM ??? 3PM GMT) > Registration: https://ibm.biz/BdZLgY > Audience: Spectrum Scale Administrators > > AP/JP Session > Date: June 6, 2018 > Time: 10 AM ??? 11 AM Beijing Time (11 AM ??? 12 AM Tokyo Time) > Registration: https://ibm.biz/BdZLgi > Audience: Spectrum Scale Administrators > > > If you have any questions, please contact IBM Spectrum Scale support. > > Regards, > > > > > > > IBM > Spectrum > Computing > > Bohai Zhang Critical > Senior Technical Leader, IBM Systems Situation > Tel: 1-905-316-2727 Resolver > Mobile: 1-416-897-7488 Expert Badge > Email: bzhang at ca.ibm.com > 3600 STEELES AVE EAST, MARKHAM, ON, L3R 9Z7, Canada > Live Chat at IBMStorageSuptMobile Apps > > > > Support Portal | Fix Central | Knowledge Center | Request for Enhancement | Product SMC IBM > | dWA > We meet our service commitment only when you are very satisfied and EXTREMELY LIKELY to > recommend IBM. > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F310241.gif Type: image/gif Size: 2665 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F811734.gif Type: image/gif Size: 275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F210195.gif Type: image/gif Size: 305 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: 7F911712.gif Type: image/gif Size: 331 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F859587.gif Type: image/gif Size: 3621 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7F303375.gif Type: image/gif Size: 1243 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From hopii at interia.pl Fri May 18 19:53:57 2018 From: hopii at interia.pl (hopii at interia.pl) Date: Fri, 18 May 2018 20:53:57 +0200 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos authentication issue Message-ID: Hi there, I'm just learning, trying to configure Spectrum Scale: SMB File Authentication using LDAP (IPA) with kerberos, and been struggling with it for a couple of days, without success. Users on spectrum cluster and client machine are authenticated properly, so ldap should be fine. NFS mount with keberos works with no issues as well. But I ran out of ideas how to configure SMB using LDAP with kerberos. I could messed up with netbios names, as am not sure which one to use, from cluster node, from protocol node, exactly which one. But error message seems to point to keytab file, which is present on both, server and client nodes. I ran into simillar post, dated few days ago, so I'm not the only one. https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html Below is my configuration and error message, and I'd appreciate any hints or help. Thank you, d. Error message from /var/adm/ras/log.smbd [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) GENSEC backend 'ntlmssp_resume_ccache' registered [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) smb_gss_krb5_import_cred failed with [Unspecified GSS failure. 
Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR Cluster nodes spectrum1.example.com RedHat 7.4 spectrum2.example.com RedHat 7.4 spectrum3.example.com RedHat 7.4 Protocols nodes: labs1.example.com lasb2.example.com labs3.example.com ssipa.example.com Centos 7.5 spectrum scale server: [root at spectrum1 security]# klist -k Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 host/labs1.example.com at example.com 1 host/labs1.example.com at example.com 1 host/labs2.example.com at example.com 1 host/labs2.example.com at example.com 1 host/labs3.example.com at example.com 1 host/labs3.example.com at example.com 1 nfs/labs1.example.com at example.com 1 nfs/labs1.example.com at example.com 1 nfs/labs2.example.com at example.com 1 nfs/labs2.example.com at example.com 1 nfs/labs3.example.com at example.com 1 nfs/labs3.example.com at example.com 1 cifs/labs1.example.com at example.com 1 cifs/labs1.example.com at example.com 1 cifs/labs2.example.com at example.com 1 cifs/labs2.example.com at example.com 1 cifs/labs3.example.com at example.com 1 cifs/labs3.example.com at example.com [root at spectrum1 security]# net conf list [global] disable netbios = yes disable spoolss = yes printcap cache time = 0 fileid:algorithm = fsname fileid:fstype allow = gpfs syncops:onmeta = no preferred master = no client NTLMv2 auth = yes kernel oplocks = no level2 oplocks = yes debug hires timestamp = yes max log size = 100000 host msdfs = yes notify:inotify = yes wide links = no log writeable files on exit = yes ctdb locktime warn threshold = 5000 auth methods = guest sam winbind smbd:backgroundqueue = False read only = no use sendfile = no strict locking = auto posix locking = no large readwrite = yes aio read size = 1 aio write size = 1 force unknown acl user = yes store dos attributes = yes map readonly = yes map archive = yes map system = yes map hidden = yes ea support = yes groupdb:backend = tdb winbind:online check timeout = 30 winbind max domain connections = 5 winbind max clients = 10000 dmapi support = no unix extensions = no socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 strict allocate = yes tdbsam:map builtin = no aio_pthread:aio open = yes dfree cache time = 100 change notify = yes max open files = 20000 time_audit:timeout = 5000 gencache:stabilize_count = 10000 server min protocol = SMB2_02 server max protocol = SMB3_02 vfs objects = shadow_copy2 syncops gpfs fileid time_audit smbd profiling level = on log level = 1 logging = syslog at 0 file smbd exit on ip drop = yes durable handles = no ctdb:smbxsrv_open_global.tdb = false mangled names = illegal include system krb5 conf = no smbd:async search ask sharemode = yes gpfs:sharemodes = yes gpfs:leases = yes gpfs:dfreequota = yes gpfs:prealloc = yes gpfs:hsm = yes gpfs:winattr = yes gpfs:merge_writeappend = no fruit:metadata = stream fruit:nfs_aces = no fruit:veto_appledouble = no readdir_attr:aapl_max_access = false shadow:snapdir = .snapshots shadow:fixinodes = yes shadow:snapdirseverywhere = yes shadow:sort = desc nfs4:mode = simple nfs4:chown = yes nfs4:acedup = merge add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport delete share command = 
/usr/lpp/mmfs/bin/mmcesmmcdelexport server string = IBM NAS client use spnego = yes kerberos method = system keytab ldap admin dn = cn=Directory Manager ldap ssl = start tls ldap suffix = dc=example,dc=com netbios name = spectrum1 passdb backend = ldapsam:"ldap://ssipa.example.com" realm = example.com security = ADS dedicated keytab file = /etc/krb5.keytab password server = ssipa.example.com idmap:cache = no idmap config * : read only = no idmap config * : backend = autorid idmap config * : range = 10000000-299999999 idmap config * : rangesize = 1000000 workgroup = labs1 ntlm auth = yes [share1] path = /ibm/gpfs1/labs1 guest ok = no browseable = yes comment = jas share smb encrypt = disabled [root at spectrum1 ~]# mmsmb export list export path browseable guest ok smb encrypt share1 /ibm/gpfs1/labs1 yes no disabled userauth command: mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com root at spectrum1 ~]# mmuserauth service list FILE access configuration : LDAP PARAMETERS VALUES ------------------------------------------------- ENABLE_SERVER_TLS true ENABLE_KERBEROS true USER_NAME cn=Directory Manager SERVERS ssipa.example.com NETBIOS_NAME spectrum1 BASE_DN dc=example,dc=com USER_DN none GROUP_DN none NETGROUP_DN none USER_OBJECTCLASS posixAccount GROUP_OBJECTCLASS posixGroup USER_NAME_ATTRIB cn USER_ID_ATTRIB uid KERBEROS_SERVER ssipa.example.com KERBEROS_REALM example.com OBJECT access not configured PARAMETERS VALUES ------------------------------------------------- net ads keytab list -> does not show any keys LDAP user information was updated with Samba attributes according to the documentation: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm [root at spectrum1 ~]# pdbedit -L -v Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 No builtin backend found, trying to load plugin Module 'ldapsam' loaded db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] StartTLS issued: using a TLS connection smbldap_open_connection: connection opened ldap_connect_system: successful connection to the LDAP server smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] smbldap_search_paged: search was successful init_sam_from_ldap: Entry found for user: jas --------------- Unix username: jas NT username: jas Account Flags: [U ] User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 Forcing Primary Group to 'Domain Users' for jas Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 Full Name: jas jas Home Directory: \\spectrum1\jas HomeDir Drive: Logon Script: Profile Path: \\spectrum1\jas\profile Domain: SPECTRUM1 Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: never Kickoff time: never Password last set: Thu, 17 May 2018 14:08:01 EDT Password can change: Thu, 17 May 2018 14:08:01 EDT Password must change: never Last bad password : 0 Bad password count : 0 Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF Client keytab file: [root at test ~]# klist -k 
Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 host/test.example.com at example.com 1 host/test.example.com at example.com From christof.schmitt at us.ibm.com Sat May 19 00:05:56 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Fri, 18 May 2018 23:05:56 +0000 Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos authentication issue In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From spectrumscale at kiranghag.com Sat May 19 05:00:04 2018 From: spectrumscale at kiranghag.com (KG) Date: Sat, 19 May 2018 09:30:04 +0530 Subject: [gpfsug-discuss] NFS on system Z Message-ID: Hi The SS FAQ says following for system Z - Cluster Export Service (CES) is not supported. (Monitoring capabilities, Object, CIFS, User space implementation of NFS) - Kernel NFS (v3 and v4) is supported. Clustered NFS is not supported. Does this mean we can only configure OS based non-redundant NFS exports from scale nodes without CNFS/CES? Kiran Ghag -------------- next part -------------- An HTML attachment was scrubbed... URL: From olaf.weiser at de.ibm.com Sat May 19 07:58:41 2018 From: olaf.weiser at de.ibm.com (Olaf Weiser) Date: Sat, 19 May 2018 08:58:41 +0200 Subject: [gpfsug-discuss] NFS on system Z In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Sun May 20 19:42:32 2018 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sun, 20 May 2018 18:42:32 +0000 Subject: [gpfsug-discuss] NFS on system Z In-Reply-To: Message-ID: Kieran, You can also add x86 nodes to run CES and Ganesha NFS. Either in the same cluster or perhaps neater in a separate multi-cluster Mount. Daniel Dr Daniel Kidger IBM Technical Sales Specialist Software Defined Solution Sales +44-(0)7818 522 266 daniel.kidger at uk.ibm.com > On 19 May 2018, at 07:58, Olaf Weiser wrote: > > HI, > yes.. CES comes along with lots of monitors about status, health checks and a special NFS (ganesha) code.. which is optimized / available only for a limited choice of OS/platforms > so CES is not available for e.g. AIX and in your case... not available for systemZ ... > > but - of course you can setup your own NFS server .. > > > > > From: KG > To: gpfsug main discussion list > Date: 05/19/2018 06:00 AM > Subject: [gpfsug-discuss] NFS on system Z > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hi > > The SS FAQ says following for system Z > Cluster Export Service (CES) is not supported. (Monitoring capabilities, Object, CIFS, User space implementation of NFS) > Kernel NFS (v3 and v4) is supported. Clustered NFS is not supported. > > Does this mean we can only configure OS based non-redundant NFS exports from scale nodes without CNFS/CES? > > Kiran Ghag > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Sun May 20 22:39:41 2018 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 20 May 2018 21:39:41 +0000 Subject: [gpfsug-discuss] Presentations for Spectrum Scale USA - May 16th-17th Message-ID: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> I?ve uploaded what I have received so far to the spectrumscale.org website, and they are located here: https://www.spectrumscaleug.org/presentations/2018/ Still working on the other authors for their content. Bob Oesterlin Sr Principal Storage Engineer, Nuance 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron.s.knister at nasa.gov Mon May 21 02:41:08 2018 From: aaron.s.knister at nasa.gov (Aaron Knister) Date: Sun, 20 May 2018 21:41:08 -0400 (EDT) Subject: [gpfsug-discuss] Presentations for Spectrum Scale USA - May 16th-17th In-Reply-To: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> References: <7AABFF43-54F0-418E-9F3C-C0E479696528@nuance.com> Message-ID: I must admit, I got a chuckle out of this typo: Compostable Infrastructure for Technical Computing sadly, I'm sure we all have stories about what we would consider "compostable" infrastructure. -Aaron -- Aaron Knister NASA Center for Climate Simulation (Code 606.2) Goddard Space Flight Center (301) 286-2776 On Sun, 20 May 2018, Oesterlin, Robert wrote: > > I?ve uploaded what I have received so far to the spectrumscale.org website, and they are located here: > > ? > > https://www.spectrumscaleug.org/presentations/2018/ > > ? > > Still working on the other authors for their content. > > ? > > ? > > Bob Oesterlin > > Sr Principal Storage Engineer, Nuance > > 507-269-0413 > > ? > > > From bbanister at jumptrading.com Mon May 21 21:51:54 2018 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 21 May 2018 20:51:54 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> Message-ID: <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? 
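A worked sequence using the file-system:fileset:user form described at the top of this message (a sketch only, reusing the fpi_test02 file system, root fileset and user bbanister that appear in the quoted messages below):

    # clear just this user's explicit USR quota so the fileset default applies again
    mmedquota -d -u fpi_test02:root:bbanister

    # confirm the entry switched from explicit ('e') back to the default entryType
    mmrepquota -v fpi_test02:root --block-size G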
-Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scale at us.ibm.com Tue May 22 09:01:21 2018 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Tue, 22 May 2018 16:01:21 +0800 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com><672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com><723293fee7214938ae20cdfdbaf99149@jumptrading.com><3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Message-ID: Hi Kuei-Yu, Should we update the document as the requested below ? Thanks. 
Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: Bryan Banister To: gpfsug main discussion list Date: 05/22/2018 04:52 AM Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email Unfortunately it doesn?t look like there is a way to target a specific quota. So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. 
Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. 
The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Tue May 22 09:51:51 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Tue, 22 May 2018 08:51:51 +0000 Subject: [gpfsug-discuss] SMB quotas query In-Reply-To: References: <1526294691.17680.18.camel@strath.ac.uk> Message-ID: Hi all, This has been resolved by (I presume what Jonathan was referring to in his posts) setting "dfree cache time" to 0. Many thanks for everyone's input on this! Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Sobey, Richard A Sent: 14 May 2018 12:54 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query Thanks Jonathan. What I failed to mention in my OP was that MacOS clients DO report the correct size of each mounted folder. Not sure how that changes anything except to reinforce the idea that it's Windows at fault. Richard -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Jonathan Buzzard Sent: 14 May 2018 11:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] SMB quotas query On Mon, 2018-05-14 at 10:09 +0000, Sobey, Richard A wrote: [SNIP] > ? > I am worried that IBM may tell us we?re doing it wrong (humm) and to > create individual exports for each fileset but this will quickly > become tiresome! > Worst case scenario you could fall back to using the dfree option in smb.conf and then use a program to get the file quota. I have the ~100 lines of C that you need it. Though it has been ~5 years since I last used it. In fact the whole reporting the fileset quota as the disk size is my idea, and the dfree config option is how I implemented it prior to IBM adding it to the vfs_gpfs module. A quick check shows a commit from Jeremy Allison on June 18th last year to use const stuct smb_filename, the comment on the commit is ?instead of const char *. We need to migrate all pathname based VFS calls to use a struct to finish modernising the VFS with extra timestamp and flags parameters. 
I suspect this change has broken the behaviour. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From p.childs at qmul.ac.uk Tue May 22 10:23:58 2018 From: p.childs at qmul.ac.uk (Peter Childs) Date: Tue, 22 May 2018 09:23:58 +0000 Subject: [gpfsug-discuss] How to clear explicitly set quotas In-Reply-To: References: <12a8f15b8b1c4a0fbcce36b719f9dd20@jumptrading.com> <672F3C48-D02F-4D29-B298-FFBA445281AC@gmail.com> <723293fee7214938ae20cdfdbaf99149@jumptrading.com> <3451778ccd3f489cb91e34f2550b54d9@jumptrading.com> <7e44947ae4044d7ba6d12a80b0fbe79b@jumptrading.com> Message-ID: Its a little difficult that the different quota commands for Spectrum Scale are all different in there syntax and can only be used by the "right" people. As far as I can see mmedquota is the only quota command that uses this "full colon" syntax and it would be better if its syntax matched that for mmsetquota and mmlsquota. or that the reset to default quota was added to mmsetquota and mmedquota was left for editing quotas visually in an editor. Regards Peter Childs On Tue, 2018-05-22 at 16:01 +0800, IBM Spectrum Scale wrote: Hi Kuei-Yu, Should we update the document as the requested below ? Thanks. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. [Inactive hide details for Bryan Banister ---05/22/2018 04:52:15 AM---Quick update. Thanks to a colleague of mine, John Valdes,]Bryan Banister ---05/22/2018 04:52:15 AM---Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system From: Bryan Banister To: gpfsug main discussion list Date: 05/22/2018 04:52 AM Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Sent by: gpfsug-discuss-bounces at spectrumscale.org Quick update. Thanks to a colleague of mine, John Valdes, there is a way to specify the file system + fileset + user with this form: mmedquota -d -u :: It?s just not documented in the man page or shown in the examples. Docs need to be updated! -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 11:00 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ Unfortunately it doesn?t look like there is a way to target a specific quota. 
So for cluster with many file systems and/or many filesets in each file system, clearing the quota entries affect all quotas in all file systems and all filesets. This means that you have to clear them all and then reapply the explicit quotas that you need to keep. # mmedquota -h Usage: mmedquota -d {-u User ... | -g Group ... | -j Device:Fileset ... } Maybe RFE time, or am I missing some other existing solution? -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Bryan Banister Sent: Tuesday, May 15, 2018 10:36 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ________________________________ That was it! Thanks! # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none e root root GRP 243 0 0 0 none | 248 0 0 0 none default on # mmedquota -d -u bbanister # # mmrepquota -v fpi_test02:root --block-size G *** Report for USR GRP quotas on fpi_test02 Block Limits | File Limits Name fileset type GB quota limit in_doubt grace | files quota limit in_doubt grace entryType root root USR 243 0 0 0 none | 248 0 0 0 none default on bbanister root USR 84 0 0 0 none | 21 0 0 0 none d_fset root root GRP 243 0 0 0 none | 248 0 0 0 none default on Note that " Try disabling and re-enabling default quotas with the -d option for that fileset " didn't fix this issue. Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Peter Serocka Sent: Monday, May 14, 2018 4:52 PM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] How to clear explicitly set quotas Note: External Email ------------------------------------------------- check out the -d option for the mmedquota command: "Reestablish default quota limits for a specific user, group, or fileset that had an explicit quota limit set by a previous invocation of the mmedquota command.? --Peter > On 2018 May 14 Mon, at 22:29, Bryan Banister > wrote: > > Hi all, > > I got myself into a situation where I was trying to enable a default user quota on a fileset and remove the existing quotas for all users in that fileset. But I used the `mmsetquota : --user --block 0:0` command and now it says that I have explicitly set this quota. > > Is there a way to remove a user quota entry so that it will adhere to the default user quota that I now have defined? > > Can?t find anything in man pages, thanks! > -Bryan > > > Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Peter Childs ITS Research Storage Queen Mary, University of London -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From valleru at cbio.mskcc.org Tue May 22 16:42:43 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 11:42:43 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Message-ID: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dwayne.Hart at med.mun.ca Tue May 22 16:45:07 2018 From: Dwayne.Hart at med.mun.ca (Dwayne.Hart at med.mun.ca) Date: Tue, 22 May 2018 15:45:07 +0000 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: Hi Lohit, What type of network are you using on the back end to transfer the GPFS traffic? 
Best, Dwayne From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Tuesday, May 22, 2018 1:13 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 22 17:40:26 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 12:40:26 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> 10G Ethernet. Thanks, Lohit On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: > Hi Lohit, > > What type of network are you using on the back end to transfer the GPFS traffic? 
> > Best, > Dwayne > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > Sent: Tuesday, May 22, 2018 1:13 PM > To: gpfsug main discussion list > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 > > Hello All, > > We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) > Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) > The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. > > I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. > However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. > Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. > > One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. > Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. > > Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. > However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. > > Can downgrading GPFS take us back to exactly the previous GPFS config state? > With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? > Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 > > Our previous state: > > 2 Storage clusters - 4.2.3.2 > 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) > > Our current state: > > 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) > 1 Compute cluster - 5.0.0.2 > > Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? > > Any advice on the best steps forward, would greatly help. > > Thanks, > > Lohit > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dwayne.Hart at med.mun.ca Tue May 22 17:54:43 2018 From: Dwayne.Hart at med.mun.ca (Dwayne.Hart at med.mun.ca) Date: Tue, 22 May 2018 16:54:43 +0000 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. 
Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> , <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> Message-ID: We are having issues with ESS/Mellanox implementation and were curious as to what you were working with from a network perspective. Best, Dwayne ? Dwayne Hart | Systems Administrator IV CHIA, Faculty of Medicine Memorial University of Newfoundland 300 Prince Philip Drive St. John?s, Newfoundland | A1B 3V6 Craig L Dobbin Building | 4M409 T 709 864 6631 On May 22, 2018, at 2:10 PM, "valleru at cbio.mskcc.org" > wrote: 10G Ethernet. Thanks, Lohit On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: Hi Lohit, What type of network are you using on the back end to transfer the GPFS traffic? Best, Dwayne From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org Sent: Tuesday, May 22, 2018 1:13 PM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? 
or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From valleru at cbio.mskcc.org Tue May 22 19:16:28 2018 From: valleru at cbio.mskcc.org (valleru at cbio.mskcc.org) Date: Tue, 22 May 2018 14:16:28 -0400 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> <04bbfb0f-37ba-4277-89b3-867708fb153a@Spark> Message-ID: <7cb337ab-7824-40a6-9bbf-b2cd62ec97cf@Spark> Thank Dwayne. I don?t think, we are facing anything else from network perspective as of now. We were seeing deadlocks initially when we upgraded to 5.0, but it might not be because of network. We also see deadlocks now, but they are mostly caused due to high waiters i believe. I have temporarily disabled deadlocks. Thanks, Lohit On May 22, 2018, 12:54 PM -0400, Dwayne.Hart at med.mun.ca, wrote: > We are having issues with ESS/Mellanox implementation and were curious as to what you were working with from a network perspective. > > Best, > Dwayne > ? > Dwayne Hart | Systems Administrator IV > > CHIA, Faculty of Medicine > Memorial University of Newfoundland > 300 Prince Philip Drive > St. John?s, Newfoundland | A1B 3V6 > Craig L Dobbin Building | 4M409 > T 709 864 6631 > > On May 22, 2018, at 2:10 PM, "valleru at cbio.mskcc.org" wrote: > > > 10G Ethernet. > > > > Thanks, > > Lohit > > > > On May 22, 2018, 11:55 AM -0400, Dwayne.Hart at med.mun.ca, wrote: > > > Hi Lohit, > > > > > > What type of network are you using on the back end to transfer the GPFS traffic? > > > > > > Best, > > > Dwayne > > > > > > From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of valleru at cbio.mskcc.org > > > Sent: Tuesday, May 22, 2018 1:13 PM > > > To: gpfsug main discussion list > > > Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 > > > > > > Hello All, > > > > > > We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. ( That is we have not run the mmchconfig release=LATEST command) > > > Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) > > > The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. > > > > > > I have raised an IBM critical service request about a month ago related to this -?PMR: 24090,L6Q,000. > > > However, According to the ticket ?- they seemed to feel that it might not be related to GPFS. > > > Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. > > > > > > One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. 
> > > Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. > > > > > > Also ?- According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run ?mmchconfig release=LATEST command, and that will resolve the issue. > > > However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. > > > > > > Can downgrading GPFS take us back to exactly the previous GPFS config state? > > > With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? > > > Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 > > > > > > Our previous state: > > > > > > 2 Storage clusters - 4.2.3.2 > > > 1 Compute cluster - 4.2.3.2 ?( remote mounts the above 2 storage clusters ) > > > > > > Our current state: > > > > > > 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) > > > 1 Compute cluster - 5.0.0.2 > > > > > > Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? > > > > > > Any advice on the best steps forward, would greatly help. > > > > > > Thanks, > > > > > > Lohit > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From hopii at interia.pl Tue May 22 20:43:52 2018 From: hopii at interia.pl (hopii at interia.pl) Date: Tue, 22 May 2018 21:43:52 +0200 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: References: Message-ID: Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. 
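For anyone who hits the same symptom, a quick client-side check that the SMB connection is really using Kerberos (host, share and user names below are the example ones from the quoted configuration that follows; adjust to your environment):

# on the client, as the LDAP user from the example config
kinit jas
klist                                    # confirm a valid TGT is present
# -k makes smbclient authenticate with Kerberos instead of NTLM
smbclient -k //labs1.example.com/share1 -c 'ls'

If this works, /var/adm/ras/log.smbd on the protocol node should show gse_krb5 starting cleanly instead of the "Keytab MEMORY:cifs_srv_keytab is nonexistent or empty" error quoted below.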
Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. Re: Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (Christof Schmitt) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Fri, 18 May 2018 20:53:57 +0200 > From: hopii at interia.pl > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP kerberos > authentication issue > Message-ID: > Content-Type: text/plain; charset="UTF-8" > > Hi there, > > I'm just learning, trying to configure Spectrum Scale: SMB File Authentication using LDAP (IPA) with kerberos, and been struggling with it for a couple of days, without success. > > Users on spectrum cluster and client machine are authenticated properly, so ldap should be fine. > NFS mount with keberos works with no issues as well. > > But I ran out of ideas how to configure SMB using LDAP with kerberos. > > I could messed up with netbios names, as am not sure which one to use, from cluster node, from protocol node, exactly which one. > But error message seems to point to keytab file, which is present on both, server and client nodes. > > I ran into simillar post, dated few days ago, so I'm not the only one. > https://www.mail-archive.com/gpfsug-discuss at spectrumscale.org/msg03919.html > > > Below is my configuration and error message, and I'd appreciate any hints or help. > > Thank you, > d. > > > > Error message from /var/adm/ras/log.smbd > > [2018/05/18 13:51:58.853681, 3] ../auth/gensec/gensec_start.c:918(gensec_register) > GENSEC backend 'ntlmssp_resume_ccache' registered > [2018/05/18 13:51:58.859984, 0] ../source3/librpc/crypto/gse.c:586(gse_init_server) > smb_gss_krb5_import_cred failed with [Unspecified GSS failure. 
Minor code may provide more information: Keytab MEMORY:cifs_srv_keytab is nonexistent or empty] > [2018/05/18 13:51:58.860151, 1] ../auth/gensec/gensec_start.c:698(gensec_start_mech) > Failed to start GENSEC server mech gse_krb5: NT_STATUS_INTERNAL_ERROR > > > > Cluster nodes > spectrum1.example.com RedHat 7.4 > spectrum2.example.com RedHat 7.4 > spectrum3.example.com RedHat 7.4 > > Protocols nodes: > labs1.example.com > lasb2.example.com > labs3.example.com > > > ssipa.example.com Centos 7.5 > > > > spectrum scale server: > > [root at spectrum1 security]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/labs1.example.com at example.com > 1 host/labs1.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs2.example.com at example.com > 1 host/labs3.example.com at example.com > 1 host/labs3.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs1.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs2.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 nfs/labs3.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs1.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs2.example.com at example.com > 1 cifs/labs3.example.com at example.com > 1 cifs/labs3.example.com at example.com > > > > > [root at spectrum1 security]# net conf list > [global] > disable netbios = yes > disable spoolss = yes > printcap cache time = 0 > fileid:algorithm = fsname > fileid:fstype allow = gpfs > syncops:onmeta = no > preferred master = no > client NTLMv2 auth = yes > kernel oplocks = no > level2 oplocks = yes > debug hires timestamp = yes > max log size = 100000 > host msdfs = yes > notify:inotify = yes > wide links = no > log writeable files on exit = yes > ctdb locktime warn threshold = 5000 > auth methods = guest sam winbind > smbd:backgroundqueue = False > read only = no > use sendfile = no > strict locking = auto > posix locking = no > large readwrite = yes > aio read size = 1 > aio write size = 1 > force unknown acl user = yes > store dos attributes = yes > map readonly = yes > map archive = yes > map system = yes > map hidden = yes > ea support = yes > groupdb:backend = tdb > winbind:online check timeout = 30 > winbind max domain connections = 5 > winbind max clients = 10000 > dmapi support = no > unix extensions = no > socket options = TCP_NODELAY SO_KEEPALIVE TCP_KEEPCNT=4 TCP_KEEPIDLE=240 TCP_KEEPINTVL=15 > strict allocate = yes > tdbsam:map builtin = no > aio_pthread:aio open = yes > dfree cache time = 100 > change notify = yes > max open files = 20000 > time_audit:timeout = 5000 > gencache:stabilize_count = 10000 > server min protocol = SMB2_02 > server max protocol = SMB3_02 > vfs objects = shadow_copy2 syncops gpfs fileid time_audit > smbd profiling level = on > log level = 1 > logging = syslog at 0 file > smbd exit on ip drop = yes > durable handles = no > ctdb:smbxsrv_open_global.tdb = false > mangled names = illegal > include system krb5 conf = no > smbd:async search ask sharemode = yes > gpfs:sharemodes = yes > gpfs:leases = yes > gpfs:dfreequota = yes > gpfs:prealloc = yes > gpfs:hsm = yes > gpfs:winattr = yes > gpfs:merge_writeappend = no > fruit:metadata = stream > fruit:nfs_aces = no > fruit:veto_appledouble = no > readdir_attr:aapl_max_access = false > shadow:snapdir = .snapshots > shadow:fixinodes = yes > shadow:snapdirseverywhere = 
yes > shadow:sort = desc > nfs4:mode = simple > nfs4:chown = yes > nfs4:acedup = merge > add share command = /usr/lpp/mmfs/bin/mmcesmmccrexport > change share command = /usr/lpp/mmfs/bin/mmcesmmcchexport > delete share command = /usr/lpp/mmfs/bin/mmcesmmcdelexport > server string = IBM NAS > client use spnego = yes > kerberos method = system keytab > ldap admin dn = cn=Directory Manager > ldap ssl = start tls > ldap suffix = dc=example,dc=com > netbios name = spectrum1 > passdb backend = ldapsam:"ldap://ssipa.example.com" > realm = example.com > security = ADS > dedicated keytab file = /etc/krb5.keytab > password server = ssipa.example.com > idmap:cache = no > idmap config * : read only = no > idmap config * : backend = autorid > idmap config * : range = 10000000-299999999 > idmap config * : rangesize = 1000000 > workgroup = labs1 > ntlm auth = yes > > [share1] > path = /ibm/gpfs1/labs1 > guest ok = no > browseable = yes > comment = jas share > smb encrypt = disabled > > > [root at spectrum1 ~]# mmsmb export list > export path browseable guest ok smb encrypt > share1 /ibm/gpfs1/labs1 yes no disabled > > > > userauth command: > mmuserauth service create --type ldap --data-access-method file --servers ssipa.example.com --base-dn dc=example,dc=com --user-name 'cn=Directory Manager' --netbios-name labs1 --enable-server-tls --enable-kerberos --kerberos-server ssipa.example.com --kerberos-realm example.com > > > root at spectrum1 ~]# mmuserauth service list > FILE access configuration : LDAP > PARAMETERS VALUES > ------------------------------------------------- > ENABLE_SERVER_TLS true > ENABLE_KERBEROS true > USER_NAME cn=Directory Manager > SERVERS ssipa.example.com > NETBIOS_NAME spectrum1 > BASE_DN dc=example,dc=com > USER_DN none > GROUP_DN none > NETGROUP_DN none > USER_OBJECTCLASS posixAccount > GROUP_OBJECTCLASS posixGroup > USER_NAME_ATTRIB cn > USER_ID_ATTRIB uid > KERBEROS_SERVER ssipa.example.com > KERBEROS_REALM example.com > > OBJECT access not configured > PARAMETERS VALUES > ------------------------------------------------- > > net ads keytab list -> does not show any keys > > > LDAP user information was updated with Samba attributes according to the documentation: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adm_updateldapsmb.htm > > > [root at spectrum1 ~]# pdbedit -L -v > Can't find include file /var/mmfs/ces/smb.conf.0.0.0.0 > Can't find include file /var/mmfs/ces/smb.conf.internal.0.0.0.0 > No builtin backend found, trying to load plugin > Module 'ldapsam' loaded > db_open_ctdb: opened database 'g_lock.tdb' with dbid 0x4d2a432b > db_open_ctdb: opened database 'secrets.tdb' with dbid 0x7132c184 > smbldap_search_domain_info: Searching for:[(&(objectClass=sambaDomain)(sambaDomainName=SPECTRUM1))] > StartTLS issued: using a TLS connection > smbldap_open_connection: connection opened > ldap_connect_system: successful connection to the LDAP server > smbldap_search_paged: base => [dc=example,dc=com], filter => [(&(uid=*)(objectclass=sambaSamAccount))],scope => [2], pagesize => [1000] > smbldap_search_paged: search was successful > init_sam_from_ldap: Entry found for user: jas > --------------- > Unix username: jas > NT username: jas > Account Flags: [U ] > User SID: S-1-5-21-2394233691-157776895-1049088601-1281201008 > Forcing Primary Group to 'Domain Users' for jas > Primary Group SID: S-1-5-21-2394233691-157776895-1049088601-513 > Full Name: jas jas > Home Directory: \\spectrum1\jas > HomeDir Drive: > Logon Script: > Profile 
Path: \\spectrum1\jas\profile > Domain: SPECTRUM1 > Account desc: > Workstations: > Munged dial: > Logon time: 0 > Logoff time: never > Kickoff time: never > Password last set: Thu, 17 May 2018 14:08:01 EDT > Password can change: Thu, 17 May 2018 14:08:01 EDT > Password must change: never > Last bad password : 0 > Bad password count : 0 > Logon hours : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF > > > > Client keytab file: > [root at test ~]# klist -k > Keytab name: FILE:/etc/krb5.keytab > KVNO Principal > ---- -------------------------------------------------------------------------- > 1 host/test.example.com at example.com > 1 host/test.example.com at example.com > > > > ------------------------------ > > Message: 2 > Date: Fri, 18 May 2018 23:05:56 +0000 > From: "Christof Schmitt" > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] Spectrum Scale CES , SAMBA, LDAP > kerberos authentication issue > Message-ID: > > > Content-Type: text/plain; charset="us-ascii" > > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > From alvise.dorigo at psi.ch Wed May 23 08:41:50 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 23 May 2018 07:41:50 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: References: , Message-ID: <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> Hi Felix, yes please, configure jumbo frames for both ports. And yes, I'll check the cable (I used an old one, without any label 25G). thanks, A ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of hopii at interia.pl [hopii at interia.pl] Sent: Tuesday, May 22, 2018 9:43 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. 
> URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From alvise.dorigo at psi.ch Wed May 23 08:42:59 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Wed, 23 May 2018 07:42:59 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> References: , , <83A6EEB0EC738F459A39439733AE804522F15CC5@MBX114.d.ethz.ch> Message-ID: <83A6EEB0EC738F459A39439733AE804522F15CDF@MBX114.d.ethz.ch> ops sorry! wrong window! please remove it... sorry. Alvise Dorigo ________________________________________ From: Dorigo Alvise (PSI) Sent: Wednesday, May 23, 2018 9:41 AM To: gpfsug main discussion list Subject: RE: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Hi Felix, yes please, configure jumbo frames for both ports. And yes, I'll check the cable (I used an old one, without any label 25G). thanks, A ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of hopii at interia.pl [hopii at interia.pl] Sent: Tuesday, May 22, 2018 9:43 PM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 76, Issue 71 Thank you for reply. Because I didn't already know what to do, was just playing with different options including 'security = ADS' . Anyway, the problem is solved, not sure if it was a bug but the client Centos 7.4 couldn't connect to spectrum scale node RH 7.5, resulting the errors provided before. After client upgrade from Centos 7.4 to latest Centos 7.5, without any changes to configuration, smb with kerberos works perfectly fine. Thank you again, d. Od: gpfsug-discuss-request at spectrumscale.org Do: gpfsug-discuss at spectrumscale.org; Wys?ane: 1:06 Sobota 2018-05-19 Temat: gpfsug-discuss Digest, Vol 76, Issue 71 > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Spectrum Scale CES , SAMBA, LDAP kerberos authentication > issue (hopii at interia.pl) > 2. 
> URL: > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 76, Issue 71 > ********************************************** > _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From johnbent at gmail.com Wed May 23 10:39:08 2018 From: johnbent at gmail.com (John Bent) Date: Wed, 23 May 2018 03:39:08 -0600 Subject: [gpfsug-discuss] IO500 Call for Submissions Message-ID: IO500 Call for Submissions Deadline: 23 June 2018 AoE The IO500 is now accepting and encouraging submissions for the upcoming IO500 list revealed at ISC 2018 in Frankfurt, Germany. The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit and we look forward to seeing many of you at ISC 2018! Please note that submissions of all size are welcome; the site has customizable sorting so it is possible to submit on a small system and still get a very good per-client score for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below. Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been known within High Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking. The multi-fold goals of the benchmark suite are as follows: * Maximizing simplicity in running the benchmark suite * Encouraging complexity in tuning for performance * Allowing submitters to highlight their ?hero run? performance numbers * Forcing submitters to simultaneously report performance for challenging IO patterns. Specifically, the benchmark suite includes a hero-run of both IOR and mdtest configured however possible to maximize performance and establish an upper-bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower-bound. Finally, it includes a namespace search as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well-measured. Submitters are encouraged to share their tuning insights for publication. The goals of the community are also multi-fold: * Gather historical data for the sake of analysis and to aid predictions of storage futures * Collect tuning information to share valuable performance optimizations across the community * Encourage vendors and designers to optimize for workloads beyond ?hero runs? * Establish bounded expectations for users, procurers, and administrators Once again, we encourage you to submit (see http://io500.org/submission), to join our community, and to attend our BoF ?The IO-500 and the Virtual Institute of I/O? at ISC 2018 where we will announce the second ever IO500 list. The current list includes results from BeeGPFS, DataWarp, IME, Lustre, and Spectrum Scale. 
We hope that the next list has even more! We look forward to answering any questions or concerns you might have. Thank you! IO500 Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From alvise.dorigo at psi.ch Thu May 24 09:45:00 2018 From: alvise.dorigo at psi.ch (Dorigo Alvise (PSI)) Date: Thu, 24 May 2018 08:45:00 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system Message-ID: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Dear members, at PSI I'm trying to integrate the CES service with our AD authentication system. My understanding, after talking to expert people here, is that I should use the RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The problem is that our ID schema is slightly different than that one described in RFC2307. In the RFC the relevant user identification fields are named "uidNumber" and "gidNumber". But in our AD database schema we have: # egrep 'uid_number|gid_number' /etc/sssd/sssd.conf ldap_user_uid_number = msSFU30UidNumber ldap_user_gid_number = msSFU30GidNumber ldap_group_gid_number = msSFU30GidNumber My question is: is it possible to configure CES to look for the custom field labels (those ones listed above) instead the default ones officially described in rfc2307 ? many thanks. Regards, Alvise Dorigo -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ivano.Talamo at psi.ch Thu May 24 14:51:56 2018 From: Ivano.Talamo at psi.ch (Ivano Talamo) Date: Thu, 24 May 2018 15:51:56 +0200 Subject: [gpfsug-discuss] Inter-clusters issue with change of the subnet IP Message-ID: <432c8c12-4d36-d8a7-3c79-61b94aa409bf@psi.ch> Hi all, We currently have an issue with our GPFS clusters. Shortly when we removed/added a node to a cluster we changed IP address for the IPoIB subnet and this broke GPFS. The primary IP didn't change. In details our setup is quite standard: one GPFS cluster with CPU nodes only accessing (via remote cluster mount) different storage clusters. Clusters are on an Infiniband fabric plus IPoIB for communication via the subnet parameter. Yesterday it happened that some nodes were added to the CPU cluster with the correct primary IP addresses but incorrect IPoIB ones. Incorrect in the sense that the IPoIB addresses were already in use by some other nodes in the same CPU cluster. This made all the clusters (not only the CPU one) suffer for a lot of errors, gpfs restarting, file systems being unmounted. Removing the wrong nodes brought the clusters to a stable state. But the real strange thing came when one of these node was reinstalled, configured with the correct IPoIB address and added again to the cluster. At this point (when the node tried to mount the remote filesystems) the issue happened again. In the log files we have lines like: 2018-05-24_10:32:45.520+0200: [I] Accepted and connected to 192.168.x.y Where the IP number 192.168.x.y is the old/incorrect one. And looking at mmdiag --network there are a bunch of lines like the following: 192.168.x.z broken 233 -1 0 0 L With the wrong/old IPs. And this appears on all cluster (CPU and storage ones). Is it possible that the other nodes in the clusters use this outdated information when the reinstalled node is brought back into the cluster? Is there any kind of timeout, so that after sometimes this information is purged? Or is there any procedure that we could use to now introduce the nodes? 
Otherwise we see no other option but to restart GPFS on all the nodes of all clusters one by one to make sure that the incorrect information goes away. Thanks, Ivano From skylar2 at uw.edu Thu May 24 15:16:32 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Thu, 24 May 2018 14:16:32 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Message-ID: <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> I haven't needed to change the LDAP attributes that CES uses, but I do see --user-id-attrib in the mmuserauth documentation. Unfortunately, I don't see an equivalent one for gidNumber. On Thu, May 24, 2018 at 08:45:00AM +0000, Dorigo Alvise (PSI) wrote: > Dear members, > at PSI I'm trying to integrate the CES service with our AD authentication system. > > My understanding, after talking to expert people here, is that I should use the RFC2307 model for ID mapping (described here: https://goo.gl/XvqHDH). The problem is that our ID schema is slightly different than that one described in RFC2307. In the RFC the relevant user identification fields are named "uidNumber" and "gidNumber". But in our AD database schema we have: > > # egrep 'uid_number|gid_number' /etc/sssd/sssd.conf > ldap_user_uid_number = msSFU30UidNumber > ldap_user_gid_number = msSFU30GidNumber > ldap_group_gid_number = msSFU30GidNumber > > My question is: is it possible to configure CES to look for the custom field labels (those ones listed above) instead the default ones officially described in rfc2307 ? > > many thanks. > Regards, > > Alvise Dorigo > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From jonathan.buzzard at strath.ac.uk Thu May 24 15:46:32 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Thu, 24 May 2018 15:46:32 +0100 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> Message-ID: <1527173192.28106.18.camel@strath.ac.uk> On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > I haven't needed to change the LDAP attributes that CES uses, but I > do see --user-id-attrib in the mmuserauth documentation. > Unfortunately, I don't see an equivalent one for gidNumber. > Is it not doing the "Samba thing" where your GID is the GID of your primary Active Directory group? This is usually "Domain Users" but not always. Basically Samba ignores the separate GID field in RFC2307bis, so one imagines the options for changing the LDAP attributes are none existent. I know back in the day this had me stumped for a while because unless you assign a GID number to the users primary group then Winbind does not return anything, aka a "getent passwd" on the user fails. JAB. -- Jonathan A. Buzzard?????????????????????????Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From skylar2 at uw.edu Thu May 24 15:51:09 2018 From: skylar2 at uw.edu (Skylar Thompson) Date: Thu, 24 May 2018 14:51:09 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <1527173192.28106.18.camel@strath.ac.uk> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> <1527173192.28106.18.camel@strath.ac.uk> Message-ID: <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote: > On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > > I haven't needed to change the LDAP attributes that CES uses, but I > > do see --user-id-attrib in the mmuserauth documentation. > > Unfortunately, I don't see an equivalent one for gidNumber. > > > > Is it not doing the "Samba thing" where your GID is the GID of your > primary Active Directory group? This is usually "Domain Users" but not > always. > > Basically Samba ignores the separate GID field in RFC2307bis, so one > imagines the options for changing the LDAP attributes are none > existent. > > I know back in the day this had me stumped for a while because unless > you assign a GID number to the users primary group then Winbind does > not return anything, aka a "getent passwd" on the user fails. At least for us, it seems to be using the gidNumber attribute of our users. On the back-end, of course, it is Samba, but I don't know that there are mm* commands available for all of the tunables one can set in smb.conf. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine From S.J.Thompson at bham.ac.uk Thu May 24 17:46:14 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Thu, 24 May 2018 16:46:14 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> <20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> <1527173192.28106.18.camel@strath.ac.uk>, <20180524145053.osnyosp4qmz4npay@utumno.gs.washington.edu> Message-ID: You can change them using the normal SMB commands, from the appropriate bin directory, whether this is supported is another matter. We have one parameter set this way but I forgot which. Simkn ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Skylar Thompson [skylar2 at uw.edu] Sent: 24 May 2018 15:51 To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Question concerning integration of CES with AD authentication system On Thu, May 24, 2018 at 03:46:32PM +0100, Jonathan Buzzard wrote: > On Thu, 2018-05-24 at 14:16 +0000, Skylar Thompson wrote: > > I haven't needed to change the LDAP attributes that CES uses, but I > > do see --user-id-attrib in the mmuserauth documentation. > > Unfortunately, I don't see an equivalent one for gidNumber. > > > > Is it not doing the "Samba thing" where your GID is the GID of your > primary Active Directory group? This is usually "Domain Users" but not > always. > > Basically Samba ignores the separate GID field in RFC2307bis, so one > imagines the options for changing the LDAP attributes are none > existent. 
> > I know back in the day this had me stumped for a while because unless > you assign a GID number to the users primary group then Winbind does > not return anything, aka a "getent passwd" on the user fails. At least for us, it seems to be using the gidNumber attribute of our users. On the back-end, of course, it is Samba, but I don't know that there are mm* commands available for all of the tunables one can set in smb.conf. -- -- Skylar Thompson (skylar2 at u.washington.edu) -- Genome Sciences Department, System Administrator -- Foege Building S046, (206)-685-7354 -- University of Washington School of Medicine _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From christof.schmitt at us.ibm.com Thu May 24 18:07:02 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 24 May 2018 17:07:02 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <1527173192.28106.18.camel@strath.ac.uk> References: <1527173192.28106.18.camel@strath.ac.uk>, <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch><20180524141632.xuah3dxu4bxx372z@utumno.gs.washington.edu> Message-ID: An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Thu May 24 18:14:28 2018 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 24 May 2018 17:14:28 +0000 Subject: [gpfsug-discuss] Question concerning integration of CES with AD authentication system In-Reply-To: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> References: <83A6EEB0EC738F459A39439733AE804522F1B13B@MBX214.d.ethz.ch> Message-ID: An HTML attachment was scrubbed... URL: From scale at us.ibm.com Fri May 25 08:01:43 2018 From: scale at us.ibm.com (IBM Spectrum Scale) Date: Fri, 25 May 2018 15:01:43 +0800 Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 In-Reply-To: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> References: <7eb36288-4a26-4322-8161-6a2c3fbdec41@Spark> Message-ID: If you didn't run mmchconfig release=LATEST and didn't change the fs version, then you can downgrade either or both of them. Thanks. Regards, The Spectrum Scale (GPFS) team ------------------------------------------------------------------------------------------------------------------ If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWroks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team. From: valleru at cbio.mskcc.org To: gpfsug main discussion list Date: 05/22/2018 11:54 PM Subject: [gpfsug-discuss] Critical Hang issues with GPFS 5.0. Downgrading from GPFS 5.0.0-2 to GPFS 4.2.3.2 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello All, We have recently upgraded from GPFS 4.2.3.2 to GPFS 5.0.0-2 about a month ago. We have not yet converted the 4.2.2.2 filesystem version to 5. 
( That is we have not run the mmchconfig release=LATEST command) Right after the upgrade, we are seeing many ?ps hangs" across the cluster. All the ?ps hangs? happen when jobs run related to a Java process or many Java threads (example: GATK ) The hangs are pretty random, and have no particular pattern except that we know that it is related to just Java or some jobs reading from directories with about 600000 files. I have raised an IBM critical service request about a month ago related to this - PMR: 24090,L6Q,000. However, According to the ticket - they seemed to feel that it might not be related to GPFS. Although, we are sure that these hangs started to appear only after we upgraded GPFS to GPFS 5.0.0.2 from 4.2.3.2. One of the other reasons we are not able to prove that it is GPFS is because, we are unable to capture any logs/traces from GPFS once the hang happens. Even GPFS trace commands hang, once ?ps hangs? and thus it is getting difficult to get any dumps from GPFS. Also - According to the IBM ticket, they seemed to have a seen a ?ps hang" issue and we have to run mmchconfig release=LATEST command, and that will resolve the issue. However we are not comfortable making the permanent change to Filesystem version 5. and since we don?t see any near solution to these hangs - we are thinking of downgrading to GPFS 4.2.3.2 or the previous state that we know the cluster was stable. Can downgrading GPFS take us back to exactly the previous GPFS config state? With respect to downgrading from 5 to 4.2.3.2 -> is it just that i reinstall all rpms to a previous version? or is there anything else that i need to make sure with respect to GPFS configuration? Because i think that GPFS 5.0 might have updated internal default GPFS configuration parameters , and i am not sure if downgrading GPFS will change them back to what they were in GPFS 4.2.3.2 Our previous state: 2 Storage clusters - 4.2.3.2 1 Compute cluster - 4.2.3.2 ( remote mounts the above 2 storage clusters ) Our current state: 2 Storage clusters - 5.0.0.2 ( filesystem version - 4.2.2.2) 1 Compute cluster - 5.0.0.2 Do i need to downgrade all the clusters to go to the previous state ? or is it ok if we just downgrade the compute cluster to previous version? Any advice on the best steps forward, would greatly help. Thanks, Lohit_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From r.sobey at imperial.ac.uk Fri May 25 13:24:31 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 25 May 2018 12:24:31 +0000 Subject: [gpfsug-discuss] IPv6 not supported still? Message-ID: Is the FAQ woefully outdated with respect to this when it says IPv6 is not supported for virtually any scenario (GUI, NFS, CES, TCT amongst others). Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knop at us.ibm.com Fri May 25 14:24:11 2018 From: knop at us.ibm.com (Felipe Knop) Date: Fri, 25 May 2018 09:24:11 -0400 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Message-ID: All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 -------------- next part -------------- An HTML attachment was scrubbed... URL: From r.sobey at imperial.ac.uk Fri May 25 15:29:16 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Fri, 25 May 2018 14:29:16 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Fri May 25 21:01:56 2018 From: knop at us.ibm.com (Felipe Knop) Date: Fri, 25 May 2018 16:01:56 -0400 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: Richard, As far as I could determine: Protocol servers for Scale can be at RHEL 7.4 today Protocol servers for Scale will be able to be at RHEL 7.5 once the mid-June PTFs are released On ESS, RHEL 7.3 is still the highest level, with support for higher RHEL 7.x levels still being implemented/validated Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Sobey, Richard A" To: gpfsug main discussion list Date: 05/25/2018 10:29 AM Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . 
The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Fri May 25 21:06:10 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Fri, 25 May 2018 20:06:10 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: , Message-ID: Hi Richard, Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed it, so when support comes along, do it with care! Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sobey, Richard A [r.sobey at imperial.ac.uk] Sent: 25 May 2018 15:29 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From jonathan.buzzard at strath.ac.uk Fri May 25 21:37:05 2018 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Fri, 25 May 2018 21:37:05 +0100 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: Message-ID: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote: > Hi Richard, > > Ours run on 7.4 without issue. We had one upgrade to 7.5 packages > (didn't reboot into new kernel) and it broke, reverting it back to a > 7.4 release fixed it, so when support comes along, do it with care! > I will at this point chime in that DSS is on 7.4 at the moment, so I am not surprised ESS is just fine too. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From S.J.Thompson at bham.ac.uk Fri May 25 21:42:49 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Fri, 25 May 2018 20:42:49 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> References: , <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> Message-ID: I was talking about protocols. But yes, DSS is also supported and runs fine on 7.4. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Jonathan Buzzard [jonathan.buzzard at strath.ac.uk] Sent: 25 May 2018 21:37 To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 On 25/05/18 21:06, Simon Thompson (IT Research Support) wrote: > Hi Richard, > > Ours run on 7.4 without issue. We had one upgrade to 7.5 packages > (didn't reboot into new kernel) and it broke, reverting it back to a > 7.4 release fixed it, so when support comes along, do it with care! > I will at this point chime in that DSS is on 7.4 at the moment, so I am not surprised ESS is just fine too. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From jonathan at buzzard.me.uk Fri May 25 22:08:54 2018 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 25 May 2018 22:08:54 +0100 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: <06cc2934-e1fb-85ba-e22f-69be0194103f@strath.ac.uk> Message-ID: <4d3aaaad-898d-d27d-04bc-729f01cef868@buzzard.me.uk> On 25/05/18 21:42, Simon Thompson (IT Research Support) wrote: > I was talking about protocols. > > But yes, DSS is also supported and runs fine on 7.4. Sure but I believe protocols will run fine on 7.4. On the downside DSS is still 4.2.x, grrrrrrrr as we have just implemented it double grrrr. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From r.sobey at imperial.ac.uk Sat May 26 08:32:05 2018 From: r.sobey at imperial.ac.uk (Sobey, Richard A) Date: Sat, 26 May 2018 07:32:05 +0000 Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 In-Reply-To: References: , , Message-ID: Thanks All! The faq still seems to imply that 7.3 is the latest supported release. Section A2.5 specifically. Other areas of the FAQ which I've now seen do indeed say 7.4. Have a great weekend. Get Outlook for Android ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Simon Thompson (IT Research Support) Sent: Friday, May 25, 2018 9:06:10 PM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Richard, Ours run on 7.4 without issue. We had one upgrade to 7.5 packages (didn't reboot into new kernel) and it broke, reverting it back to a 7.4 release fixed it, so when support comes along, do it with care! 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Sobey, Richard A [r.sobey at imperial.ac.uk] Sent: 25 May 2018 15:29 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 Hi Felipe What about protocol servers, can they go above 7.3 yet with any version of Scale? From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Felipe Knop Sent: 25 May 2018 14:24 To: gpfsug-discuss at spectrumscale.org Subject: [gpfsug-discuss] RHEL 7.5 and kernel 3.10.0-862 All, Folks that have been updated to the 3.10.0-862 kernel (the kernel which ships with RHEL 7.5) as a result of applying kernel security patches may open a PMR to request an efix for Scale versions 4.2 or 5.0 . The efixes will then be provided once the internal tests on RHEL 7.5 have been completed, likely a few days before the 4.2.3.9 and 5.0.1.1 PTFs GA (currently targeted around mid June). Regards, Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Mon May 28 08:59:03 2018 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Mon, 28 May 2018 09:59:03 +0200 Subject: [gpfsug-discuss] User Group Meeting at ISC2018 Frankfurt Message-ID: Greetings: IBM is happy to announce the agenda for the joint "IBM Spectrum Scale and IBM Spectrum LSF User Group Meeting" at ISC in Frankfurt, Germany. We will finish on time to attend the opening reception. As with other user group meetings, the agenda includes user stories, updates on IBM Spectrum Scale & IBM Spectrum LSF, and access to IBM experts and your peers. Please join us! To attend please register here so that we can have an accurate count of attendees: https://www-01.ibm.com/events/wwe/grp/grp308.nsf/Registration.xsp?openform&seminar=AA4A99ES We are still looking for two customers to talk about their experience with Spectrum Scale and/or Spectrum LSF. Please send me a personal mail, if you are interested to talk. Monday June 25th, 2018 - 14:00-17:30 - Conference Room Applaus 14:00-14:15 Welcome Gabor Samu (IBM) / Ulf Troppens (IBM) 14:15-14:45 What is new in Spectrum Scale? Mathias Dietz (IBM) 14:45-15:00 News from Lenovo Storage Michael Hennicke (Lenovo) 15:00-15:15 What is new in ESS? Christopher Maestas (IBM) 15:15-15:35 Customer talk 1 TBD 15:35-15:55 Customer talk 2 TBD 15:55-16:25 What is new in Spectrum Computing? Bill McMillan (IBM) 16:25-16:55 Field Update Olaf Weiser (IBM) 16:55-17:25 Spectrum Scale enhancements for CORAL Sven Oehme (IBM) 17:25-17:30 Wrap-up Gabor Samu (IBM) / Ulf Troppens (IBM) Looking forward to see some of you there. Best, Ulf -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janfrode at tanso.net Mon May 28 09:23:00 2018 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Mon, 28 May 2018 10:23:00 +0200 Subject: [gpfsug-discuss] mmapplypolicy --choice-algorithm fast Message-ID: Just found the Spectrum Scale policy "best practices" presentation from the latest UG: http://files.gpfsug.org/presentations/2018/USA/SpectrumScalePolicyBP.pdf which mentions: "mmapplypolicy ? --choice-algorithm fast && ... WEIGHT(0) ? (avoids final sort of all selected files by weight)" and looking at the man-page I see that "fast" "Works together with the parallelized ?g /shared?tmp ?N node?list selection method." I do a daily listing of all files, and avoiding unneccessary sorting would be great. So, what is really needed to avoid sorting for a file-list policy? Just "--choice-algorithm fast"? Also WEIGHT(0) in policy required? Also a ?g /shared?tmp ? -jf -------------- next part -------------- An HTML attachment was scrubbed... URL: From janusz.malka at desy.de Tue May 29 14:30:35 2018 From: janusz.malka at desy.de (Janusz Malka) Date: Tue, 29 May 2018 15:30:35 +0200 (CEST) Subject: [gpfsug-discuss] AFM relation on the fs level Message-ID: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> Dear all, Is it possible to build the AFM relation on the file system level ? I mean root file set of one file system as AFM cache and mount point of second as AFM home. Best regards, Janusz -- ------------------------------------------------------------------------- Janusz Tomasz Malka IT-Scientific Computing Deutsches Elektronen-Synchrotron Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 22607 Hamburg Germany phone: +49 40 8998 3818 e-mail: janusz.malka at desy.de ------------------------------------------------------------------------- From vpuvvada at in.ibm.com Wed May 30 04:23:28 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 30 May 2018 08:53:28 +0530 Subject: [gpfsug-discuss] AFM relation on the fs level In-Reply-To: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> References: <120160874.9373781.1527600635623.JavaMail.zimbra@desy.de> Message-ID: AFM cannot be enabled at root fileset level today. ~Venkat (vpuvvada at in.ibm.com) From: Janusz Malka To: gpfsug main discussion list Date: 05/29/2018 07:06 PM Subject: [gpfsug-discuss] AFM relation on the fs level Sent by: gpfsug-discuss-bounces at spectrumscale.org Dear all, Is it possible to build the AFM relation on the file system level ? I mean root file set of one file system as AFM cache and mount point of second as AFM home. Best regards, Janusz -- ------------------------------------------------------------------------- Janusz Tomasz Malka IT-Scientific Computing Deutsches Elektronen-Synchrotron Ein Forschungszentrum der Helmholtz-Gemeinschaft Notkestr. 85 22607 Hamburg Germany phone: +49 40 8998 3818 e-mail: janusz.malka at desy.de ------------------------------------------------------------------------- _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 12:52:33 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 11:52:33 +0000 Subject: [gpfsug-discuss] AFM negative file caching Message-ID: Hi All, We have a file-set which is an AFM fileset and contains installed software. 
We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. /gpfs/apps/somesoftware/v1.2/lib Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 12:57:27 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 11:57:27 +0000 Subject: [gpfsug-discuss] AFM negative file caching Message-ID: <2686836B-9BD3-4B9C-A5D9-7C3EF6E6D69B@bham.ac.uk> p.s. I wasn?t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval would be the right thing if it?s a file/directory that doesn?t exist? Simon From: on behalf of "Simon Thompson (IT Research Support)" Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Wednesday, 30 May 2018 at 12:52 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] AFM negative file caching Hi All, We have a file-set which is an AFM fileset and contains installed software. We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. /gpfs/apps/somesoftware/v1.2/lib Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From peserocka at gmail.com Wed May 30 13:26:46 2018 From: peserocka at gmail.com (Peter Serocka) Date: Wed, 30 May 2018 14:26:46 +0200 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? (Not to get started on using LD_LIBRARY_PATH in the first place?) ? Peter > On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: > > Hi All, > > We have a file-set which is an AFM fileset and contains installed software. > > We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. > > /gpfs/apps/somesoftware/v1.2/lib > > Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. 
We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. > > Thanks > > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From david_johnson at brown.edu Wed May 30 13:43:33 2018 From: david_johnson at brown.edu (david_johnson at brown.edu) Date: Wed, 30 May 2018 08:43:33 -0400 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From vpuvvada at in.ibm.com Wed May 30 15:29:55 2018 From: vpuvvada at in.ibm.com (Venkateswara R Puvvada) Date: Wed, 30 May 2018 19:59:55 +0530 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> References: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Message-ID: >I wasn?t sure if afmDirLookupRefreshInterval and afmFileLookupRefreshInterval would be the right thing if it?s a file/directory that doesn?t exist? These refresh intervals applies to all the lookups and not just for negative lookups. For working around in AFM itself, you could try setting these refresh intervals to higher value if cache does not need to validate with home often. 
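As a rough sketch only (the file system name "gpfs0" and fileset name "apps" below are placeholders, and the exact procedure should be checked against the mmchfileset documentation for your release), raising both lookup intervals to a 10 minute revalidation on just that fileset would look something like:

   mmchfileset gpfs0 apps -p afmFileLookupRefreshInterval=600
   mmchfileset gpfs0 apps -p afmDirLookupRefreshInterval=600

The values are in seconds, and depending on the release the fileset may need to be unlinked (mmunlinkfileset) before the AFM attributes can be changed and relinked (mmlinkfileset) afterwards. The same parameters can also be set cluster wide with mmchconfig, but the per-fileset route is the better fit when only the software fileset needs relaxed revalidation.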
~Venkat (vpuvvada at in.ibm.com) From: david_johnson at brown.edu To: gpfsug main discussion list Date: 05/30/2018 06:14 PM Subject: Re: [gpfsug-discuss] AFM negative file caching Sent by: gpfsug-discuss-bounces at spectrumscale.org Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 30 15:30:40 2018 From: S.J.Thompson at bham.ac.uk (Simon Thompson (IT Research Support)) Date: Wed, 30 May 2018 14:30:40 +0000 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: Message-ID: So we use easybuild to build software and dependency stacks (and modules to do all this), yeah I did wonder about putting it first, but my worry is that other "stuff" installed locally that dumps in there might then break the dependency stack. I was thinking maybe we can create something local with select symlinks and add that to the path ... but I was hoping we could do some sort of negative caching. Simon ?On 30/05/2018, 13:26, "gpfsug-discuss-bounces at spectrumscale.org on behalf of peserocka at gmail.com" wrote: As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? (Not to get started on using LD_LIBRARY_PATH in the first place?) ? Peter > On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) wrote: > > Hi All, > > We have a file-set which is an AFM fileset and contains installed software. > > We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. 
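P.S. To make the question more concrete, the rough plan so far (host names below are placeholders, and we still need to validate the exact syntax against the mmperfmon documentation for 5.x) is to point the sensors at four federated collectors and co-locate the GUI with one of them:

   mmperfmon config generate --collectors zimon1.example.com,zimon2.example.com,zimon3.example.com,zimon4.example.com
   mmchnode --perfmon -N all

with the collector-to-collector federation then set up by listing all four collectors in the peers section of /opt/IBM/zimon/ZIMonCollector.cfg on each collector node and restarting pmcollector. Corrections welcome if that is not the recommended way to do it on 5.x.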
> > /gpfs/apps/somesoftware/v1.2/lib > > Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. > > Thanks > > Simon > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Sandra.McLaughlin at astrazeneca.com Wed May 30 16:03:32 2018 From: Sandra.McLaughlin at astrazeneca.com (McLaughlin, Sandra M) Date: Wed, 30 May 2018 15:03:32 +0000 Subject: [gpfsug-discuss] AFM negative file caching In-Reply-To: References: <896799F5-B818-42C7-BD34-363CB9D5EEFB@brown.edu> Message-ID: If it?s any help, Simon, I had a very similar problem, and I set afmDirLookupRefreshIntervaland afmFileLookupRefreshInterval to one day on an AFM cache fileset which only had software on it. It did make a difference to the users. And if you are really desperate to push an application upgrade to the cache fileset, there are other ways to do it. Sandra From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Venkateswara R Puvvada Sent: 30 May 2018 15:30 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM negative file caching >I wasn?t sure if afmDirLookupRefreshIntervaland afmFileLookupRefreshIntervalwould be the right thing if it?s a file/directory that doesn?t exist? These refresh intervals applies to all the lookups and not just for negative lookups. For working around in AFM itself, you could try setting these refresh intervals to higher value if cache does not need to validate with home often. ~Venkat (vpuvvada at in.ibm.com) From: david_johnson at brown.edu To: gpfsug main discussion list > Date: 05/30/2018 06:14 PM Subject: Re: [gpfsug-discuss] AFM negative file caching Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Another possible workaround would be to add wrappers for these apps and only add the AFM based gpfs directory to the LD_LIBARY_PATH when about to launch the app. -- ddj Dave Johnson > On May 30, 2018, at 8:26 AM, Peter Serocka > wrote: > > As a quick means, why not adding /usr/lib64 at the beginning of LD_LIBRARY_PATH? > > (Not to get started on using LD_LIBRARY_PATH in the first place?) > > > ? Peter > >> On 2018 May 30 Wed, at 13:52, Simon Thompson (IT Research Support) > wrote: >> >> Hi All, >> >> We have a file-set which is an AFM fileset and contains installed software. >> >> We?ve been experiencing some performance issues with workloads when this is running and think this is down to LD_LIBRARY_PATH being set to the software installed in the AFM cache, e.g. >> >> /gpfs/apps/somesoftware/v1.2/lib >> >> Subsequently when you run (e.g.) ?who? on the system, LD_LIBRARY_PATH is being searched for e.g. libnss_ldap, which is in /usr/lib64. 
We?re assuming that AFM is checking with home each time the directory is processed (and other sub directories like lib/tls) and that each time AFM is checking for the file?s existence at home. Is there a way to change the negative cache at all on AFM for this one file-set? (e.g as you might with NFS). The file-set only has applications so changes are pretty rare and so a 10 min or so check would be fine with me. >> >> Thanks >> >> Simon >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ AstraZeneca UK Limited is a company incorporated in England and Wales with registered number:03674842 and its registered office at 1 Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge, CB2 0AA. This e-mail and its attachments are intended for the above named recipient only and may contain confidential and privileged information. If they have come to you in error, you must not copy or show them to anyone; instead, please reply to this e-mail, highlighting the error to the sender and then immediately delete the message. For information about how AstraZeneca UK Limited and its affiliates may process information, personal data and monitor communications, please see our privacy notice at www.astrazeneca.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From david_johnson at brown.edu Thu May 31 19:21:42 2018 From: david_johnson at brown.edu (David Johnson) Date: Thu, 31 May 2018 14:21:42 -0400 Subject: [gpfsug-discuss] recommendations for gpfs 5.x GUI and perf/health monitoring collector nodes Message-ID: We are planning to bring up the new ZIMon tools on our 450+ node cluster, and need to purchase new nodes to run the collector federation and GUI function on. What would you choose as a platform for this? ? memory size? ? local disk space ? SSD? shared? ? net attach ? 10Gig? 25Gig? IB? ? CPU horse power ? single or dual socket? I think I remember somebody in Cambridge UG meeting saying 150 nodes per collector as a rule of thumb, so we?re guessing a federation of 4 nodes would do it. Does this include the GUI host(s) or are those separate? Finally, we?re still using client/server based licensing model, do these nodes count as clients? Thanks, ? ddj Dave Johnson Brown University