From Robert.Oesterlin at nuance.com Wed May 1 14:35:21 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 1 May 2019 13:35:21 +0000 Subject: [gpfsug-discuss] PSA: Room Reservations for SC19 are now open Message-ID: It may be 6 months away, but SC19 room reservations fill fast! If you're thinking about going, reserve a room - no cost to do so for most hotels. You don't need to register to hold a room. We'll have a user group meeting on Sunday afternoon 11/17. https://sc19.supercomputing.org/attend/housing/ Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Wed May 1 16:22:54 2019 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Wed, 1 May 2019 15:22:54 +0000 Subject: [gpfsug-discuss] PSA: Room Reservations for SC19 are now open In-Reply-To: References: Message-ID: Or, for anyone who has ever seen an IBM talk, this is a statement of intent and is not a binding commitment to run the user group on the Sunday... :-) Simon -------- Original Message -------- From: "Robert.Oesterlin at nuance.com" > Date: Wed, 1 May 2019, 14:50 To: gpfsug main discussion list > Subject: [gpfsug-discuss] PSA: Room Reservations for SC19 are now open It may be 6 months away, but SC19 room reservations fill fast! If you're thinking about going, reserve a room - no cost to do so for most hotels. You don't need to register to hold a room. We'll have a user group meeting on Sunday afternoon 11/17. https://sc19.supercomputing.org/attend/housing/ Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From TROPPENS at de.ibm.com Mon May 6 14:19:26 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Mon, 6 May 2019 15:19:26 +0200 Subject: [gpfsug-discuss] Informal Social Gathering - Tue May 7th Message-ID: Some folks asked me about the usual informal pre-event gathering for those arriving early. Simon sent details via Eventbrite, but it seems that this was easy to miss. As in the past, a few of us usually meet up for an informal gathering the evening before (7th May). (Bring your own money!). We've booked a few tables for this, but please drop a note to me if you plan to attend: Tuesday May 7th, 7pm - 9:30pm The White Hart, 29 Cornwall Road, London, SE1 9TJ www.thewhitehartwaterloo.co.uk (Reservation for "Spectrum Scale User Group") -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandeep.patil at in.ibm.com Tue May 7 10:31:35 2019 From: sandeep.patil at in.ibm.com (Sandeep Ramesh) Date: Tue, 7 May 2019 15:01:35 +0530 Subject: [gpfsug-discuss] Spectrum Scale Cyber Security Survey // Gentle Reminder Message-ID: Thank You to all who responded and Gentle Reminder to others. The survey will close on 10th May 2019 Spectrum Scale Cyber Security Survey https://www.surveymonkey.com/r/9ZNCZ75 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.childs at qmul.ac.uk Tue May 7 15:35:26 2019 From: p.childs at qmul.ac.uk (Peter Childs) Date: Tue, 7 May 2019 14:35:26 +0000 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL In-Reply-To: References: Message-ID: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk> On Sat, 2019-04-06 at 23:50 +0200, Michal Zacek wrote: Hello, we decided to convert NFS4 acl to POSIX (we need to share the same data between SMB, NFS and GPFS clients), so I created a script to convert NFS4 to posix ACL. It is very simple, first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that conversion to posix acl has taken more than 2TB of metadata space. There are about one hundred million files on the GPFS filesystem. Is this expected behavior? Thanks, Michal Example of NFS4 acl: #NFSv4 ACL #owner:root #group:root special:owner@:rwx-:allow (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED group:ag_cud_96_lab:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED group:ag_cud_96_lab_ro:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED converted 
to posix acl: # owner: root # group: root user::rwx group::rwx mask::rwx other::--- default:user::rwx default:group::rwx default:mask::rwx default:other::--- group:ag_cud_96_lab:rwx default:group:ag_cud_96_lab:rwx group:ag_cud_96_lab_ro:r-x default:group:ag_cud_96_lab_ro:r-x _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss I've been trying to get my head round acls, with the plan to implement Cluster Export Services SMB rather than roll your own SMB. I'm not sure that plan is going to work Michal, although it might if you're not using the Cluster Export Services version of SMB. Put simply, if you're running Cluster Export Services SMB you need to set ACLs in Spectrum Scale to "nfs4"; we currently have it set to "all" and it won't let you export the shares until you change it. Currently I'm still testing, and have had to write a change to go the other way. If you're using Linux kernel NFS, that uses POSIX ACLs; however, CES NFS uses Ganesha, which uses NFS4 ACLs correctly. It gets slightly more annoying as nfs4-setfacl does not work with Spectrum Scale and you have to use mmputacl, which has no recursive flag; I even found an IBM article from a few years ago saying the best way to set acls is to use find, and a temporary file..... The other workaround they suggest is to update acls from Windows or NFS to get them right. One thing I think may happen if you do as you've suggested is that you will break any acls under Samba badly. I think the other reason that command is taking up more space than expected is that you're giving files acls that never had them to start with. 
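As a sketch, the find-plus-temporary-file approach mentioned above looks something like this (all paths here are made up for illustration; files and directories need separate templates, since inherit entries only make sense on directories):

```
# Sketch only: apply a reference NFSv4 ACL across a tree with mmgetacl/mmputacl.
# /gpfs/vol1/lab and the template file names are hypothetical examples.
mmgetacl -o /tmp/dir_acl.tmpl /gpfs/vol1/lab
mmgetacl -o /tmp/file_acl.tmpl /gpfs/vol1/lab/somefile
find /gpfs/vol1/lab -type d -exec mmputacl -i /tmp/dir_acl.tmpl {} \;
find /gpfs/vol1/lab -type f -exec mmputacl -i /tmp/file_acl.tmpl {} \;
```

Slow for a hundred million files, but it avoids the missing recursive flag.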
I would love someone to say that I'm wrong, as changing our acl setting is going to be a pain, as while we don't make a lot of use of them we make enough that having to use nfs4 acls all the time is going to be a pain. -- Peter Childs ITS Research Storage Queen Mary, University of London -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 7 16:16:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 7 May 2019 11:16:52 -0400 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL In-Reply-To: References: Message-ID: 2TB of extra metadata space for 100M files with ACLS?! I think that would be 20KB per file! Does seem there's some mistake here. Perhaps 2GB? or 20GB? I don't see how we get to 2 TeraBytes! ALSO, IIRC GPFS is supposed to use an ACL scheme where identical ACLs are stored once and each file with the same ACL just has a pointer to that same ACL. So no matter how many files have a particular ACL, you only "pay" once... An ACL is stored more compactly than its printed format, so I'd guess your ordinary ACL with a few users and groups would be less than 200 bytes. From: Michal Zacek Hello, we decided to convert NFS4 acl to POSIX (we need to share the same data between SMB, NFS and GPFS clients), so I created a script to convert NFS4 to posix ACL. It is very simple, first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that conversion to posix acl has taken more than 2TB of metadata space. There are about one hundred million files on the GPFS filesystem. Is this expected behavior? Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jonathan.buzzard at strath.ac.uk Tue May 7 17:14:49 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 07 May 2019 17:14:49 +0100 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL In-Reply-To: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk> References: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk> Message-ID: On Tue, 2019-05-07 at 14:35 +0000, Peter Childs wrote: [SNIP] > It gets slightly more annoying as nfs4-setfacl does not work with > Spectrum Scale and you have to use mmputacl which has no recursive > flag, I even found an IBM article from a few years ago saying the best > way to set acls is to use find, and a temporary file..... The other > workaround they suggest is to update acls from windows or nfs to get > them right. > I am working on making my solution to that production ready. I decided, after doing a proof of concept with the Linux nfs4_[get|set]facl commands, that using the FreeBSD getfacl/setfacl commands as a basis would be better, as it could do both POSIX and NFSv4 ACLs from the same program. Note the initial version will be something of a bodge where we translate between the existing program's representation of the ACL and the GPFS version as we read/write the ACLs. Longer term the code will need refactoring to use the GPFS structs throughout, I feel. Progress depends on my spare time. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. 
G4 0NG From Robert.Oesterlin at nuance.com Wed May 8 15:29:57 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 8 May 2019 14:29:57 +0000 Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups Message-ID: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com> Reference: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_configcesprotocolservipadd.htm I have 3 CES servers with IP addresses: Node1 10.30.43.14 (netmask 255.255.255.224) export IP 10.30.43.25 Node2 10.30.43.24 (netmask 255.255.255.224) export IP 10.30.43.27 Node3 10.30.43.133 (netmask 255.255.255.224) export IP 10.30.43.135 Which means node 3 is on a different vlan. I want to assign export addresses to them and keep the export IPs on the correct vlan. This looks like it can be done with groups, but I'm not sure if I have the grouping right. I was considering the following: mmces address add --ces-ip 10.30.43.25 --ces-group vlan431 mmces address add --ces-ip 10.30.43.27 --ces-group vlan431 mmces address add --ces-ip 10.30.43.135 --ces-group vlan435 Which should mean nodes in group "vlan431" will get IPs 10.30.43.25, 10.30.43.27 and the node in group "vlan435" will get IP 10.30.43.135 (and it will remain unassigned if that node goes down) Do I have this right? Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From MDIETZ at de.ibm.com Wed May 8 16:58:59 2019 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Wed, 8 May 2019 17:58:59 +0200 Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups In-Reply-To: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com> References: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com> Message-ID: Hi Bob, you also need to specify which ces groups a node can host: mmchnode --ces-group vlan431 -N Node1,Node2 mmchnode --ces-group vlan435 -N Node3 Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: "Oesterlin, Robert" To: gpfsug main discussion list Date: 08/05/2019 16:31 Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups Sent by: gpfsug-discuss-bounces at spectrumscale.org Reference: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_configcesprotocolservipadd.htm I have a 3 CES servers with IP addresses: Node1 10.30.43.14 (netmask 255.255.255.224) export IP 10.30.43.25 Node2 10.30.43.24 (netmask 255.255.255.224) export IP 10.30.43.27 Node3 10.30.43.133 (netmask 255.255.255.224) export IP 10.30.43.135 Which means node 3 is on a different vlan. I want to assign export addresses to them and keep the export IPs on the correct vlan. This looks like it can be done with groups, but I?m not sure if I have the grouping right. 
I was considering the following: mmces address add --ces-ip 10.30.43.25 --ces-group vlan431 mmces address add --ces-ip 10.30.43.27 --ces-group vlan431 mmces address add --ces-ip 10.30.43.135 --ces-group vlan435 Which should mean nodes in group ?vlan431? will get IPs 10.30.43.25,10.30.43.27 and the node in group ?vlan435? will get IP 10.30.43.135 (and will remain unassigned if that node goes down) Do I have this right? Bob Oesterlin Sr Principal Storage Engineer, Nuance _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=9dCEbNr27klWay2AcOfvOE1xq50K-CyRUu4qQx4HOlk&m=P11oXJcKzIOkcqnAehRbMinQv-wJOXianaA2njslyC8&s=kxOMu99ZmGV7qT7PBewEhVv1Mb5ry2WgBDXwJmJPCvI&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From xhejtman at ics.muni.cz Wed May 8 17:03:59 2019 From: xhejtman at ics.muni.cz (Lukas Hejtmanek) Date: Wed, 8 May 2019 18:03:59 +0200 Subject: [gpfsug-discuss] gpfs and device number In-Reply-To: References: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Message-ID: <20190508160359.j4tzg3wpo3cnmp6y@ics.muni.cz> Hi, I use fsid=0 (having one export). It seems there is some incompatibility between gpfs and redhat 3.10.0-957. We have gpfs 5.0.2-1, I can see that 5.0.2-2 is tested. So maybe it is fixed in later gpfs versions. On Sat, Apr 27, 2019 at 10:37:48PM +0300, Tomer Perry wrote: > Hi, > > Please use the fsid option in /etc/exports ( man exports and: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adm_nfslin.htm > ) > Also check > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adv_cnfs.htm > in case you want HA with kernel NFS. 
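For reference, the fsid option pins the filehandle identity per export in /etc/exports, so failover no longer depends on whatever device number GPFS happens to get at mount time. A minimal sketch (the path and fsid value are examples only; with a single export, fsid=0 as mentioned above also works):

```
# /etc/exports - use the same fsid for the same export on every NFS server
/gpfs/vol1  *(rw,sync,fsid=101)
```

The number is arbitrary but must match across all servers participating in failover, otherwise clients still see stale filehandles after a takeover.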
> > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: Lukas Hejtmanek > To: gpfsug-discuss at spectrumscale.org > Date: 26/04/2019 15:37 > Subject: [gpfsug-discuss] gpfs and device number > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello, > > I noticed that from time to time, device id of a gpfs volume is not same > across whole gpfs cluster. > > [root at kat1 ~]# stat /gpfs/vol1/ > File: '/gpfs/vol1/' > Size: 262144 Blocks: 512 IO Block: 262144 > directory > Device: 28h/40d Inode: 3 > > [root at kat2 ~]# stat /gpfs/vol1/ > File: '/gpfs/vol1/' > Size: 262144 Blocks: 512 IO Block: 262144 > directory > Device: 2bh/43d Inode: 3 > > [root at kat3 ~]# stat /gpfs/vol1/ > File: '/gpfs/vol1/' > Size: 262144 Blocks: 512 IO Block: 262144 > directory > Device: 2ah/42d Inode: 3 > > this is really bad for kernel NFS as it uses device id for file handles > thus > NFS failover leads to nfs stale handle error. > > Is there a way to force a device number? > > -- > Lukáš Hejtmánek > > Linux Administrator only because > Full Time Multitasking Ninja > is not an official job title > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Lukáš Hejtmánek Linux Administrator only because Full Time Multitasking Ninja is not an official job title From stijn.deweirdt at ugent.be Thu May 9 15:12:10 2019 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 9 May 2019 16:12:10 +0200 Subject: [gpfsug-discuss] advanced filecache math Message-ID: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> hi all, we are looking into some memory issues with gpfs 5.0.2.2, and found following in mmfsadm dump fs: > fileCacheLimit 1000000 desired 1000000 ... > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) the limit is 1M (we configured that), however, the fileCacheMem mentions 11.7M? this is also reported right after a mmshutdown/startup. how do these 2 relate (again?)? many thanks, stijn From Achim.Rehor at de.ibm.com Thu May 9 15:34:31 2019 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 9 May 2019 16:34:31 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 7182 bytes Desc: not available URL: From stijn.deweirdt at ugent.be Thu May 9 15:38:53 2019 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 9 May 2019 16:38:53 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> Message-ID: <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> hi achim, > you just misinterpreted the term fileCacheLimit. > This is not in byte, but specifies the maxFilesToCache setting : i understand that, but how does the fileCacheLimit relate to the fileCacheMem number? 
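for scale, the two figures in that dump line can be cross-checked with plain arithmetic (nothing GPFS-specific here, just the numbers quoted above):

```python
# Figures quoted from the "mmfsadm dump fs" output above.
entries = 11_718_554      # object count shown in the fileCacheMem line
bytes_per_entry = 3352    # inode size 512 + 2840 bytes of overhead
limit = 1_000_000         # the configured maxFilesToCache (fileCacheLimit)

total_gib = entries * bytes_per_entry / 2**30
ratio = entries / limit
print(f"~{total_gib:.1f} GiB cached, {ratio:.1f}x the configured limit")
```

which matches the reported 38359956 k (about 36.6 GiB) and shows the cache tracking nearly 12x more objects than the configured limit.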
(we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we are looking for large numbers that might explain wtf is going on (pardon my french ;) stijn > > UMALLOC limits: > bufferDescLimit 40000 desired 40000 > fileCacheLimit 4000 desired 4000 <=== mFtC > statCacheLimit 1000 desired 1000 <=== mSC > diskAddrBuffLimit 200 desired 200 > > # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" > maxFilesToCache 4000 > maxStatCache 1000 > > Mit freundlichen Gr??en / Kind regards > > *Achim Rehor* > > -------------------------------------------------------------------------------- > Software Technical Support Specialist AIX/ Emea HPC Support > IBM Certified Advanced Technical Expert - Power Systems with AIX > TSCC Software Service, Dept. 7922 > Global Technology Services > -------------------------------------------------------------------------------- > Phone: +49-7034-274-7862 IBM Deutschland > E-Mail: Achim.Rehor at de.ibm.com Am Weiher 24 > 65451 Kelsterbach > Germany > > -------------------------------------------------------------------------------- > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz, > Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB > 14562 WEEE-Reg.-Nr. DE 99369940 > > > > > > > From: Stijn De Weirdt > To: gpfsug main discussion list > Date: 09/05/2019 16:21 > Subject: [gpfsug-discuss] advanced filecache math > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > -------------------------------------------------------------------------------- > > > > hi all, > > we are looking into some memory issues with gpfs 5.0.2.2, and found > following in mmfsadm dump fs: > > > fileCacheLimit 1000000 desired 1000000 > ... 
> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) > > the limit is 1M (we configured that), however, the fileCacheMem mentions > 11.7M? > > this is also reported right after a mmshutdown/startup. > > how do these 2 relate (again?)? > > many thanks, > > stijn > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From stijn.deweirdt at ugent.be Thu May 9 15:48:13 2019 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 9 May 2019 16:48:13 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> Message-ID: <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> seems like we are suffering from http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737 as these are ces nodes, we suspected something was wrong with the caches, but it looks like a memleak instead. sorry for the noise (as usual you find the solution right after sending the mail ;) stijn On 5/9/19 4:38 PM, Stijn De Weirdt wrote: > hi achim, > >> you just misinterpreted the term fileCacheLimit. >> This is not in byte, but specifies the maxFilesToCache setting : > i understand that, but how does the fileCacheLimit relate to the > fileCacheMem number? 
> > > > (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we > are looking for large numbers that might explain wtf is going on > (pardon my french ;) > > stijn > >> >> UMALLOC limits: >> bufferDescLimit 40000 desired 40000 >> fileCacheLimit 4000 desired 4000 <=== mFtC >> statCacheLimit 1000 desired 1000 <=== mSC >> diskAddrBuffLimit 200 desired 200 >> >> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" >> maxFilesToCache 4000 >> maxStatCache 1000 >> >> Mit freundlichen Gr??en / Kind regards >> >> *Achim Rehor* >> >> -------------------------------------------------------------------------------- >> Software Technical Support Specialist AIX/ Emea HPC Support >> IBM Certified Advanced Technical Expert - Power Systems with AIX >> TSCC Software Service, Dept. 7922 >> Global Technology Services >> -------------------------------------------------------------------------------- >> Phone: +49-7034-274-7862 IBM Deutschland >> E-Mail: Achim.Rehor at de.ibm.com Am Weiher 24 >> 65451 Kelsterbach >> Germany >> >> -------------------------------------------------------------------------------- >> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter >> Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz, >> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt >> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB >> 14562 WEEE-Reg.-Nr. DE 99369940 >> >> >> >> >> >> >> From: Stijn De Weirdt >> To: gpfsug main discussion list >> Date: 09/05/2019 16:21 >> Subject: [gpfsug-discuss] advanced filecache math >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> -------------------------------------------------------------------------------- >> >> >> >> hi all, >> >> we are looking into some memory issues with gpfs 5.0.2.2, and found >> following in mmfsadm dump fs: >> >> > fileCacheLimit 1000000 desired 1000000 >> ... 
>> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) >> >> the limit is 1M (we configured that), however, the fileCacheMem mentions >> 11.7M? >> >> this is also reported right after a mmshutdown/startup. >> >> how do these 2 relate (again?)? >> >> mnay thanks, >> >> stijn >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From Achim.Rehor at de.ibm.com Thu May 9 17:52:14 2019 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 9 May 2019 18:52:14 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be><173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> Message-ID: An HTML attachment was scrubbed... URL: From oehmes at gmail.com Thu May 9 18:24:42 2019 From: oehmes at gmail.com (Sven Oehme) Date: Thu, 9 May 2019 18:24:42 +0100 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> Message-ID: Unfortunate more complicated :) The consumption here is an estimate based on 512b inodes, which no newly created filesystem has as all new default to 4k. So if you have 4k inodes you could easily need 2x of the estimated value. Then there are extended attributes, also not added here, etc . 
So don't take this number as usage, it's really just a rough estimate. Sven On Thu, May 9, 2019, 5:53 PM Achim Rehor wrote: > Sorry for my fast ( and not well thought) answer, before. You obviously > are correct, there is no relation between the setting of maxFilesToCache, > and the > > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + > 2840) > > usage. it is rather a statement of how many metadata may fit in the > remaining structures outside the pagepool. this value does not change at > all, when you modify your mFtC setting. > > There is a really good presentation by Tomer Perry on the User Group > meetings, referring about memory footprint of GPFS under various conditions. > > In your case, you may very well hit the CES nodes memleak you just pointed > out. > > Sorry for my hasty reply earlier ;) > > Achim > > > > From: Stijn De Weirdt > To: gpfsug-discuss at spectrumscale.org > Date: 09/05/2019 16:48 > Subject: Re: [gpfsug-discuss] advanced filecache math > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > seems like we are suffering from > http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737 > > as these are ces nodes, we susepcted something wrong the caches, but it > looks like a memleak instead. > > sorry for the noise (as usual you find the solution right after sending > the mail ;) > > stijn > > On 5/9/19 4:38 PM, Stijn De Weirdt wrote: > > hi achim, > > > >> you just misinterpreted the term fileCacheLimit. > >> This is not in byte, but specifies the maxFilesToCache setting : > > i understand that, but how does the fileCacheLimit relate to the > > fileCacheMem number? 
> > > > > > > > (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we > > are looking for large numbers that might explain wtf is going on > > (pardon my french ;) > > > > stijn > > > >> > >> UMALLOC limits: > >> bufferDescLimit 40000 desired 40000 > >> fileCacheLimit 4000 desired 4000 <=== mFtC > >> statCacheLimit 1000 desired 1000 <=== mSC > >> diskAddrBuffLimit 200 desired 200 > >> > >> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" > >> maxFilesToCache 4000 > >> maxStatCache 1000 > >> > >> Mit freundlichen Gr??en / Kind regards > >> > >> *Achim Rehor* > >> > >> > -------------------------------------------------------------------------------- > >> Software Technical Support Specialist AIX/ Emea HPC Support > > >> IBM Certified Advanced Technical Expert - Power Systems with AIX > >> TSCC Software Service, Dept. 7922 > >> Global Technology Services > >> > -------------------------------------------------------------------------------- > >> Phone: +49-7034-274-7862 IBM > Deutschland > >> E-Mail: Achim.Rehor at de.ibm.com Am > Weiher 24 > >> 65451 Kelsterbach > >> Germany > >> > >> > -------------------------------------------------------------------------------- > >> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > >> Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, > Stefan Lutz, > >> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > >> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht > Stuttgart, HRB > >> 14562 WEEE-Reg.-Nr. 
DE 99369940 > >> > >> > >> > >> > >> > >> > >> From: Stijn De Weirdt > >> To: gpfsug main discussion list > >> Date: 09/05/2019 16:21 > >> Subject: [gpfsug-discuss] advanced filecache math > >> Sent by: gpfsug-discuss-bounces at spectrumscale.org > >> > >> > -------------------------------------------------------------------------------- > >> > >> > >> > >> hi all, > >> > >> we are looking into some memory issues with gpfs 5.0.2.2, and found > >> following in mmfsadm dump fs: > >> > >> > fileCacheLimit 1000000 desired 1000000 > >> ... > >> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size > 512 + 2840) > >> > >> the limit is 1M (we configured that), however, the fileCacheMem mentions > >> 11.7M? > >> > >> this is also reported right after a mmshutdown/startup. > >> > >> how do these 2 relate (again?)? > >> > >> mnay thanks, > >> > >> stijn > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jjdoherty at yahoo.com Thu May 9 22:07:55 2019 From: jjdoherty at yahoo.com (Jim Doherty) Date: Thu, 9 May 2019 21:07:55 +0000 (UTC) Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> Message-ID: <881377935.34017.1557436075166@mail.yahoo.com> A couple of observations on memory: a maxFilesToCache object takes anywhere from 6-10K, so 1 million =~ 6-10 Gig. Memory utilized in the mmfsd comes from either the pagepool, the shared memory segment used by MFTC objects, the token memory segment used to track MFTC objects, and (newer) memory used by AFM. If the memory resources are in the mmfsd address space then this will show in the RSS size of the mmfsd. Ignore the VMM size; since the glibc change a while back to allocate a heap for each thread, VMM has become an imaginary number for a large multi-threaded application. There have been some memory leaks fixed in Ganesha that will be in 4.2.3 PTF15, which is available on fixcentral. Jim Doherty On Thursday, May 9, 2019, 1:25:03 PM EDT, Sven Oehme wrote: Unfortunate more complicated :) The consumption here is an estimate based on 512b inodes, which no newly created filesystem has as all new default to 4k. So if you have 4k inodes you could easily need 2x of the estimated value. Then there are extended attributes, also not added here, etc. So don't take this number as usage, it's really just a rough estimate. Sven On Thu, May 9, 2019, 5:53 PM Achim Rehor wrote: Sorry for my fast ( and not well thought) answer, before. You obviously are correct, there is no relation between the setting of maxFilesToCache, and the fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) usage. 
it is rather a statement of how many metadata objects may fit in the remaining structures outside the pagepool. This value does not change at all when you modify your mFtC setting.

There is a really good presentation by Tomer Perry from the User Group meetings about the memory footprint of GPFS under various conditions. In your case, you may very well be hitting the CES nodes memleak you just pointed out.

Sorry for my hasty reply earlier ;)

Achim

From: Stijn De Weirdt
To: gpfsug-discuss at spectrumscale.org
Date: 09/05/2019 16:48
Subject: Re: [gpfsug-discuss] advanced filecache math
Sent by: gpfsug-discuss-bounces at spectrumscale.org

seems like we are suffering from http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737

as these are ces nodes, we suspected something wrong with the caches, but it looks like a memleak instead.

sorry for the noise (as usual you find the solution right after sending the mail ;)

stijn

On 5/9/19 4:38 PM, Stijn De Weirdt wrote:
> hi achim,
>
>> you just misinterpreted the term fileCacheLimit.
>> This is not in bytes, but specifies the maxFilesToCache setting:
> i understand that, but how does the fileCacheLimit relate to the
> fileCacheMem number?
>
> (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we
> are looking for large numbers that might explain wtf is going on
> (pardon my french ;)
>
> stijn
>
>> UMALLOC limits:
>>     bufferDescLimit    40000 desired    40000
>>     fileCacheLimit      4000 desired     4000   <=== mFtC
>>     statCacheLimit      1000 desired     1000   <=== mSC
>>     diskAddrBuffLimit    200 desired      200
>>
>> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache"
>>    maxFilesToCache 4000
>>    maxStatCache 1000
>>
>> Mit freundlichen Grüßen / Kind regards
>>
>> *Achim Rehor*
>>
>> --------------------------------------------------------------------------------
>> Software Technical Support Specialist AIX / EMEA HPC Support 
>> IBM Certified Advanced Technical Expert - Power Systems with AIX
>> TSCC Software Service, Dept. 7922
>> Global Technology Services
>> --------------------------------------------------------------------------------
>> Phone: +49-7034-274-7862             IBM Deutschland
>> E-Mail: Achim.Rehor at de.ibm.com       Am Weiher 24
>>                                      65451 Kelsterbach
>>                                      Germany
>> --------------------------------------------------------------------------------
>> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
>> Geschäftsführung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz,
>> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt
>> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB
>> 14562 WEEE-Reg.-Nr. DE 99369940
>>
>> From: Stijn De Weirdt
>> To: gpfsug main discussion list
>> Date: 09/05/2019 16:21
>> Subject: [gpfsug-discuss] advanced filecache math
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>>
>> --------------------------------------------------------------------------------
>>
>> hi all,
>>
>> we are looking into some memory issues with gpfs 5.0.2.2, and found the
>> following in mmfsadm dump fs:
>>
>> >    fileCacheLimit     1000000 desired  1000000
>> ...
>> >    fileCacheMem     38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840)
>>
>> the limit is 1M (we configured that), however, the fileCacheMem mentions
>> 11.7M?
>>
>> this is also reported right after a mmshutdown/startup.
>>
>> how do these 2 relate (again?)? 
>> >> many thanks,
>> >> stijn
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anobre at br.ibm.com Thu May 9 22:51:37 2019
From: anobre at br.ibm.com (Anderson Ferreira Nobre)
Date: Thu, 9 May 2019 21:51:37 +0000
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: <881377935.34017.1557436075166@mail.yahoo.com>
References: <881377935.34017.1557436075166@mail.yahoo.com>, <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
Message-ID: 

An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at bham.ac.uk Mon May 13 14:11:06 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Mon, 13 May 2019 13:11:06 +0000
Subject: [gpfsug-discuss] IO-500 and POWER9
Message-ID: 

Hi,

I was wondering if anyone has done anything with the IO-500 and POWER9 systems at all? One of the benchmarks (IOR-HARD-READ) always fails. 
Having Slack'd the developers, they said: "It looks like data is not synchronized" and "maybe a setting in GPFS is missing, e.g. locking, synchronization, ...".

Now I didn't think there was any way to disable locking in GPFS. We tried some different byte settings for the read and this made the error go away, which apparently indicates "locking issue -> false sharing of blocks". We found that 1 or 2 nodes = OK. > 2 nodes breaks with 2ppn, > 2 nodes is OK with 1ppn. (We also got some fsstruct errors when running the mdtests - I have a PMR open for that.)

Interestingly I ran the test on a bunch of x86 systems, and that ran fine.

So, has anyone got any POWER9 (ac922) systems they could try, to see if the benchmarks work for them (just running the ior_hard tests is fine)? Or does anyone have any suggestions?

These are all running Red Hat 7.5 and 5.0.2.3 code BTW.

Thanks

Simon

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From A.Turner at lboro.ac.uk Tue May 14 09:47:12 2019
From: A.Turner at lboro.ac.uk (Aaron Turner)
Date: Tue, 14 May 2019 08:47:12 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 

Scenario:

* one set of JBODS
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________

-------------- next part --------------
An HTML attachment was scrubbed... 
URL: 

From Renar.Grunenberg at huk-coburg.de Tue May 14 09:58:07 2019
From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar)
Date: Tue, 14 May 2019 08:58:07 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
In-Reply-To: References: Message-ID: 

Hallo Aaron,

the granularity for handling storage capacity in Scale is the disk, assigned during creation of the filesystem. These disks are created as NSDs that represent your physical LUNs. Per fs there is a unique set of NSDs == disks per filesystem. What you want is possible, no problem.

Regards Renar

Renar Grunenberg
Abteilung Informatik - Betrieb
HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: Renar.Grunenberg at huk-coburg.de
Internet: www.huk.de
________________________________
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
________________________________
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet.

This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden. 
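To sketch what Renar describes (a hedged example; every device path, NSD name, and server name below is an invented placeholder, not something from this thread), the disk-to-filesystem assignment is expressed in an NSD stanza file passed to mmcrnsd, and each filesystem is then created from only its own NSDs:

```
# hypothetical stanza file for: mmcrnsd -F stanzas.txt
%nsd: device=/dev/mapper/diskA0 nsd=fsA_nsd0 servers=nsdsrv1,nsdsrv2 usage=dataAndMetadata failureGroup=1
%nsd: device=/dev/mapper/diskA1 nsd=fsA_nsd1 servers=nsdsrv2,nsdsrv1 usage=dataAndMetadata failureGroup=2
%nsd: device=/dev/mapper/diskB0 nsd=fsB_nsd0 servers=nsdsrv1,nsdsrv2 usage=dataAndMetadata failureGroup=1
%nsd: device=/dev/mapper/diskB1 nsd=fsB_nsd1 servers=nsdsrv2,nsdsrv1 usage=dataAndMetadata failureGroup=2
```

File system A would then be built with only the fsA_* NSDs and file system B with only the fsB_* NSDs, so the b-disks can later be deleted or destroyed without touching A.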
________________________________
Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Aaron Turner
Gesendet: Dienstag, 14. Mai 2019 10:47
An: gpfsug-discuss at spectrumscale.org
Betreff: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODS
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at bham.ac.uk Tue May 14 10:08:28 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Tue, 14 May 2019 09:08:28 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 

When you create the file-system, you create NSD devices (on physical disks, usually LUNs), and then assign these devices as disks to a file-system. This sounds straightforward. Note GPFS isn't really intended for JBODs unless you have GNR code.

Simon

From: on behalf of Aaron Turner
Reply-To: "gpfsug-discuss at spectrumscale.org"
Date: Tuesday, 14 May 2019 at 09:47
To: "gpfsug-discuss at spectrumscale.org"
Subject: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODS
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... 
bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From abeattie at au1.ibm.com Tue May 14 10:17:33 2019
From: abeattie at au1.ibm.com (Andrew Beattie)
Date: Tue, 14 May 2019 09:17:33 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
In-Reply-To: References: Message-ID: 

An HTML attachment was scrubbed...
URL: 

From A.Turner at lboro.ac.uk Tue May 14 14:13:15 2019
From: A.Turner at lboro.ac.uk (Aaron Turner)
Date: Tue, 14 May 2019 13:13:15 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 9
In-Reply-To: References: Message-ID: 

Thanks, Simon,

This is what I thought was the case, and in fact I couldn't see that it was not. In reality there -are- JBODs involved, so that was a somewhat hypothetical use case initially. 
Regards _______?_______________________________ Aaron Turner Senior IT Services Specialist in High Performance Computing Loughborough University a.turner at lboro.ac.uk 01509 226185 ______________________________________ ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org Sent: 14 May 2019 12:00 To: gpfsug-discuss at spectrumscale.org Subject: gpfsug-discuss Digest, Vol 88, Issue 9 Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Re: Identifiable groups of disks? (Simon Thompson) 2. Re: Identifiable groups of disks? (Andrew Beattie) ---------------------------------------------------------------------- Message: 1 Date: Tue, 14 May 2019 09:08:28 +0000 From: Simon Thompson To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Identifiable groups of disks? Message-ID: Content-Type: text/plain; charset="utf-8" When you create the file-system, you create NSD devices (on physical disks ? usually LUNs), and then assign these devices as disks to a file-system. This sounds straight forwards. Note GPFS isn?t really intedned for JBODs unless you have GNR code. Simon From: on behalf of Aaron Turner Reply-To: "gpfsug-discuss at spectrumscale.org" Date: Tuesday, 14 May 2019 at 09:47 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Identifiable groups of disks? Scenario: * one set of JBODS * want to create two GPFS file systems * want to ensure that file system A uses physical disks a0, a1... 
an-1 and file system B uses physical disks b0, b1... bn-1 * want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation * Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1 Is this possible in GPFS? Regards _______?_______________________________ Aaron Turner Senior IT Services Specialist in High Performance Computing Loughborough University a.turner at lboro.ac.uk 01509 226185 ______________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Tue, 14 May 2019 09:17:33 +0000 From: "Andrew Beattie" To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Identifiable groups of disks? Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 88, Issue 9 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 14 18:00:42 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 14 May 2019 13:00:42 -0400 Subject: [gpfsug-discuss] Identifiable groups of disks? In-Reply-To: References: Message-ID: The simple answer is YES. I think the other replies are questioning whether you really want something different or more robust against failures. From: Aaron Turner To: "gpfsug-discuss at spectrumscale.org" Date: 05/14/2019 04:48 AM Subject: [EXTERNAL] [gpfsug-discuss] Identifiable groups of disks? Sent by: gpfsug-discuss-bounces at spectrumscale.org Scenario: one set of JBODS want to create two GPFS file systems want to ensure that file system A uses physical disks a0, a1... 
an-1 and file system B uses physical disks b0, b1... bn-1
want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From Philipp.Rehs at uni-duesseldorf.de Wed May 15 09:48:19 2019
From: Philipp.Rehs at uni-duesseldorf.de (Rehs, Philipp Helo)
Date: Wed, 15 May 2019 08:48:19 +0000
Subject: [gpfsug-discuss] Enforce ACLs
Message-ID: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de>

Hello,

we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/x-pkcs7-signature Size: 7077 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Wed May 15 10:13:30 2019 From: S.J.Thompson at bham.ac.uk (Simon Thompson) Date: Wed, 15 May 2019 09:13:30 +0000 Subject: [gpfsug-discuss] Enforce ACLs Message-ID: <8FA1923B-9903-4304-876B-2E492E968C52@bham.ac.uk> I *think* this behaviour depends on the file set setting .. 
Check what "--allow-permission-change" is set to for the file set. I think it needs to be "chmodAndUpdateAcl" Simon ?On 15/05/2019, 09:55, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Philipp.Rehs at uni-duesseldorf.de" wrote: Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? 
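A hedged sketch of the check Simon suggests (the filesystem and fileset names are placeholders; verify the option values against the mmchfileset man page for your release):

```
# Show the fileset attributes, including the permission-change flag
mmlsfileset gpfs01 myfileset -L

# If the flag is not chmodAndUpdateAcl, a chmod can replace the inherited
# NFSv4 ACL with plain mode bits; switching the fileset may help:
mmchfileset gpfs01 myfileset --allow-permission-change chmodAndUpdateAcl
```

Note this flag governs what happens on chmod; whether newly created subdirectories pick up the parent ACL at all is controlled by the DirInherit/FileInherit entries shown in the original post.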
Kind regards

Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern
Universitätsstraße 1
Gebäude 25.41 Raum 00.51
Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de

From jfosburg at mdanderson.org Wed May 15 11:42:42 2019
From: jfosburg at mdanderson.org (Fosburgh,Jonathan)
Date: Wed, 15 May 2019 10:42:42 +0000
Subject: [gpfsug-discuss] Enforce ACLs
In-Reply-To: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de>
References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de>
Message-ID: <73495e917ff74131bd0511c166f385fa@mdanderson.org>

I'm not 100% sure this is what it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both.

--
Jonathan Fosburgh
Principal Application Systems Analyst
IT Operations Storage Team
The University of Texas MD Anderson Cancer Center
(713) 745-9346

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo
Sent: Wednesday, May 15, 2019 3:48:19 AM
To: gpfsug-discuss at spectrumscale.org
Subject: [EXT] [gpfsug-discuss] Enforce ACLs

Hello,

we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. 
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From MDIETZ at de.ibm.com Wed May 15 12:14:40 2019
From: MDIETZ at de.ibm.com (Mathias Dietz)
Date: Wed, 15 May 2019 13:14:40 +0200
Subject: [gpfsug-discuss] Enforce ACLs
In-Reply-To: <73495e917ff74131bd0511c166f385fa@mdanderson.org>
References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org>
Message-ID: 

Jonathan is mostly right, except that the option is not in mmlsconfig but part of the filesystem configuration (mmlsfs/mmchfs):

# mmlsfs objfs -k
flag                value                    description
------------------- ------------------------ -----------------------------------
 -k                 nfs4                     ACL semantics in effect

Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From: "Fosburgh,Jonathan"
To: "gpfsug-discuss at spectrumscale.org"
Date: 15/05/2019 12:52
Subject: Re: [gpfsug-discuss] Enforce ACLs
Sent by: gpfsug-discuss-bounces at 
spectrumscale.org I'm not 100% sure this is that it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo Sent: Wednesday, May 15, 2019 3:48:19 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Enforce ACLs Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? 
Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=9dCEbNr27klWay2AcOfvOE1xq50K-CyRUu4qQx4HOlk&m=T_hndYqE7LOa07-SB6rtf9IPYJT3XiUhUHcCpwbwduM&s=1Xxw6UtKRGh1T4KLYgawTRpI_E_3jHdYnmAy_1rUSrg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 15 12:20:21 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 15 May 2019 12:20:21 +0100 Subject: [gpfsug-discuss] Enforce ACLs In-Reply-To: <73495e917ff74131bd0511c166f385fa@mdanderson.org> References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org> Message-ID: On Wed, 2019-05-15 at 10:42 +0000, Fosburgh,Jonathan wrote: > I'm not 100% sure this is that it is, but it is most likely your ACL > config. 
If you have to use the nfsv4 ACLs, check in mmlsconfig to > make sure you are only using nfsv4 ACLs. I think the options are > posix, nfsv4, and both. I would guess you are set to both. > I would say the same except the options are actually posix, nfsv4, samba and all and covered by mmlsfs,mmchfs not mmlsconfig. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jfosburg at mdanderson.org Wed May 15 12:24:31 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 15 May 2019 11:24:31 +0000 Subject: [gpfsug-discuss] [EXT] Re: Enforce ACLs In-Reply-To: References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org>, Message-ID: <43a4cc9e539a4e04b70eadf88c7d5457@mdanderson.org> Not bad for having been awake for only half an hour. ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Mathias Dietz Sent: Wednesday, May 15, 2019 6:14:40 AM To: gpfsug main discussion list Subject: [EXT] Re: [gpfsug-discuss] Enforce ACLs WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. 
Jonathan is mostly right, except that the option is not in mmlsconfig but part of the filesystem configuration (mmlsfs,mmchfs) # mmlsfs objfs -k flag value description ------------------- ------------------------ ----------------------------------- -k nfs4 ACL semantics in effect Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: "Fosburgh,Jonathan" To: "gpfsug-discuss at spectrumscale.org" Date: 15/05/2019 12:52 Subject: Re: [gpfsug-discuss] Enforce ACLs Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I'm not 100% sure this is that it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo Sent: Wednesday, May 15, 2019 3:48:19 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Enforce ACLs Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?

Kind regards

Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41
Raum 00.51
Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de

The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws.
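[Editor's note] One hedged workaround for Philipp's question, if inheritance is not taking effect at create time (often the `-k` mixed-semantics situation discussed in this thread): re-apply the parent directory's ACL to entries that already exist, using mmgetacl/mmputacl. The path below is hypothetical, and `DRYRUN=echo` (the default here) only prints the commands so they can be reviewed before running for real with `DRYRUN=` on a cluster.

```shell
# Sketch: push a parent directory's NFSv4 ACL down to its existing children.
# DRYRUN=echo previews the GPFS commands instead of executing them.
DRYRUN=${DRYRUN:-echo}
parent=/gpfs/fs0/projects        # hypothetical directory
aclfile=$(mktemp)                # empty in a dry run; filled by mmgetacl live

$DRYRUN mmgetacl -o "$aclfile" "$parent"

# Re-apply to everything below the parent. For directories you may also want
# the default ACL (mmputacl -d) so that future children inherit it.
find "$parent" -mindepth 1 2>/dev/null | while IFS= read -r entry; do
  $DRYRUN mmputacl -i "$aclfile" "$entry"
done

rm -f "$aclfile"
```

This is a batch fix-up for existing entries, not an enforcement mechanism; new objects still depend on the inherit flags and the filesystem's `-k` setting.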
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben.nickell at inl.gov  Thu May 16 17:01:21 2019
From: ben.nickell at inl.gov (Ben G. Nickell)
Date: Thu, 16 May 2019 16:01:21 +0000
Subject: [gpfsug-discuss] mmbuild problem
Message-ID:

First time poster, hopefully not a simple RTFM question; I've done some rudimentary googling. I'm not the GPFS guy, but we are having a problem building Spectrum Scale 5.0.2.0 on SUSE SLES 12 SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software?
uname -a Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux ./mmbuildgpl --build-package -------------------------------------------------------- mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019. -------------------------------------------------------- Verifying Kernel Header... kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13) module include dir = /lib/modules/4.12.14-95.13-default/build/include module build dir = /lib/modules/4.12.14-95.13-default/build kernel source dir = /usr/src/linux-4.12.14-95.13/include Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include Verifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? 
|| exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in 
cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:65:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function 'printInode': /usr/lpp/mmfs/src/gpl-linux/inode.c:136:3: error: aggregate value used where an integer was expected TRACE5(TRACE_VNODE, 3, TRCID_PRINTINODE_4, ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: At top level: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:2800:3: error: unknown type name 'wait_queue_t' wait_queue_t qwaiter; ^ /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: In function 'cxiWaitEventWait': /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3882:3: warning: passing argument 1 of 'init_waitqueue_entry' from incompatible pointer type [enabled by default] init_waitqueue_entry(&waitElement.qwaiter, current); ^ In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0, from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5, from /usr/lpp/mmfs/src/gpl-linux/dir.c:50, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/src/linux-4.12.14-95.13/include/linux/wait.h:78:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p) ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3883:3: warning: passing argument 2 of '__add_wait_queue' from incompatible pointer type [enabled by default] __add_wait_queue(&waitElement.qhead, &waitElement.qwaiter); ^ In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0, from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5, from /usr/lpp/mmfs/src/gpl-linux/dir.c:50, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/src/linux-4.12.14-95.13/include/linux/wait.h:153:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *' static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry) ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiStartIO': /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2474:13: error: 'struct bio' has no member named 'bi_bdev' bioP->bi_bdev = bdevP; ^ In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiCleanIO': /usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:81: error: 'struct bio' has no member named 'bi_bdev' _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains)); ^ /usr/lpp/mmfs/src/include/cxi/Trace.h:395:23: note: in definition of macro '_TRACE_MACRO' { _TR_BEFORE; _ktrc; KTRCOPTCODE; _TR_AFTER; } else NOOP ^ /usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:5: note: in expansion of macro '_TRACE3D'
_TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains)); ^ /usr/lpp/mmfs/src/include/cxi/Trace.h:432:26: note: in expansion of macro 'TRACE_TRCID_WAITIO_BDEVP_CALL' _TRACE_MACRO(_c, _l, TRACE_##id##_CALL) ^ /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2906:7: note: in expansion of macro 'TRACE3' TRACE3(TRACE_IO, 6, TRCID_WAITIO_BDEVP, ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2915:23: error: 'struct bio' has no member named 'bi_error' if (bcP->biop[i]->bi_error) ^ /usr/src/linux-4.12.14-95.13/scripts/Makefile.build:326: recipe for target '/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o' failed make[5]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1 /usr/src/linux-4.12.14-95.13/Makefile:1557: recipe for target '_module_/usr/lpp/mmfs/src/gpl-linux' failed make[4]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2 Makefile:152: recipe for target 'sub-make' failed make[3]: *** [sub-make] Error 2 Makefile:24: recipe for target '__sub-make' failed make[2]: *** [__sub-make] Error 2 make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' makefile:130: recipe for target 'modules' failed make[1]: *** [modules] Error 1 make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' makefile:148: recipe for target 'Modules' failed make: *** [Modules] Error 1 -------------------------------------------------------- mmbuildgpl: Building GPL module failed at Thu May 16 09:28:54 MDT 2019. -------------------------------------------------------- mmbuildgpl: Command failed. Examine previous error messages to determine cause.
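[Editor's note] The hard errors in this log ('wait_queue_t', 'bi_bdev', 'bi_error') are the classic signatures of kernel API changes that the shipped GPL layer predates, which is what the reply below confirms. A hedged triage sketch: grep a saved mmbuildgpl log for those markers before opening a support case. The log text is embedded here (taken from the output above) so the snippet is self-contained; on a real node you would feed it the captured build output.

```shell
# Scan a captured mmbuildgpl log for error markers that typically indicate
# the kernel is newer than this Scale release supports. Sample lines are
# copied from the build failure above.
log='error: aggregate value used where an integer was expected
error: unknown type name wait_queue_t
error: struct bio has no member named bi_bdev
error: struct bio has no member named bi_error'

# grep -Ec counts the matching lines.
match=$(printf '%s\n' "$log" | grep -Ec 'wait_queue_t|bi_bdev|bi_error')
if [ "$match" -gt 0 ]; then
  echo "likely kernel/Scale mismatch ($match API-change errors): check the Scale FAQ for the minimum supported release"
fi
```

With the sample log this finds three matching lines and prints the warning; a clean log produces no output.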
-- Ben Nickell -----
Idaho National Laboratory
High Performance Computing System Administrator
Desk: 208-526-4251 Mobile: 208-317-4259

From knop at us.ibm.com  Thu May 16 17:12:18 2019
From: knop at us.ibm.com (Felipe Knop)
Date: Thu, 16 May 2019 12:12:18 -0400
Subject: [gpfsug-discuss] mmbuild problem
In-Reply-To:
References:
Message-ID:

Ben,

According to the FAQ (https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3.

|--------+----------------------+----------------------+--------------------+--------------------|
| 12 SP4 | 4.12.14-95.3-default | 4.12.14-95.3-default | From V4.2.3.13 in  | From V4.2.3.13 in  |
|        |                      |                      | the 4.2 release    | the 4.2 release    |
|        |                      |                      |                    |                    |
|        |                      |                      | From V5.0.2.3 or   | From V5.0.2.3 or   |
|        |                      |                      | later in the 5.0   | later in the 5.0   |
|        |                      |                      | release            | release            |
|--------+----------------------+----------------------+--------------------+--------------------|

Felipe

----
Felipe Knop knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314

From: "Ben G. Nickell"
To: "gpfsug-discuss at spectrumscale.org"
Date: 05/16/2019 12:02 PM
Subject: [EXTERNAL] [gpfsug-discuss] mmbuild problem
Sent by: gpfsug-discuss-bounces at spectrumscale.org

First time poster, hopefully not a simple RTFM question, I've done some rudimentary googling. I'm not the GPFS guy, but Having a problem building Spectrum Scale 5.0.2.0 on Suse SLES SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software? uname -a Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux ./mmbuildgpl --build-package -------------------------------------------------------- mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019. -------------------------------------------------------- Verifying Kernel Header...
kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13) module include dir = /lib/modules/4.12.14-95.13-default/build/include module build dir = /lib/modules/4.12.14-95.13-default/build kernel source dir = /usr/src/linux-4.12.14-95.13/include Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include Verifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f 
-r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \ if [ $? -ne 0 ]; then \ exit 1;\ fi make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' LD /usr/lpp/mmfs/src/gpl-linux/built-in.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:65:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/inode.c: In function ?printInode?: /usr/lpp/mmfs/src/gpl-linux/inode.c:136:3: error: aggregate value used where an integer was expected TRACE5(TRACE_VNODE, 3, TRCID_PRINTINODE_4, ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: At top level: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:2800:3: error: unknown type name ?wait_queue_t? wait_queue_t qwaiter; ^ /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: In function ?cxiWaitEventWait?: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3882:3: warning: passing argument 1 of ?init_waitqueue_entry? from incompatible pointer type [enabled by default] init_waitqueue_entry(&waitElement.qwaiter, current); ^ In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0, from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5, from /usr/lpp/mmfs/src/gpl-linux/dir.c:50, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/src/linux-4.12.14-95.13/include/linux/wait.h:78:20: note: expected ?struct wait_queue_entry *? but argument is of type ?int *? 
static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p) ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3883:3: warning: passing argument 2 of ?__add_wait_queue? from incompatible pointer type [enabled by default] __add_wait_queue(&waitElement.qhead, &waitElement.qwaiter); ^ In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0, from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5, from /usr/lpp/mmfs/src/gpl-linux/dir.c:50, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/src/linux-4.12.14-95.13/include/linux/wait.h:153:20: note: expected ?struct wait_queue_entry *? but argument is of type ?int *? static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry) ^ In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function ?cxiStartIO?: /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2474:13: error: ?struct bio? has no member named ?bi_bdev? bioP->bi_bdev = bdevP; ^ In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54: /usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function ?cxiCleanIO?: /usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:81: error: ?struct bio? has no member named ?bi_bdev? _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP-> biop[i]->bi_bdev), (Int64)(bdevP->bd_contains)); ^ /usr/lpp/mmfs/src/include/cxi/Trace.h:395:23: note: in definition of macro ?_TRACE_MACRO? { _TR_BEFORE; _ktrc; KTRCOPTCODE; _TR_AFTER; } else NOOP ^ /usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:5: note: in expansion of macro ?_TRACE3D? 
   _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
   ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:432:26: note: in expansion of macro 'TRACE_TRCID_WAITIO_BDEVP_CALL'
   _TRACE_MACRO(_c, _l, TRACE_##id##_CALL)
   ^
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2906:7: note: in expansion of macro 'TRACE3'
   TRACE3(TRACE_IO, 6, TRCID_WAITIO_BDEVP,
   ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2915:23: error: 'struct bio' has no member named 'bi_error'
   if (bcP->biop[i]->bi_error)
   ^
/usr/src/linux-4.12.14-95.13/scripts/Makefile.build:326: recipe for target '/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o' failed
make[5]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1
/usr/src/linux-4.12.14-95.13/Makefile:1557: recipe for target '_module_/usr/lpp/mmfs/src/gpl-linux' failed
make[4]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2
Makefile:152: recipe for target 'sub-make' failed
make[3]: *** [sub-make] Error 2
Makefile:24: recipe for target '__sub-make' failed
make[2]: *** [__sub-make] Error 2
make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
makefile:130: recipe for target 'modules' failed
make[1]: *** [modules] Error 1
make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux'
makefile:148: recipe for target 'Modules' failed
make: *** [Modules] Error 1
--------------------------------------------------------
mmbuildgpl: Building GPL module failed at Thu May 16 09:28:54 MDT 2019.
--------------------------------------------------------
mmbuildgpl: Command failed. Examine previous error messages to determine cause.
-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From ben.nickell at inl.gov Thu May 16 17:19:54 2019
From: ben.nickell at inl.gov (Ben G. Nickell)
Date: Thu, 16 May 2019 16:19:54 +0000
Subject: [gpfsug-discuss] [EXTERNAL] Re: mmbuild problem
In-Reply-To: References: , Message-ID: 

Thanks for the quick reply Felipe, and also for pointing me at the FAQ. I found the same. The standard version of 5.0.2.3 built fine. We apparently don't know how to get the advanced version, but I don't think we are using that anyway; I imagine we could figure out how to get it if we do need it. I just sent this a little too soon, sorry for the noise.

-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259

________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Felipe Knop
Sent: Thursday, May 16, 2019 10:12 AM
To: gpfsug main discussion list
Subject: [EXTERNAL] Re: [gpfsug-discuss] mmbuild problem

Ben,

According to the FAQ (https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3.
12 SP4 | 4.12.14-95.3-default | 4.12.14-95.3-default | From V4.2.3.13 in the 4.2 release; from V5.0.2.3 or later in the 5.0 release | From V4.2.3.13 in the 4.2 release; from V5.0.2.3 or later in the 5.0 release

Felipe

----
Felipe Knop knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314

From: "Ben G. Nickell"
To: "gpfsug-discuss at spectrumscale.org"
Date: 05/16/2019 12:02 PM
Subject: [EXTERNAL] [gpfsug-discuss] mmbuild problem
Sent by: gpfsug-discuss-bounces at spectrumscale.org
________________________________

First time poster, hopefully not a simple RTFM question; I've done some rudimentary googling. I'm not the GPFS guy, but I'm having a problem building Spectrum Scale 5.0.2.0 on SUSE SLES 12 SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software?

uname -a
Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux

./mmbuildgpl --build-package
--------------------------------------------------------
mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019.
--------------------------------------------------------
Verifying Kernel Header...
  kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13)
  module include dir = /lib/modules/4.12.14-95.13-default/build/include
  module build dir = /lib/modules/4.12.14-95.13-default/build
  kernel source dir = /usr/src/linux-4.12.14-95.13/include
  Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include
Verifying Compiler...
make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r 
/usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From anobre at br.ibm.com Thu May 16 17:36:35 2019 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Thu, 16 May 2019 16:36:35 +0000 Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: , , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15580071695162.gif Type: image/gif Size: 105 bytes Desc: not available URL: From lgayne at us.ibm.com Thu May 16 18:05:48 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Thu, 16 May 2019 17:05:48 +0000 Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: , , , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15580071695162.gif Type: image/gif Size: 105 bytes Desc: not available URL: From brianbur at us.ibm.com Fri May 17 16:24:52 2019 From: brianbur at us.ibm.com (Brian Burnette) Date: Fri, 17 May 2019 15:24:52 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research Message-ID: An HTML attachment was scrubbed... URL: From sadaniel at us.ibm.com Fri May 17 16:37:42 2019 From: sadaniel at us.ibm.com (Steven Daniels) Date: Fri, 17 May 2019 15:37:42 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research In-Reply-To: References: Message-ID: Brian, We have a number of government clients that have to seek a waiver for each and every Spectrum Scale installation because of the root password-less ssh requirements. 
The sudo wrappers help, but not enough. My clients would all like to see the ssh requirement go away, and they also need to comply with Nessus scans. Different agencies may have custom scan profiles, but even passing the standard ones is a good step. I have been discussing this internally with the development team for years.

Thanks, Steve

Steven A. Daniels
Cross-brand Client Architect
Senior Certified IT Specialist
National Programs
Fax and Voice: 3038101229
sadaniel at us.ibm.com
http://www.ibm.com

From: "Brian Burnette"
To: gpfsug-discuss at spectrumscale.org
Date: 05/17/2019 09:25 AM
Subject: [EXTERNAL] [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hey there Spectrum Scale Users,

Are you interested in allowing members of your team to administer parts or all of your Spectrum Scale clusters without the power of root access? Chances are your answer is somewhere between "Yes" and "Definitely, yes, yes, yes!" If so, the Scale Research team would love to sit down with you to better understand the problems you're trying to solve with non-root access and possibly work with you over the coming months to design concepts and prototypes of different solutions. Just reply back and we'll work with you to schedule a time to chat. If you have any other comments, questions, or concerns feel free to let us know.

Look forward to talking with you soon

Brian Burnette
IBM Systems - Spectrum Scale and Discover
E-mail: brianbur at us.ibm.com
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: opencits-d.jpg
Type: image/jpeg
Size: 182862 bytes
Desc: not available
URL: 

From l.walid at powerm.ma Sun May 19 05:14:05 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 04:14:05 +0000
Subject: [gpfsug-discuss] Introduction
Message-ID: 

Hi,

I'm Largou Walid, Technical Architect for Power Maroc, Platinum Business Partner; we specialize in IBM products (hardware & software). I've been using Spectrum Scale for about two years now. We have an upcoming HPC project for the local Weather Company with an amazing 120 Spectrum Scale nodes (10,000 CPUs). I've also worked on CES services, and on AFM DR for one of our customers. I'm from Casablanca, Morocco, glad to be part of the community.

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png
Type: image/png
Size: 10214 bytes
Desc: not available
URL: 

From l.walid at powerm.ma Sun May 19 20:30:06 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 19:30:06 +0000
Subject: [gpfsug-discuss] Active Directory Authentification
Message-ID: 

Hi,

I'm planning to integrate Active Directory with our Spectrum Scale, but it seems I'm missing something. Please note that I'm on 2 protocol nodes with only the SMB service running, on Spectrum Scale 5.0.3.0 (latest version). I've tried both ways from the GUI: connecting to Active Directory, and to LDAP.

*Connect to LDAP :*

mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma'
7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
7:26 PM Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain.
We cannot work reliably without it.
7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
7:26 PM WARNING: Could not open passdb
7:26 PM File authentication configuration failed.
7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:26 PM Operation Failed
7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) WARNING: Could not open passdb File authentication configuration failed. mmuserauth service create: Command failed. Examine previous error messages to determine cause.

*Connect to Active Directory :*

mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma (type=stand-alone:ldap_srv=192.168.56.5: range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword )'
7:29 PM mmuserauth service create: Invalid parameter passed for --ldapmap-domain
7:29 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:29 PM Operation Failed
7:29 PM Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain mmuserauth service create: Command failed. Examine previous error messages to determine cause.

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From will.schmied at stjude.org Mon May 20 00:24:15 2019 From: will.schmied at stjude.org (Schmied, Will) Date: Sun, 19 May 2019 23:24:15 +0000 Subject: [gpfsug-discuss] Active Directory Authentification In-Reply-To: References: Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826@stjude.org> Hi Walid, Without knowing any specifics of your environment, the below command is what I have used, successfully across multiple clusters at 4.2.x. The binding account you specify needs to be able to add computers to the domain. mmuserauth service create --data-access-method file --type ad --servers some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master --netbios-name some_ad_computer_name --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" 10000-9999999 is the acceptable range of UID / GID for AD accounts. Thanks, Will From: on behalf of "L.walid (PowerM)" Reply-To: gpfsug main discussion list Date: Sunday, May 19, 2019 at 14:30 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Active Directory Authentification Caution: External Sender Hi, I'm planning to integrate Active Directory with our Spectrum Scale, but it seems i'm missing out something, please note that i'm on a 2 protocol nodes with only service SMB running Spectrum Scale 5.0.3.0 (latest version). I've tried from the gui the two ways, connect to Active Directory, and the other to LDAP. 
Connect to LDAP : mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma' 7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server 7:26 PM Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL 7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. 7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) 7:26 PM WARNING: Could not open passdb 7:26 PM File authentication configuration failed. 7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause. 7:26 PM Operation Failed 7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) WARNING: Could not open passdb File authentication configuration failed. mmuserauth service create: Command failed. Examine previous error messages to determine cause. 
Connect to Active Directory : mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)' 7:29 PM mmuserauth service create: Invalid parameter passed for --ldapmap-domain 7:29 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause. 7:29 PM Operation Failed 7:29 PM Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain mmuserauth service create: Command failed. Examine previous error messages to determine cause. -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA] This message is confidential .Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any authorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. ________________________________ Email Disclaimer: www.stjude.org/emaildisclaimer Consultation Disclaimer: www.stjude.org/consultationdisclaimer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From l.walid at powerm.ma Mon May 20 00:39:31 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Sun, 19 May 2019 23:39:31 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 In-Reply-To: References: Message-ID: Hi, Thanks for the feedback, I have tried the suggested command: mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password: Invalid credentials specified for the server powermdomain.powerm.ma mmuserauth service create: Command failed. Examine previous error messages to determine cause. [root at scale1 ~]# mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name walid --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" Enter Active Directory User 'walid' password: Invalid credentials specified for the server powermdomain.powerm.ma mmuserauth service create: Command failed. Examine previous error messages to determine cause. 
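As an aside, the range in Will's --unixmap-domains argument is simply the inclusive UID/GID window that mapped AD accounts must fall into. A minimal POSIX-shell sketch (the POWERM domain name and the sample UIDs are hypothetical examples, not values from this thread) that parses such a DOMAIN(low-high) spec and tests membership:

```shell
#!/bin/sh
# Parse a --unixmap-domains spec such as "POWERM(10000-9999999)" and
# report whether a given UID falls inside the mapped window.
# POWERM and the sample UIDs are hypothetical placeholders.
in_unixmap_range() {
    spec=$1 uid=$2
    range=${spec#*\(}          # strip "DOMAIN(" prefix -> "10000-9999999)"
    range=${range%\)}          # strip trailing ")"     -> "10000-9999999"
    low=${range%-*}
    high=${range#*-}
    [ "$uid" -ge "$low" ] && [ "$uid" -le "$high" ]
}

if in_unixmap_range "POWERM(10000-9999999)" 12345; then
    echo "uid 12345 maps"       # prints: uid 12345 maps
else
    echo "uid 12345 does not map"
fi
```

Accounts whose IDs fall outside that window cannot be mapped, which is one reason the examples size the range generously up front.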
I tried both the domain-qualified DN and the plain username in the --user-name parameter, but I get Invalid Credentials (knowing that walid is an Administrator in Active Directory). [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" "(sAMAccountName=walid)" Enter LDAP Password: # extended LDIF # # LDAPv3 # base with scope subtree # filter: (sAMAccountName=walid) # requesting: ALL # # Walid, Users, powerm.ma dn: CN=Walid,CN=Users,DC=powerm,DC=ma objectClass: top objectClass: person objectClass: organizationalPerson objectClass: user cn: Walid sn: Largou givenName: Walid distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma instanceType: 4 whenCreated: 20190518224649.0Z whenChanged: 20190520001645.0Z uSNCreated: 12751 memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma uSNChanged: 16404 name: Walid objectGUID:: Le4tH38qy0SfcxaroNGPEg== userAccountControl: 512 badPwdCount: 0 codePage: 0 countryCode: 0 badPasswordTime: 132028055547447029 lastLogoff: 0 lastLogon: 132028055940741392 pwdLastSet: 132026934129698743 primaryGroupID: 513 objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA== adminCount: 1 accountExpires: 9223372036854775807 logonCount: 0 sAMAccountName: walid sAMAccountType: 805306368 objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma dSCorePropagationData: 20190518225159.0Z dSCorePropagationData: 16010101000000.0Z lastLogonTimestamp: 132027850050695698 # search reference ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma # search reference ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma # search reference ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma # search result search: 2 result: 0 Success On Sun, 19 May 2019 at 23:31, wrote: > [quoted digest trimmed] 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From will.schmied at stjude.org Mon May 20 02:45:57 2019 From: will.schmied at stjude.org (Schmied, Will) Date: Mon, 20 May 2019 01:45:57 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 In-Reply-To: References: Message-ID: Well, not seeing anything odd about the second try (just the username only), except that your NETBIOS domain name needs to be put in place of the placeholder (DOMAIN_NETBIOS_NAME). You can copy the password from a text file and then paste it into stdin when the command asks for it. Just a way to be sure there are no typos in the password entry. 
Thanks, Will From: on behalf of "L.walid (PowerM)" Reply-To: gpfsug main discussion list Date: Sunday, May 19, 2019 at 18:39 To: "gpfsug-discuss at spectrumscale.org" Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 Caution: External Sender [quoted message trimmed] 
URL: From par at nl.ibm.com Mon May 20 15:45:11 2019 From: par at nl.ibm.com (Par Hettinga-Ayakannu) Date: Mon, 20 May 2019 16:45:11 +0200 Subject: [gpfsug-discuss] Introduction In-Reply-To: References: Message-ID: Hi Largou, Welcome to the community, glad you joined. Best Regards, Par Hettinga, Global SDI Sales Enablement Leader Storage and Software Defined Infrastructure, IBM Systems Tel: +31(0)20-5132194 Mobile: +31(0)6-53359940 email: par at nl.ibm.com From: "L.walid (PowerM)" To: gpfsug-discuss at spectrumscale.org Date: 19/05/2019 06:14 Subject: [gpfsug-discuss] Introduction Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I'm Largou Walid, Technical Architect for Power Maroc, a Platinum Business Partner; we specialize in IBM products (hardware & software). I've been using Spectrum Scale for about two years now. We have an upcoming HPC project for the local Weather Company with an amazing 120 Spectrum Scale nodes (10,000 CPUs); I've also worked on CES services, and on AFM DR for one of our customers. 
I'm from Casablanca, Morocco, glad to be part of the community. [attachment "PastedGraphic-2.png" deleted by Par Hettinga-Ayakannu/Netherlands/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM Nederland B.V. Registered in Amsterdam, Trade Register Amsterdam No. 33054214 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From l.walid at powerm.ma Mon May 20 16:36:08 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Mon, 20 May 2019 15:36:08 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21 In-Reply-To: References: Message-ID: Hi, I managed to make the command work (basically by checking /etc/resolv.conf, /etc/hosts, and /etc/nsswitch.conf): [root at scale1 committed]# mmuserauth service create --data-access-method file --type ad --servers X.X.X.X --user-name MYUSER --idmap-role master --netbios-name CESSCALE --unixmap-domains "MYDOMAIN(10000-9999999)" Enter Active Directory User 'spectrum_scale' password: File authentication configuration completed successfully. [root at scale1 committed]# mmuserauth service check Userauth file check on node: scale1 Checking nsswitch file: OK Checking Pre-requisite Packages: OK Checking SRV Records lookup: OK Service 'gpfs-winbind' status: OK Object not configured [root at scale1 committed]# mmuserauth service check --server-reachability Userauth file check on node: scale1 Checking nsswitch file: OK Checking Pre-requisite Packages: OK Checking SRV Records lookup: OK Domain Controller status NETLOGON connection: OK, connection to DC: xxxx Domain join status: OK Machine password status: OK Service 'gpfs-winbind' status: OK Object not configured But unfortunately, even though all the checks look good, I cannot use Active Directory users as owners or in ACLs on SMB shares (AD users are not recognised), and the command 'id DOMAIN\USER' reports that the user cannot be found. Any ideas? 
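One thing worth checking when mmuserauth service check passes but 'id' cannot resolve domain users: with --unixmap-domains, the UID/GID come from the RFC2307 attributes (uidNumber/gidNumber) stored in AD, and the ldapsearch output earlier in this thread shows no uidNumber on the walid account, so winbind would have nothing to map. A rough way to narrow it down is to query winbind directly; the sketch below only prints the commands to run on a CES node (DOMAIN\walid is a placeholder, and the /usr/lpp/mmfs/bin path for the gpfs.smb-supplied tools is an assumption to verify on your nodes):

```shell
#!/bin/sh
# Print a winbind ID-mapping checklist for one AD account.
# Assumptions: gpfs.smb installs wbinfo under /usr/lpp/mmfs/bin,
# and DOMAIN\walid stands in for the real "NETBIOS\user" name.
winbind_checklist() {
    user=$1
    cat <<EOF
/usr/lpp/mmfs/bin/wbinfo -t          # verify the machine trust secret
/usr/lpp/mmfs/bin/wbinfo -u          # can winbind enumerate domain users?
/usr/lpp/mmfs/bin/wbinfo -n '$user'  # name -> SID lookup
/usr/lpp/mmfs/bin/wbinfo -i '$user'  # SID -> uid/gid (what 'id' relies on)
getent passwd '$user'                # resolution through nsswitch
EOF
}

winbind_checklist 'DOMAIN\walid'
```

If wbinfo -n succeeds but wbinfo -i fails, the account authenticates yet has no UNIX identity, which points at missing or out-of-range uidNumber/gidNumber attributes rather than at the domain join itself.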
On Mon, 20 May 2019 at 01:46, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: gpfsug-discuss Digest, Vol 88, Issue 19 (Schmied, Will) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 20 May 2019 01:45:57 +0000 > From: "Schmied, Will" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 > Message-ID: > Content-Type: text/plain; charset="utf-8" > > ?Well not seeing anything odd about the second try (just the username > only) except that your NETBIOS domain name needs to be put in place of the > placeholder (DOMAIN_NETBIOS_NAME). > > You can copy from a text file and then paste into the stdin when the > command asks for your password. Just a way to be sure no typos are in the > password entry. 
> > > > Thanks, > Will > > > From: on behalf of "L.walid > (PowerM)" > Reply-To: gpfsug main discussion list > Date: Sunday, May 19, 2019 at 18:39 > To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 > > Caution: External Sender > > Hi, > > Thanks for the feedback, i have tried the suggested command : > > mmuserauth service create --data-access-method file --type ad --servers > powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master > --netbios-name scaleces --unixmap-domains > "DOMAIN_NETBIOS_NAME(10000-9999999)" > Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password: > Invalid credentials specified for the server powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0 > > > mmuserauth service create: Command failed. Examine previous error messages > to determine cause. 
> > > > [root at scale1 ~]# mmuserauth service create --data-access-method file > --type ad --servers powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > --user-name walid --idmap-role master --netbios-name scaleces > --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" > Enter Active Directory User 'walid' password: > Invalid credentials specified for the server powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0 > > > mmuserauth service create: Command failed. Examine previous error messages > to determine cause. 
> > > > i tried both domain qualifier and plain user in the --name parameters but > i get Invalid Credentials (knowing that walid is an Administrator in Active > Directory) > > [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" > "(sAMAccountName=walid)" > Enter LDAP Password: > # extended LDIF > # > # LDAPv3 > # base with scope subtree > # filter: (sAMAccountName=walid) > # requesting: ALL > # > > # Walid, Users, powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=XHcjIaRj2bGiWYXZUsDJFDJ2Ts3Y%2FKHzxD3yUhcHNgc%3D&reserved=0 > > > dn: CN=Walid,CN=Users,DC=powerm,DC=ma > objectClass: top > objectClass: person > objectClass: organizationalPerson > objectClass: user > cn: Walid > sn: Largou > givenName: Walid > distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma > instanceType: 4 > whenCreated: 20190518224649.0Z > whenChanged: 20190520001645.0Z > uSNCreated: 12751 > memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma > uSNChanged: 16404 > name: Walid > objectGUID:: Le4tH38qy0SfcxaroNGPEg== > userAccountControl: 512 > badPwdCount: 0 > codePage: 0 > countryCode: 0 > badPasswordTime: 132028055547447029 > lastLogoff: 0 > lastLogon: 132028055940741392 > pwdLastSet: 132026934129698743 > primaryGroupID: 513 > objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA== > adminCount: 1 > accountExpires: 9223372036854775807 > logonCount: 0 > sAMAccountName: walid > sAMAccountType: 805306368 > objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma > dSCorePropagationData: 
20190518225159.0Z
> dSCorePropagationData: 16010101000000.0Z
> lastLogonTimestamp: 132027850050695698
>
> # search reference
> ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma
>
> # search reference
> ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma
>
> # search reference
> ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma
>
> # search result
> search: 2
> result: 0 Success
>
>
> On Sun, 19 May 2019 at 23:31, wrote:
> Send gpfsug-discuss mailing list submissions to
> gpfsug-discuss at spectrumscale.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> or, via email, send a message with 
subject or body 'help' to
> gpfsug-discuss-request at spectrumscale.org
>
> You can reach the person managing the list at
> gpfsug-discuss-owner at spectrumscale.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gpfsug-discuss digest..."
>
>
> Today's Topics:
>
>    1. Re: Active Directory Authentification (Schmied, Will)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 19 May 2019 23:24:15 +0000
> From: "Schmied, Will" <will.schmied at stjude.org>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Active Directory Authentification
> Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826 at stjude.org>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Walid,
>
> Without knowing any specifics of your environment, the below command is
> what I have used, successfully across multiple clusters at 4.2.x. The
> binding account you specify needs to be able to add computers to the domain.
>
> mmuserauth service create --data-access-method file --type ad --servers
> some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master
> --netbios-name some_ad_computer_name --unixmap-domains
> "DOMAIN_NETBIOS_NAME(10000-9999999)"
>
> 10000-9999999 is the acceptable range of UID / GID for AD accounts.
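Once the create succeeds, the resulting configuration and ID mapping can be sanity-checked from a protocol node. A minimal sketch, using the verification commands that also appear later in this thread (the user and domain names are placeholders):

```shell
# Show the configured file authentication and verify DC reachability
mmuserauth service list --data-access-method file
mmuserauth service check --server-reachability

# An AD user should now resolve to a UID/GID inside the configured
# unixmap range (10000-9999999 in the command above)
id 'DOMAIN_NETBIOS_NAME\some_ad_user'
```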
> Thanks,
> Will
>
>
> From: gpfsug-discuss-bounces at spectrumscale.org on behalf of "L.walid
> (PowerM)"
> Reply-To: gpfsug main discussion list
> Date: Sunday, May 19, 2019 at 14:30
> To: "gpfsug-discuss at spectrumscale.org"
> Subject: [gpfsug-discuss] Active Directory Authentification
>
> Caution: External Sender
>
> Hi,
>
> I'm planning to integrate Active Directory with our Spectrum Scale, but it
> seems i'm missing out something, please note that i'm on a 2 protocol nodes
> with only service SMB running Spectrum Scale 5.0.3.0 (latest version). I've
> tried from the gui the two ways, connect to Active Directory, and the other
> to LDAP.
>
> Connect to LDAP :
> mmuserauth service create --data-access-method 'file' --type 'LDAP'
> --servers 'powermdomain.powerm.ma:389'
> --user-name 'cn=walid,cn=users,dc=powerm,dc=ma'
> --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn
> 'cn=users,dc=powerm,dc=ma'
> 7:26 PM
> Either failed to create a samba domain entry on LDAP server if not present
> or could not read the already existing samba domain entry from the LDAP
server
> 7:26 PM
> Detailed message:smbldap_search_domain_info: Adding domain info for
> SCALECES failed with NT_STATUS_UNSUCCESSFUL
> 7:26 PM
> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the
> domain. We cannot work reliably without it.
> 7:26 PM
> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389"
> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
> 7:26 PM
> WARNING: Could not open passdb
> 7:26 PM
> File authentication configuration failed.
> 7:26 PM
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> 7:26 PM
> Operation Failed
> 7:26 PM
> Error: Either failed to create a samba domain entry on LDAP server if not
> present or could not read the already existing samba domain entry from the
> LDAP server
> Detailed message:smbldap_search_domain_info: Adding domain info for
> SCALECES failed with NT_STATUS_UNSUCCESSFUL
> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the
> domain. We cannot work reliably without it.
> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389"
> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
> WARNING: Could not open passdb
> File authentication configuration failed.
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
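The errors above come from Samba's ldapsam passdb backend: with --type LDAP, the configuration expects a directory server carrying the Samba schema, and it tries to find or create a sambaDomain entry there, which a plain Active Directory domain controller will not have. One way to confirm that no such entry exists is a query like the following sketch, reusing the bind DN from this thread (an empty result is consistent with the failure above):

```shell
# Look for Samba schema domain objects on the directory server;
# ldapsam needs a sambaDomain entry, which plain AD does not provide
ldapsearch -H ldap://powermdomain.powerm.ma:389 -x -W \
  -D "cn=walid,cn=users,dc=powerm,dc=ma" \
  -b "dc=powerm,dc=ma" "(objectClass=sambaDomain)"
```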
>
> Connect to Active Directory :
> mmuserauth service create --data-access-method 'file' --type 'AD'
> --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt'
> --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains
> 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)'
> 7:29 PM
> mmuserauth service create: Invalid parameter passed for --ldapmap-domain
> 7:29 PM
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> 7:29 PM
> Operation Failed
> 7:29 PM
> Error: mmuserauth service create: Invalid parameter passed for
> --ldapmap-domain
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
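For comparison, the variant eventually reported working later in this thread drops the LDAP-based mapping entirely and lets Scale allocate IDs automatically (placeholders exactly as posted there):

```shell
# AD with automatic ID mapping; no unixmap/ldapmap domains
mmuserauth service create --data-access-method 'file' --type 'AD' \
  --servers IPADDRESS --user-name USERNAME --netbios-name 'scaleces' \
  --idmap-role 'MASTER' --idmap-range '10000000-11999999' \
  --idmap-range-size '100000'
```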
> --
> Best regards,
>
>
> Walid Largou
> Senior IT Specialist
>
> Power Maroc
>
> Mobile : +212 621 31 98 71
>
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
>
> https://www.powerm.ma
>
> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA]
> This message is confidential. Its contents do not constitute a commitment
> by Power Maroc S.A.R.L except where provided for in a written agreement
> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
> dissemination, either whole or partial, is prohibited. If you are not the
> intended recipient of the message, please notify the sender immediately.
>
> ________________________________
>
> Email Disclaimer: www.stjude.org/emaildisclaimer
> Consultation Disclaimer: www.stjude.org/consultationdisclaimer
> -------------- next part --------------
> An HTML attachment was scrubbed... 
> URL: <
> http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190519/9b579ecf/attachment.html
> >
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 88, Issue 19
> **********************************************
>
>
> --
> Best regards,
>
>
> Walid Largou
> Senior IT Specialist
>
> Power Maroc
>
> Mobile : +212 621 31 98 71
>
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
>
> https://www.powerm.ma
>
> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA]
> This message is confidential. Its contents do not constitute a commitment
> by Power Maroc S.A.R.L except where provided for in a written agreement
> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
> dissemination, either whole or partial, is prohibited. If you are not the
> intended recipient of the message, please notify the sender immediately.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <
> http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190520/92f25565/attachment.html
> >
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 88, Issue 21
> **********************************************

--
Best regards,

Walid Largou
Senior IT Specialist

Power Maroc

Mobile : +212 621 31 98 71

Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco

https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment
by Power Maroc S.A.R.L except where provided for in a written agreement
between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
dissemination, either whole or partial, is prohibited. If you are not the
intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png
Type: image/png
Size: 10214 bytes
Desc: not available
URL: 

From christof.schmitt at us.ibm.com  Mon May 20 19:51:46 2019
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Mon, 20 May 2019 18:51:46 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To: 
References: ,
Message-ID: 

An HTML attachment was scrubbed... 
URL: From truston at mbari.org Mon May 20 21:05:53 2019 From: truston at mbari.org (Todd Ruston) Date: Mon, 20 May 2019 13:05:53 -0700 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question Message-ID: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Greetings all, First post here, so by way of introduction we are a fairly new Spectrum Scale and Archive customer (installed last year and live in production Q1 this year). We have a four node (plus EMS) ESS system with ~520TB of mixed spinning disk and SSD. Client access to the system is via CES (NFS and SMB, running on two protocol nodes), integrated with Active Directory, for a mixed population of Windows, Mac, and Linux clients. A separate pair of nodes run Spectrum Archive, with a TS4500 LTO-8 library behind them. We use the system for general institute data, with the largest data types being HD video, multibeam sonar, and hydrophone data. Video is the currently active data type in production; we will be migrating the rest over time. So far things are running pretty well. Our archive approach is to premigrate data, particularly the large, unchanging data like the above mentioned data types, almost immediately upon landing in the system. Then we migrate those that have not been accessed in a period of time (or manually if space demands require it). We do wish to allow users to recall archived data on demand as needed. Because we have a large contingent of Mac clients (accessing the system via SMB), one issue we want to get ahead of is inadvertent recalls triggered by Mac preview generation, Quick Look, Cover Flow/Gallery view, and the like. Going in we knew this was going to be something we'd need to address, and we anticipated being able to configure Finder to disable preview generation and train users to avoid Quick Look unless they intended to trigger a recall. 
In our testing however, even with those features disabled/avoided, we have
seen Mac clients trigger inadvertent recalls just from CLI 'ls -lshrt'
interactions with the system.

While brainstorming ways to prevent these inadvertent recalls while still
allowing users to initiate recalls on their own when needed, one thought
that came to us is we might be able to turn off recalls via SMB (set
gpfs:recalls = no via mmsmb), and create a simple self-service web portal
that would allow users to browse the Scale file system with a web browser,
select files for recall, and initiate the recall from there. The web
interface could run on one of the Archive nodes, and the back end of it
would simply send a list of selected file paths to ltfsee recall.

Before possibly reinventing the wheel, I thought I'd check to see if
something like this may already exist, either from IBM, the Scale user
community, or a third-party/open source tool that could be leveraged for
the purpose. I searched the list archive and didn't find anything, but
please let me know if I missed something. And please let me know if you
know of something that would fit this need, or other ideas as well.

Cheers,

--
Todd E. Ruston
Information Systems Manager
Monterey Bay Aquarium Research Institute (MBARI)
7700 Sandholdt Road, Moss Landing, CA, 95039
Phone 831-775-1997 Fax 831-775-1652 http://www.mbari.org

From christof.schmitt at us.ibm.com  Mon May 20 21:33:57 2019
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Mon, 20 May 2019 20:33:57 +0000
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service
 recall interface question
In-Reply-To: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

An HTML attachment was scrubbed... 
URL: 

From stockf at us.ibm.com  Mon May 20 21:41:16 2019
From: stockf at us.ibm.com (Frederick Stock)
Date: Mon, 20 May 2019 20:41:16 +0000
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service
 recall interface question
In-Reply-To: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

An HTML attachment was scrubbed...
URL: 

From richard.rupp at us.ibm.com  Mon May 20 21:48:40 2019
From: richard.rupp at us.ibm.com (RICHARD RUPP)
Date: Mon, 20 May 2019 16:48:40 -0400
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service
 recall interface question
In-Reply-To: 
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

I've heard that this works, but I have not tried it myself -
https://support.apple.com/en-us/HT208209

Regards,

Richard Rupp, Sales Specialist, Phone: 1-347-510-6746

From: "Frederick Stock"
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Date: 05/20/2019 04:41 PM
Subject: [EXTERNAL] Re: [gpfsug-discuss] Intro, and Spectrum Archive
self-service recall interface question
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Todd I am not aware of any tool that provides the out of band recall that
you propose, though it would be quite useful. However, I wanted to note
that as I understand the reason the Mac client initiates the file recalls
is because the Mac SMB client ignores the archive bit, indicating a file
does not reside in online storage, in the SMB protocol. To date efforts to
have Apple change their SMB client to respect the archive bit have not been
successful but if you feel so inclined we would be grateful if you would
submit a request to Apple for them to change their SMB client to honor the
archive bit and thus avoid file recalls. 
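The list-driven recall back end described in this thread — gather the user's selections and hand the list to ltfsee recall — could be roughed out in a few lines of shell. This is purely an illustrative sketch: the helper name, the example paths, and the ltfsee recall invocation (assumed here to accept a file containing one path per line) are assumptions to verify against the Spectrum Archive documentation for your release.

```shell
# build_list: keep only selections that live under the file system root,
# so the web front end cannot request recalls outside the share
build_list() {
  root="$1"; shift
  for p in "$@"; do
    case "$p" in
      "$root"/*) printf '%s\n' "$p" ;;
    esac
  done
}

# The portal back end would write the user's selections to a list file
# (paths here are hypothetical examples)...
build_list /gpfs/fs1 /gpfs/fs1/video/dive001.mov /etc/passwd > /tmp/recall.list

# ...and hand the list to Spectrum Archive (assumed syntax -- check the
# 'ltfsee recall' options for your release before relying on this):
# ltfsee recall /tmp/recall.list
```

With recalls disabled on the SMB export, this is the only path through which archived data comes back online, which is exactly the self-service behavior the portal idea aims for.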
Fred __________________________________________________ Fred Stock | IBM Pittsburgh Lab | 720-430-8821 stockf at us.ibm.com ----- Original message ----- From: Todd Ruston Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Cc: Subject: [EXTERNAL] [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question Date: Mon, May 20, 2019 4:12 PM Greetings all, First post here, so by way of introduction we are a fairly new Spectrum Scale and Archive customer (installed last year and live in production Q1 this year). We have a four node (plus EMS) ESS system with ~520TB of mixed spinning disk and SSD. Client access to the system is via CES (NFS and SMB, running on two protocol nodes), integrated with Active Directory, for a mixed population of Windows, Mac, and Linux clients. A separate pair of nodes run Spectrum Archive, with a TS4500 LTO-8 library behind them. We use the system for general institute data, with the largest data types being HD video, multibeam sonar, and hydrophone data. Video is the currently active data type in production; we will be migrating the rest over time. So far things are running pretty well. Our archive approach is to premigrate data, particularly the large, unchanging data like the above mentioned data types, almost immediately upon landing in the system. Then we migrate those that have not been accessed in a period of time (or manually if space demands require it). We do wish to allow users to recall archived data on demand as needed. Because we have a large contingent of Mac clients (accessing the system via SMB), one issue we want to get ahead of is inadvertent recalls triggered by Mac preview generation, Quick Look, Cover Flow/Gallery view, and the like. Going in we knew this was going to be something we'd need to address, and we anticipated being able to configure Finder to disable preview generation and train users to avoid Quick Look unless they intended to trigger a recall. 
In our testing however, even with those features disabled/avoided, we have
seen Mac clients trigger inadvertent recalls just from CLI 'ls -lshrt'
interactions with the system.

While brainstorming ways to prevent these inadvertent recalls while still
allowing users to initiate recalls on their own when needed, one thought
that came to us is we might be able to turn off recalls via SMB (set
gpfs:recalls = no via mmsmb), and create a simple self-service web portal
that would allow users to browse the Scale file system with a web browser,
select files for recall, and initiate the recall from there. The web
interface could run on one of the Archive nodes, and the back end of it
would simply send a list of selected file paths to ltfsee recall.

Before possibly reinventing the wheel, I thought I'd check to see if
something like this may already exist, either from IBM, the Scale user
community, or a third-party/open source tool that could be leveraged for
the purpose. I searched the list archive and didn't find anything, but
please let me know if I missed something. And please let me know if you
know of something that would fit this need, or other ideas as well.

Cheers,

--
Todd E. Ruston
Information Systems Manager
Monterey Bay Aquarium Research Institute (MBARI)
7700 Sandholdt Road, Moss Landing, CA, 95039
Phone 831-775-1997 Fax 831-775-1652 http://www.mbari.org

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From truston at mbari.org Mon May 20 22:50:13 2019 From: truston at mbari.org (Todd Ruston) Date: Mon, 20 May 2019 14:50:13 -0700 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Message-ID: Thanks very much for the replies so far. I had already pinged Apple asking them to honor the offline bit in their SMB implementation. I don't think we carry a whole lot of weight with them, but at least we've put another "vote in the hopper" for the feature. We had tried the settings in the article Richard referenced, but recalls still occurred. Christof's suggestion of parallel SMB exports, one with and one without recall enabled, is one we hadn't thought of and has a lot of promise for our situation. Thanks for the idea! Cheers, - Todd > On May 20, 2019, at 1:48 PM, RICHARD RUPP wrote: > > I've heard that this works, but I have not tried it myself - https://support.apple.com/en-us/HT208209 > > Regards, > > Richard Rupp, Sales Specialist, Phone: 1-347-510-6746 > > > "Frederick Stock" ---05/20/2019 04:41:37 PM---Todd I am not aware of any tool that provides the out of band recall that you propose, though it wou > > From: "Frederick Stock" > To: gpfsug-discuss at spectrumscale.org > Cc: gpfsug-discuss at spectrumscale.org > Date: 05/20/2019 04:41 PM > Subject: [EXTERNAL] Re: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > Todd I am not aware of any tool that provides the out of band recall that you propose, though it would be quite useful. 
However, I wanted to note that as I understand the reason the the Mac client initiates the file recalls is because the Mac SMB client ignores the archive bit, indicating a file does not reside in online storage, in the SMB protocol. To date efforts to have Apple change their SMB client to respect the archive bit have not been successful but if you feel so inclined we would be grateful if you would submit a request to Apple for them to change their SMB client to honor the archive bit and thus avoid file recalls. > > Fred > __________________________________________________ > Fred Stock | IBM Pittsburgh Lab | 720-430-8821 > stockf at us.ibm.com > > > ----- Original message ----- > From: Todd Ruston > Sent by: gpfsug-discuss-bounces at spectrumscale.org > To: gpfsug-discuss at spectrumscale.org > Cc: > Subject: [EXTERNAL] [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question > Date: Mon, May 20, 2019 4:12 PM > > Greetings all, > > First post here, so by way of introduction we are a fairly new Spectrum Scale and Archive customer (installed last year and live in production Q1 this year). We have a four node (plus EMS) ESS system with ~520TB of mixed spinning disk and SSD. Client access to the system is via CES (NFS and SMB, running on two protocol nodes), integrated with Active Directory, for a mixed population of Windows, Mac, and Linux clients. A separate pair of nodes run Spectrum Archive, with a TS4500 LTO-8 library behind them. > > We use the system for general institute data, with the largest data types being HD video, multibeam sonar, and hydrophone data. Video is the currently active data type in production; we will be migrating the rest over time. So far things are running pretty well. > > Our archive approach is to premigrate data, particularly the large, unchanging data like the above mentioned data types, almost immediately upon landing in the system. 
Then we migrate those that have not been accessed in a period of time (or manually if space demands require it). We do wish to allow users to recall archived data on demand as needed. > > Because we have a large contingent of Mac clients (accessing the system via SMB), one issue we want to get ahead of is inadvertent recalls triggered by Mac preview generation, Quick Look, Cover Flow/Gallery view, and the like. Going in we knew this was going to be something we'd need to address, and we anticipated being able to configure Finder to disable preview generation and train users to avoid Quick Look unless they intended to trigger a recall. In our testing however, even with those features disabled/avoided, we have seen Mac clients trigger inadvertent recalls just from CLI 'ls -lshrt' interactions with the system. > > While brainstorming ways to prevent these inadvertent recalls while still allowing users to initiate recalls on their own when needed, one thought that came to us is we might be able to turn off recalls via SMB (setgpfs:recalls = no via mmsmb), and create a simple self-service web portal that would allow users to browse the Scale file system with a web browser, select files for recall, and initiate the recall from there. The web interface could run on one of the Archive nodes, and the back end of it would simply send a list of selected file paths to ltfsee recall. > > Before possibly reinventing the wheel, I thought I'd check to see if something like this may already exist, either from IBM, the Scale user community, or a third-party/open source tool that could be leveraged for the purpose. I searched the list archive and didn't find anything, but please let me know if I missed something. And please let me know if you know of something that would fit this need, or other ideas as well. > > Cheers, > > -- > Todd E. 
Ruston
> Information Systems Manager
> Monterey Bay Aquarium Research Institute (MBARI)
> 7700 Sandholdt Road, Moss Landing, CA, 95039
> Phone 831-775-1997 Fax 831-775-1652 http://www.mbari.org
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From l.walid at powerm.ma  Tue May 21 03:24:58 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Tue, 21 May 2019 02:24:58 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To: 
References: 
Message-ID: 

*Update :* I have the environment working now with the command :

mmuserauth service create --data-access-method 'file' --type 'AD'
--servers IPADDRESS --user-name USERNAME --netbios-name 'scaleces'
--idmap-role 'MASTER' --idmap-range '10000000-11999999'
--idmap-range-size '100000'

Removing the unix-map solved the issue. Thanks for your help

On Mon, 20 May 2019 at 15:36, L.walid (PowerM) wrote:

> Hi,
>
> I manage to make the command work (basically checking /etc/resolv.conf,
> /etc/hosts, /etc/nsswitch.conf) :
>
> root at scale1 committed]# mmuserauth service create --data-access-method
> file --type ad --servers X.X.X.X --user-name MYUSER --idmap-role master
> --netbios-name CESSCALE --unixmap-domains "MYDOMAIN(10000-9999999)"
> Enter Active Directory User 'spectrum_scale' password:
> File authentication configuration completed successfully. 
> > [root at scale1 committed]# mmuserauth service check
>
> Userauth file check on node: scale1
> Checking nsswitch file: OK
> Checking Pre-requisite Packages: OK
> Checking SRV Records lookup: OK
> Service 'gpfs-winbind' status: OK
> Object not configured
>
> [root at scale1 committed]# mmuserauth service check --server-reachability
>
> Userauth file check on node: scale1
> Checking nsswitch file: OK
> Checking Pre-requisite Packages: OK
> Checking SRV Records lookup: OK
>
> Domain Controller status
> NETLOGON connection: OK, connection to DC: xxxx
> Domain join status: OK
> Machine password status: OK
> Service 'gpfs-winbind' status: OK
> Object not configured
>
> But unfortunately, even though all the commands seem good, I cannot use a user from Active Directory as owner or to set up ACLs on SMB shares (it doesn't recognise AD users), plus the command 'id DOMAIN\USER' gives an error: cannot find user.
>
> Any ideas?

On Mon, 20 May 2019 at 01:46, wrote:
>> Send gpfsug-discuss mailing list submissions to
>> gpfsug-discuss at spectrumscale.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> or, via email, send a message with subject or body 'help' to
>> gpfsug-discuss-request at spectrumscale.org
>>
>> You can reach the person managing the list at
>> gpfsug-discuss-owner at spectrumscale.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of gpfsug-discuss digest..."
>>
>> Today's Topics:
>>
>> 1.
Re: gpfsug-discuss Digest, Vol 88, Issue 19 (Schmied, Will)
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Mon, 20 May 2019 01:45:57 +0000
>> From: "Schmied, Will"
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19
>> Message-ID:
>> Content-Type: text/plain; charset="utf-8"
>>
>> Well, not seeing anything odd about the second try (just the username only) except that your NETBIOS domain name needs to be put in place of the placeholder (DOMAIN_NETBIOS_NAME).
>>
>> You can copy from a text file and then paste into the stdin when the command asks for your password. Just a way to be sure no typos are in the password entry.
>>
>> Thanks,
>> Will
>>
>> From: on behalf of "L.walid (PowerM)"
>> Reply-To: gpfsug main discussion list
>> Date: Sunday, May 19, 2019 at 18:39
>> To: "gpfsug-discuss at spectrumscale.org"
>> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19
>>
>> Caution: External Sender
>>
>> Hi,
>>
>> Thanks for the feedback, I have tried the suggested command:
>>
>> mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
>> Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password:
>> Invalid credentials specified for the server powermdomain.powerm.ma
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>>
>> [root at scale1 ~]# mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name walid --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
>> Enter Active Directory User 'walid' password:
>> Invalid credentials specified for the server powermdomain.powerm.ma
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>>
>> I tried both the domain qualifier and the plain user in the --user-name parameter, but I get Invalid Credentials (knowing that walid is an Administrator in Active Directory).
>>
>> [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" "(sAMAccountName=walid)"
>> Enter LDAP Password:
>> # extended LDIF
>> #
>> # LDAPv3
>> # base with scope subtree
>> # filter: (sAMAccountName=walid)
>> # requesting: ALL
>> #
>>
>> # Walid, Users, powerm.ma
>> dn: CN=Walid,CN=Users,DC=powerm,DC=ma
>> objectClass: top
>> objectClass: person
>> objectClass: organizationalPerson
>> objectClass: user
>> cn: Walid
>> sn: Largou
>> givenName: Walid
>> distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma
>> instanceType: 4
>> whenCreated: 20190518224649.0Z
>> whenChanged: 20190520001645.0Z
>> uSNCreated: 12751
>> memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma
>> uSNChanged: 16404
>> name: Walid
>> objectGUID:: Le4tH38qy0SfcxaroNGPEg==
>> userAccountControl: 512
>> badPwdCount: 0
>> codePage: 0
>> countryCode: 0
>> badPasswordTime: 132028055547447029
>> lastLogoff: 0
>> lastLogon: 132028055940741392
>> pwdLastSet: 132026934129698743
>> primaryGroupID: 513
>> objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA==
>> adminCount: 1
>> accountExpires: 9223372036854775807
>> logonCount: 0
>> sAMAccountName: walid
>> sAMAccountType: 805306368
>> objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma
>> dSCorePropagationData: 20190518225159.0Z
>> dSCorePropagationData: 16010101000000.0Z
>> lastLogonTimestamp: 132027850050695698
>>
>> # search reference
>> ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma
>>
>> # search reference
>> ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma
>>
>> # search reference
>> ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma
>>
>> # search result
>> search: 2
>> result: 0 Success

On Sun, 19 May 2019 at 23:31, wrote:
>> Today's Topics:
>>
>> 1. Re: Active Directory Authentification (Schmied, Will)
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Sun, 19 May 2019 23:24:15 +0000
>> From: "Schmied, Will" <will.schmied at stjude.org>
>> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Subject: Re: [gpfsug-discuss] Active Directory Authentification
>> Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826 at stjude.org>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hi Walid,
>>
>> Without knowing any specifics of your environment, the below command is what I have used, successfully across multiple clusters at 4.2.x. The binding account you specify needs to be able to add computers to the domain.
>>
>> mmuserauth service create --data-access-method file --type ad --servers some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master --netbios-name some_ad_computer_name --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
>>
>> 10000-9999999 is the acceptable range of UID / GID for AD accounts.
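Will's range notation maps AD RIDs into a fixed UID/GID window. Very roughly (in the spirit of Samba's idmap_rid backend, simplified and hypothetical — this is not the exact algorithm mmuserauth configures), each account's UID is an offset from the bottom of the range, and any RID that would land outside the window simply cannot be mapped, which is why the window is made generous:

```python
def rid_to_uid(rid: int, range_low: int = 10000, range_high: int = 9999999):
    """Toy idmap_rid-style mapping: UID = range_low + RID.
    Returns None when the result would fall outside the configured range."""
    uid = range_low + rid
    return uid if range_low <= uid <= range_high else None

print(rid_to_uid(1104))     # 11104 -> a typical early user RID maps fine
print(rid_to_uid(9999999))  # None  -> RID too large for this window
```

The practical takeaway is the same as Will's: pick the range once, make it big, and never shrink it later, or existing UID assignments stop resolving.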
>>
>> Thanks,
>> Will
>>
>> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "L.walid (PowerM)"
>> Reply-To: gpfsug main discussion list
>> Date: Sunday, May 19, 2019 at 14:30
>> To: "gpfsug-discuss at spectrumscale.org"
>> Subject: [gpfsug-discuss] Active Directory Authentification
>>
>> Caution: External Sender
>>
>> Hi,
>>
>> I'm planning to integrate Active Directory with our Spectrum Scale, but it seems I'm missing something. Please note that I'm on 2 protocol nodes with only the SMB service, running Spectrum Scale 5.0.3.0 (latest version). I've tried both ways from the GUI: connect to Active Directory, and connect to LDAP.
>>
>> Connect to LDAP:
>> mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma'
>> 7:26 PM
>> Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
>> 7:26 PM
>> Detailed message: smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
>> 7:26 PM
>> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
>> 7:26 PM
>> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
>> 7:26 PM
>> WARNING: Could not open passdb
>> 7:26 PM
>> File authentication configuration failed.
>> 7:26 PM
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> 7:26 PM
>> Operation Failed
>> 7:26 PM
>> Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
>> Detailed message: smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
>> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
>> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
>> WARNING: Could not open passdb
>> File authentication configuration failed.
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>>
>> Connect to Active Directory:
>> mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P@ssword)'
>> 7:29 PM
>> mmuserauth service create: Invalid parameter passed for --ldapmap-domain
>> 7:29 PM
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> 7:29 PM
>> Operation Failed
>> 7:29 PM
>> Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> --
>> Best regards,
>>
>> Walid Largou
>> Senior IT Specialist
>> Power Maroc
>> Mobile : +212 621 31 98 71
>> Email: l.walid at powerm.ma
>> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
>> https://www.powerm.ma
>>
>> This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
>>
>> ________________________________
>>
>> Email Disclaimer: www.stjude.org/emaildisclaimer
>> Consultation Disclaimer: www.stjude.org/consultationdisclaimer
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190519/9b579ecf/attachment.html>
>>
>> ------------------------------
>>
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>> End of gpfsug-discuss Digest, Vol 88, Issue 19
>> **********************************************
>>
>> --
>> Best regards,
>>
>> Walid Largou
>> Senior IT Specialist
>> Power Maroc
>> Mobile : +212 621 31 98 71
>> Email: l.walid at powerm.ma
>> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
>> https://www.powerm.ma
>>
>> This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190520/92f25565/attachment.html>
>>
>> ------------------------------
>>
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>> End of gpfsug-discuss Digest, Vol 88, Issue 21
>> **********************************************
>
> --
> Best regards,
>
> Walid Largou
> Senior IT Specialist
> Power Maroc
> Mobile : +212 621 31 98 71
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
> https://www.powerm.ma
>
> This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited.
If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: 

From INDULISB at uk.ibm.com Tue May 21 10:34:42 2019
From: INDULISB at uk.ibm.com (Indulis Bernsteins1)
Date: Tue, 21 May 2019 10:34:42 +0100
Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: References: Message-ID:

Have you tried looking at Spectrum Archive settings instead of Spectrum Scale?

You can set both the size of the "stub file" that remains behind when a file is migrated, and also the amount of data which would need to be read before a recall is triggered. This might catch enough of your recall storms... or at least help!

IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and Configuration Guide
http://www.redbooks.ibm.com/abstracts/sg248333.html?Open

7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file
IBM Spectrum Archive EE can define a stub size for migrated files so that the stub size initial bytes of a migrated file are kept on disk while the entire file is migrated to tape. The migrated file bytes that are kept on the disk are called the stub. Reading from the stub does not trigger a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered. The recall might take a long time while the entire file is read from tape because a tape mount might be required, and it takes time to position the tape before data can be recalled from tape.
When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a recall of the complete file in the background (asynchronous). Reads from the stubs are still possible while the rest of the file is being recalled.
After the rest of the file is recalled to disks, reads from any file part are possible.
With the Preview Size (PS) value, a preview size can be set to define the initial file part size for which any reads from the resident file part do not trigger a recall. Typically, the PS value is large enough to see whether a recall of the rest of the file is required without triggering a recall for reading from every stub. This process is important to prevent unintended massive recalls. The PS value can be set only smaller than or equal to the stub size.
This feature is useful, for example, when playing migrated video files. While the initial stub size part of a video file is played, the rest of the video file can be recalled to prevent a pause when it plays beyond the stub size. You must set the stub size and preview size to be large enough to buffer the time that is required to recall the file from tape without triggering recall storms.
Use the following dsmmigfs command options to set both the stub size and preview size of the file system being managed by IBM Spectrum Archive EE:
dsmmigfs Update -STUBsize
dsmmigfs Update -PREViewsize
The value for the STUBsize is a multiple of the IBM Spectrum Scale file system's block size. This value can be obtained by running mmlsfs. The PREViewsize parameter must be equal to or less than the STUBsize value. Both parameters take a positive integer in bytes.

Regards,

Indulis Bernsteins
Systems Architect
IBM New Generation Storage
Phone: +44 792 008 6548
E-mail: INDULISB at UK.IBM.COM

Jackson House, Sibson Rd
Sale, Cheshire M33 7RR
United Kingdom

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10045 bytes Desc: not available URL: 

From jonathan.buzzard at strath.ac.uk Tue May 21 11:30:09 2019
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 21 May 2019 11:30:09 +0100
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk>

On Mon, 2019-05-20 at 20:33 +0000, Christof Schmitt wrote:
> SMB clients know the state of the files through an OFFLINE bit that is part of the metadata that is available through the SMB protocol. The Windows Explorer in particular honors this bit and avoids reading file data for previews, but the MacOS Finder seems to ignore it and read file data for previews anyway, triggering recalls.
>
> The best way would be fixing this on the Mac clients to simply not read file data for previews for OFFLINE files. So far requests to Apple support to implement this behavior were unsuccessful, but it might still be worthwhile to keep pushing this request.

In the interim, would it be possible for the SMB server to detect the client OS and only allow recalls from, say, Windows? At least this would be in "our" control, unlike getting Apple to change the finder.app behaviour.
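Christof's OFFLINE bit can be made concrete: in SMB file metadata it is the standard FILE_ATTRIBUTE_OFFLINE flag (0x1000), and a well-behaved client checks it before reading data for a preview. A minimal illustration of that check — the constant is the documented value, but the surrounding function is just a hypothetical sketch of what a preview tool could do:

```python
FILE_ATTRIBUTE_OFFLINE = 0x1000  # data is migrated; reading it may trigger a recall
FILE_ATTRIBUTE_ARCHIVE = 0x0020

def safe_to_preview(attributes: int) -> bool:
    """Return True only when reading file data should not trigger a recall.

    A client that honours the OFFLINE bit (as Windows Explorer does)
    skips preview/thumbnail generation for files where the bit is set.
    """
    return not (attributes & FILE_ATTRIBUTE_OFFLINE)

# Hypothetical attribute words as an SMB client might see them:
print(safe_to_preview(FILE_ATTRIBUTE_ARCHIVE))                           # True
print(safe_to_preview(FILE_ATTRIBUTE_ARCHIVE | FILE_ATTRIBUTE_OFFLINE))  # False
```

The whole problem discussed in this thread is that Finder and typical Linux tools skip this one-line check and read the data anyway.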
Then tell MacOS users to use Windows if they want to recall files, and pin the blame squarely on Apple to your users. I note that Linux is no better at honouring the offline bit in the SMB protocol than MacOS. Oh, the irony of Windows being the only mainstream OS handling HSM'ed files properly!

JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From christophe.darras at atempo.com Tue May 21 14:07:02 2019
From: christophe.darras at atempo.com (Christophe Darras)
Date: Tue, 21 May 2019 13:07:02 +0000
Subject: [gpfsug-discuss] Spectrum Scale GPFS User Group
Message-ID:

Hello all,

I would like to thank you for welcoming me to this group! My name is Christophe Darras (Chris), based in London and in charge of Atempo for North Europe. We develop DATA MANAGEMENT solutions for Spectrum Scale*: automated data migration and high-performance backup, but also archiving/retrieving/moving large data sets.

Kindest Regards,
Chris

*and other File Systems and large NAS

Christophe DARRAS
Head of North Europe, Middle East & South Africa
Cell. : +44 7555 993 529

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From truston at mbari.org Tue May 21 18:59:05 2019
From: truston at mbari.org (Todd Ruston)
Date: Tue, 21 May 2019 10:59:05 -0700
Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: References: Message-ID: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org>

Hi Indulis,

Yes, thanks for the reminder. I'd come across that, and our system is currently set to a stub size of zero (the default, I presume). I'd intended to ask in my original query whether anyone had experimented and found an optimal value that prevents most common inadvertent recalls by Macs.
I know that will likely vary by file type, but since we have a broad mix of file types I figure a value that covers the majority of cases without being excessively large is the best we could implement. Our system is using 16MiB blocks, with 1024 subblocks. Is stub size bounded by full blocks, or subblocks? In other words, would we need to set the stub value to increments of 16MiB, or 16KiB? Cheers, - Todd > On May 21, 2019, at 2:34 AM, Indulis Bernsteins1 wrote: > > Have you tried looking at Spectrum Archive setting instead of Spectrum Scale? > > You can set both the size of the "stub file" that remains behind when a file is migrated, and also the amount of data which would need to be read before a recall is triggered. This might catch enough of your recall storms... or at least help! > > IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and Configuration Guide > http://www.redbooks.ibm.com/abstracts/sg248333.html?Open > > 7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file > IBM Spectrum Archive EE can define a stub size for migrated files so that the stub size initial > bytes of a migrated file are kept on disk while the entire file is migrated to tape. The migrated > file bytes that are kept on the disk are called the stub. Reading from the stub does not trigger > a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered. The > recall might take a long time while the entire file is read from tape because a tape mount > might be required, and it takes time to position the tape before data can be recalled from tape. > When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a > recall of the complete file in the background (asynchronous). Reads from the stubs are still > possible while the rest of the file is being recalled. After the rest of the file is recalled to disks, > reads from any file part are possible. 
> With the Preview Size (PS) value, a preview size can be set to define the initial file part size > for which any reads from the resident file part does not trigger a recall. Typically, the PS value > is large enough to see whether a recall of the rest of the file is required without triggering a > recall for reading from every stub. This process is important to prevent unintended massive > recalls. The PS value can be set only smaller than or equal to the stub size. > This feature is useful, for example, when playing migrated video files. While the initial stub > size part of a video file is played, the rest of the video file can be recalled to prevent a pause > when it plays beyond the stub size. You must set the stub size and preview size to be large > enough to buffer the time that is required to recall the file from tape without triggering recall > storms. > Use the following dsmmigfs command options to set both the stub size and preview size of > the file system being managed by IBM Spectrum Archive EE: > dsmmigfs Update -STUBsize > dsmmigfs Update -PREViewsize > The value for the STUBsize is a multiple of the IBM Spectrum Scale file system's block size. > This value can be obtained by running the mmlsfs . The PREViewsize parameter > must be equal to or less than the STUBsize value. Both parameters take a positive integer in > bytes. > > Regards, > > Indulis Bernsteins > Systems Architect > IBM New Generation Storage > Phone: +44 792 008 6548 > E-mail: INDULISB at UK.IBM.COM > > > Jackson House, Sibson Rd > Sale, Cheshire M33 7RR > United Kingdom > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number 741598. 
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Tue May 21 19:34:12 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 21 May 2019 20:34:12 +0200 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> References: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Message-ID: It's a multiple of full blocks. -jf On Tue, 21 May 2019 at 20:06, Todd Ruston wrote: > Hi Indulis, > > Yes, thanks for the reminder. I'd come across that, and our system is > currently set to a stub size of zero (the default, I presume). I'd intended > to ask in my original query whether anyone had experimented and found an > optimal value that prevents most common inadvertent recalls by Macs. 
> > *IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and > Configuration Guide* > http://www.redbooks.ibm.com/abstracts/sg248333.html?Open > > *7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file* > IBM Spectrum Archive EE can define a stub size for migrated files so that > the stub size initial > bytes of a migrated file are kept on disk while the entire file is > migrated to tape. The migrated > file bytes that are kept on the disk are called the *stub*. Reading from > the stub does not trigger > a recall of the rest of the file. After the file is read beyond the stub, > the recall is triggered. The > recall might take a long time while the entire file is read from tape > because a tape mount > might be required, and it takes time to position the tape before data can > be recalled from tape. > When Read Start Recalls (RSR) is enabled for a file, the first read from > the stub file triggers a > recall of the complete file in the background (asynchronous). Reads from > the stubs are still > possible while the rest of the file is being recalled. After the rest of > the file is recalled to disks, > reads from any file part are possible. > With the Preview Size (PS) value, a preview size can be set to define the > initial file part size > for which any reads from the resident file part does not trigger a recall. > Typically, the PS value > is large enough to see whether a recall of the rest of the file is > required without triggering a > recall for reading from every stub. This process is important to prevent > unintended massive > recalls. The PS value can be set only smaller than or equal to the stub > size. > This feature is useful, for example, when playing migrated video files. > While the initial stub > size part of a video file is played, the rest of the video file can be > recalled to prevent a pause > when it plays beyond the stub size. 
You must set the stub size and preview > size to be large > enough to buffer the time that is required to recall the file from tape > without triggering recall > storms. > Use the following *dsmmigfs *command options to set both the stub size > and preview size of > the file system being managed by IBM Spectrum Archive EE: > *dsmmigfs Update -STUBsize* > *dsmmigfs Update -PREViewsize* > The value for the *STUBsize *is a multiple of the IBM Spectrum Scale file > system's block size. > This value can be obtained by running the *mmlsfs *. The *PREViewsize > *parameter > must be equal to or less than the *STUBsize *value. Both parameters take > a positive integer in > bytes. > > Regards, > > *Indulis Bernsteins* > Systems Architect > IBM New Generation Storage > > ------------------------------ > *Phone:* +44 792 008 6548 > * E-mail:* *INDULISB at UK.IBM.COM * > [image: Description: Description: IBM] > > Jackson House, Sibson Rd > Sale, Cheshire M33 7RR > United Kingdom > Attachment.png> > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue May 21 19:40:56 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 21 May 2019 14:40:56 -0400 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Message-ID: https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.0/com.ibm.itsm.hsmul.doc/c_mig_stub_size.html Trust but verify. And try it before you buy it. (Personally, I would have guessed sub-block, doc says otherwise, but I'd try it nevertheless.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Tue May 21 19:59:14 2019 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 21 May 2019 18:59:14 +0000 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk> References: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk>, <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Message-ID: An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Wed May 22 09:50:22 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 22 May 2019 10:50:22 +0200 Subject: [gpfsug-discuss] Save the date - User Meeting along ISC Frankfurt Message-ID: Greetings: IBM will host a joint "IBM Spectrum Scale and IBM Spectrum LSF User Meeting" at ISC. As with other user group meetings, the agenda will include user stories, updates on IBM Spectrum Scale & IBM Spectrum LSF, and access to IBM experts and your peers. We are still looking for customers to talk about their experience with Spectrum Scale and/or Spectrum LSF. Please send me a personal mail if you are interested in talking. The meeting is planned for: Monday June 17th, 2019 - 1pm-5.30pm ISC Frankfurt, Germany I will send more details later. 
Best, Ulf -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From INDULISB at uk.ibm.com Wed May 22 11:19:55 2019 From: INDULISB at uk.ibm.com (Indulis Bernsteins1) Date: Wed, 22 May 2019 11:19:55 +0100 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: Message-ID: There was some horrible way to do the same thing in previous versions of Spectrum Archive using the policy engine, which was more granular than the dsmmigfs command is now. I will ask one of the Scale developers whether they might consider allowing multiples of the sub-block size, as this would make sense - 16 MiB is a very big stub to leave behind! Regards, Indulis Bernsteins Systems Architect IBM New Generation Storage Phone: +44 792 008 6548 E-mail: INDULISB at UK.IBM.COM Jackson House, Sibson Rd Sale, Cheshire M33 7RR United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10045 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10249 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/png Size: 10012 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10031 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 11771 bytes Desc: not available URL: From l.walid at powerm.ma Thu May 23 00:59:40 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Wed, 22 May 2019 23:59:40 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows Message-ID: Hi, We are contacting you regarding a behavior observed on our customer's GPFS SMB shares. When we try to view file/folder properties, the reported folder/file size differs significantly from the size on disk. We tried to reproduce this by creating a simple 1 KB text file, and when we checked the file's properties it showed 1 MB on disk! I tried changing the block size of the filesystem from 4M to 256k, but got the same results. Thank you -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From l.walid at powerm.ma Thu May 23 02:00:17 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Thu, 23 May 2019 01:00:17 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows In-Reply-To: References: Message-ID: Hi Everyone, Through some research, I found this is normal behavior related to the Samba "allocation roundup size" parameter; since CES SMB is based on Samba, that explains the behavior. (Windows assumes that the default size for a block is 1M.) I also found elsewhere that changing this parameter can decrease performance, so please advise on this if possible. For the block size on the filesystem I would still go with 256k, since it's the recommended value for file-serving use cases. Thank you References : https://lists.samba.org/archive/samba-technical/2016-July/115166.html On Wed, May 22, 2019 at 11:59 PM L.walid (PowerM) wrote: > Hi, > > We are contacting you regarding a behavior observed for our customer gpfs > smb shares. When we try to view the file/folder properties, the values > reported are significantly different from the folder/size and the > folder/file size on disk. > > We tried to reproduce with creating a simple text file of 1ko and when we > check the properties of the file it was a 1Mo on disk! > > I tried changing the block size of the fs from 4M to 256k , but still the > same results > > Thank you > -- > Best regards, > > Walid Largou > Senior IT Specialist > Power Maroc > Mobile : +212 621 31 98 71 > Email: l.walid at powerm.ma > 320 Bd Zertouni 6th Floor, Casablanca, Morocco > https://www.powerm.ma > > > This message is confidential. Its contents do not constitute a commitment > by Power Maroc S.A.R.L except where provided for in a written agreement > between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or > dissemination, either whole or partial, is prohibited. 
If you are not the > intended recipient of the message, please notify the sender immediately. > -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From christof.schmitt at us.ibm.com Thu May 23 05:00:46 2019 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 23 May 2019 04:00:46 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From oluwasijibomi.saula at ndsu.edu Thu May 23 18:40:03 2019 From: oluwasijibomi.saula at ndsu.edu (Saula, Oluwasijibomi) Date: Thu, 23 May 2019 17:40:03 +0000 Subject: [gpfsug-discuss] Reason for shutdown: Reset old shared segment In-Reply-To: References: Message-ID: Hey Folks, I got a strange message on one of my HPC cluster nodes that I'm hoping to understand better: "Reason for shutdown: Reset old shared segment" 2019-05-23_11:47:07.328-0500: [I] This node has a valid standard license 2019-05-23_11:47:07.327-0500: [I] Initializing the fast condition variables at 0x555557115300 ... 2019-05-23_11:47:07.328-0500: [I] mmfsd initializing. {Version: 5.0.0.0 Built: Dec 10 2017 16:59:21} ... 2019-05-23_11:47:07.328-0500: [I] Cleaning old shared memory ... 
2019-05-23_11:47:07.328-0500: [N] mmfsd is shutting down. 2019-05-23_11:47:07.328-0500: [N] Reason for shutdown: Reset old shared segment Shortly after, GPFS is back up without any intervention: 2019-05-23_11:47:52.685-0500: [N] Remounted gpfs1 2019-05-23_11:47:52.691-0500: [N] mmfsd ready I'm supposing this has to do with memory usage??... Thanks, Siji Saula HPC System Administrator Center for Computationally Assisted Science & Technology NORTH DAKOTA STATE UNIVERSITY Research 2 Building, Room 220B Dept 4100, PO Box 6050 / Fargo, ND 58108-6050 p:701.231.7749 www.ccast.ndsu.edu | www.ndsu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Thu May 23 19:16:33 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Thu, 23 May 2019 14:16:33 -0400 Subject: [gpfsug-discuss] Reason for shutdown: Reset old shared segment In-Reply-To: References: Message-ID: (Somewhat educated guess.) Somehow a previous incarnation of the mmfsd daemon was killed, but left its shared segment lying about. When GPFS is restarted, it discovers the old segment and deallocates it, etc, etc... Then the safest, easiest thing to do after going down that error recovery path is to quit and (re)start GPFS as if none of that ever happened. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpergamin at ddn.com Wed May 29 12:54:46 2019 From: rpergamin at ddn.com (Ran Pergamin) Date: Wed, 29 May 2019 11:54:46 +0000 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Message-ID: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Hi All, My customer has some nodes in the cluster which currently have their second IB port disabled. Spectrum Scale 4.2.3 update 13. Port 1 is defined in verbsPorts, yet the sysmon component monitors and reports an error on port 2 despite it not being used. 
I found an old mailing list post claiming it would be solved in 4.2.3-update5, yet there is nothing in the 4.2.3-update7 release notes about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html The filters section in the sensor file says filters are not supported and apply to ALL nodes, so it is not usable where I need to ignore the sensor on only some nodes. Any idea how I can disable the check of the sensor on mlx4_0/2 on some of the nodes?

Node name: cff003-ib0.chemfarm
Node status: DEGRADED
Status Change: 2019-05-29 12:29:49

Component Status Status Change Reasons
-------------------------------------------------------------------------------------------------------------------------------------------------
GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small
NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2)
ib0 HEALTHY 2019-05-29 12:29:49 -
mlx4_0/1 HEALTHY 2019-05-29 12:29:49 -
mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized
FILESYSTEM HEALTHY 2019-05-29 12:29:48 -
apps HEALTHY 2019-05-29 12:29:48 -
data HEALTHY 2019-05-29 12:29:48 -
PERFMON HEALTHY 2019-05-29 12:29:33 -
THRESHOLD HEALTHY 2019-05-29 12:29:18 -

Thanks ! Regards, Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From spectrumscale at kiranghag.com Wed May 29 13:14:17 2019 From: spectrumscale at kiranghag.com (KG) Date: Wed, 29 May 2019 17:44:17 +0530 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. In-Reply-To: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: This is a per-node setting, so you should be able to set the correct port for each node (mmchconfig -N) On Wed, May 29, 2019 at 5:24 PM Ran Pergamin wrote: > Hi All, > > My customer has some nodes in the cluster which current have their second > IB port disabled. > Spectrum scale 4.2.3 update 13. 
> > Port 1 is defined in verbs port, yet sysmoncon monitor and reports error > on port 2 despite not being used. > > I found an old listing claiming it will be solved in in 4.2.3-update5, yet > nothing in 4.2.3-update7 release notes, about it. > > > https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html > > Filters in sensor file say filters are not support + apply to ALL nodes, > so no relevant where I need to ignore it. > > Any idea how can I disable the check of sensor on mlx4_0/2 on some of the > nodes ? > > > > Node name: cff003-ib0.chemfarm > > Node status: DEGRADED > > Status Change: 2019-05-29 12:29:49 > > > > Component Status Status Change Reasons > > > ------------------------------------------------------------------------------------------------------------------------------------------------- > > GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small > > NETWORK DEGRADED 2019-05-29 12:29:49 > ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), > ib_rdma_nic_unrecognized(mlx4_0/2) > > ib0 HEALTHY 2019-05-29 12:29:49 - > > mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - > > * mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, > ib_rdma_nic_down, ib_rdma_nic_unrecognized* > > FILESYSTEM HEALTHY 2019-05-29 12:29:48 - > > apps HEALTHY 2019-05-29 12:29:48 - > > data HEALTHY 2019-05-29 12:29:48 - > > PERFMON HEALTHY 2019-05-29 12:29:33 - > > THRESHOLD HEALTHY 2019-05-29 12:29:18 - > > > > > Thanks ! > > Regards, > Ran > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From MDIETZ at de.ibm.com Wed May 29 13:19:51 2019 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Wed, 29 May 2019 14:19:51 +0200 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. 
In-Reply-To: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: Hi Ran, please double check that port 2 config is not yet active for the running mmfsd daemon. When changing the verbsPorts, the daemon keeps using the old value until a restart is done. mmdiag --config | grep verbsPorts Mit freundlichen Grüßen / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp, Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Ran Pergamin To: gpfsug main discussion list Date: 29/05/2019 13:54 Subject: [EXTERNAL] [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, My customer has some nodes in the cluster which current have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbs port, yet sysmoncon monitor and reports error on port 2 despite not being used. I found an old listing claiming it will be solved in in 4.2.3-update5, yet nothing in 4.2.3-update7 release notes, about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html Filters in sensor file say filters are not support + apply to ALL nodes, so no relevant where I need to ignore it. Any idea how can I disable the check of sensor on mlx4_0/2 on some of the nodes ? 
Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=9dCEbNr27klWay2AcOfvOE1xq50K-CyRUu4qQx4HOlk&m=nFF5UhMPmV8schGYYE3L6ZG86b1SiY3-eXi4mz3CQxE&s=Y2emO_gUxLk44-GrE4_tOeQKWZsH1fZgNP4tELnjx_g&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpergamin at ddn.com Wed May 29 13:26:40 2019 From: rpergamin at ddn.com (Ran Pergamin) Date: Wed, 29 May 2019 12:26:40 +0000 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. In-Reply-To: References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: Thanks All. Solved it. The other port Link Layer was in autosense rather than IB. Once the Link Layer was changed to IB, the false report cleared. I assume that's the actual fix that was applied. 
Regards, Ran From: on behalf of Mathias Dietz Reply-To: gpfsug main discussion list Date: Wednesday, 29 May 2019 at 15:20 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Hi Ran, please double check that port 2 config is not yet active for the running mmfsd daemon. When changing the verbsPorts, the daemon keeps using the old value until a restart is done. mmdiag --config | grep verbsPorts Mit freundlichen Grüßen / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp, Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Ran Pergamin To: gpfsug main discussion list Date: 29/05/2019 13:54 Subject: [EXTERNAL] [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, My customer has some nodes in the cluster which current have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbs port, yet sysmoncon monitor and reports error on port 2 despite not being used. I found an old listing claiming it will be solved in in 4.2.3-update5, yet nothing in 4.2.3-update7 release notes, about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html Filters in sensor file say filters are not support + apply to ALL nodes, so no relevant where I need to ignore it. 
Any idea how can I disable the check of sensor on mlx4_0/2 on some of the nodes ? Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mweil at wustl.edu Fri May 31 19:56:38 2019 From: mweil at wustl.edu (Weil, Matthew) Date: Fri, 31 May 2019 18:56:38 +0000 Subject: [gpfsug-discuss] Gateway role on an NSD server Message-ID: Hello all, How important is it to separate these two roles? Planning on using AFM, I am wondering if we should have the gateways on different nodes than the NSDs. Any opinions? What about failovers and maintenance? Could one role affect the other? Thanks Matt From cblack at nygenome.org Fri May 31 20:09:46 2019 From: cblack at nygenome.org (Christopher Black) Date: Fri, 31 May 2019 19:09:46 +0000 Subject: [gpfsug-discuss] Gateway role on an NSD server Message-ID: <59BC2553-2F56-4863-A353-C2E2062DA92D@nygenome.org> We've done it both ways. 
You will get better performance, and fewer challenges in ensuring processes and memory don't step on each other, if the AFM gateway node is not also doing NSD server work. However, using an NSD server that mounts two filesystems (one via mmremotefs from another cluster) did work.

Best,
Chris

On 5/31/19, 2:56 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Weil, Matthew" wrote:

    Hello all,

    How important is it to separate these two roles? Planning on using AFM, and I am wondering if we should have the gateways on different nodes than the NSDs. Any opinions? What about failovers and maintenance? Could one role affect the other?

    Thanks
    Matt
    _______________________________________________
    gpfsug-discuss mailing list
    gpfsug-discuss at spectrumscale.org
    http://gpfsug.org/mailman/listinfo/gpfsug-discuss

________________________________
This message is for the recipient's use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email.

From p.childs at qmul.ac.uk  Tue May 7 15:35:26 2019
From: p.childs at qmul.ac.uk (Peter Childs)
Date: Tue, 7 May 2019 14:35:26 +0000
Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL
In-Reply-To:
References:
Message-ID: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk>

On Sat, 2019-04-06 at 23:50 +0200, Michal Zacek wrote:

Hello,

we decided to convert NFS4 acl to POSIX (we need to share the same data between SMB, NFS and GPFS clients), so I created a script to convert NFS4 to posix ACL. It is very simple: first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that the conversion to posix acl has taken more than 2TB of metadata space. There are about one hundred million files on the GPFS filesystem. Is this expected behavior?
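The two-step conversion Michal describes might look like the sketch below (the directory path is hypothetical, and the exact `-m` entries, including the `d:` default entries for inheritance, are assumptions based on the ACL examples further down the thread):

```shell
# 1. collapse the mode bits first, as described
chmod -R 770 /gpfs/fs1/projects/lab96

# 2. then lay down the POSIX ACL, with matching default (inherited) entries
setfacl -R \
  -m g:ag_cud_96_lab:rwx,d:g:ag_cud_96_lab:rwx \
  -m g:ag_cud_96_lab_ro:r-x,d:g:ag_cud_96_lab_ro:r-x \
  /gpfs/fs1/projects/lab96
```

Note the `d:` entries only apply to directories; `setfacl` skips default entries on plain files automatically.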
Thanks,
Michal

Example of NFS4 acl:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwx-:allow
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL  (X)READ_ATTR  (X)READ_NAMED
 (-)DELETE    (X)DELETE_CHILD (-)CHOWN        (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:----:allow
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL  (X)READ_ATTR  (X)READ_NAMED
 (-)DELETE    (-)DELETE_CHILD (-)CHOWN        (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL  (X)READ_ATTR  (X)READ_NAMED
 (-)DELETE    (-)DELETE_CHILD (-)CHOWN        (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED

group:ag_cud_96_lab:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL  (X)READ_ATTR  (X)READ_NAMED
 (-)DELETE    (X)DELETE_CHILD (-)CHOWN        (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

group:ag_cud_96_lab_ro:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL  (X)READ_ATTR  (X)READ_NAMED
 (-)DELETE    (-)DELETE_CHILD (-)CHOWN        (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED

converted to posix acl:

# owner: root
# group: root
user::rwx
group::rwx
mask::rwx
other::---
default:user::rwx
default:group::rwx
default:mask::rwx
default:other::---
group:ag_cud_96_lab:rwx
default:group:ag_cud_96_lab:rwx
group:ag_cud_96_lab_ro:r-x
default:group:ag_cud_96_lab_ro:r-x
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

I've been trying to get my head round acls, with the plan to
implement Cluster Export Services SMB rather than roll your own SMB. I'm not sure that plan is going to work, Michal, although it might if you're not using the Cluster Export Services version of SMB.

Put simply, if you're running Cluster Export Services SMB you need to set the ACL semantics in Spectrum Scale to "nfs4"; we currently have it set to "all", and it won't let you export the shares until you change it. Currently I'm still testing, and have had to write a change to go the other way. If you're using Linux kernel NFSv4, that uses POSIX ACLs; however CES NFS uses Ganesha, which uses NFSv4 ACLs correctly.

It gets slightly more annoying, as nfs4-setfacl does not work with Spectrum Scale and you have to use mmputacl, which has no recursive flag. I even found an IBM article from a few years ago saying the best way to set ACLs is to use find and a temporary file..... The other workaround they suggest is to update ACLs from Windows or NFS to get them right.

One thing I think may happen if you do as you've suggested is that you will break any ACLs under Samba badly. I think the other reason that command is taking up more space than expected is that you're giving files ACLs that never had them to start with.

I would love someone to say that I'm wrong, as changing our ACL setting is going to be a pain; while we don't make a lot of use of them, we make enough that having to use NFSv4 ACLs all the time is going to be a pain.

--
Peter Childs
ITS Research Storage
Queen Mary, University of London
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From makaplan at us.ibm.com  Tue May 7 16:16:52 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Tue, 7 May 2019 11:16:52 -0400
Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL
In-Reply-To:
References:
Message-ID:

2TB of extra metadata space for 100M files with ACLs?! I think that would be 20KB per file! Does seem there's some mistake here. Perhaps 2GB? or 20GB? I don't see how we get to 2 terabytes!
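Marc's per-file figure is easy to check; as a quick sketch (taking the reported "2TB" as TiB):

```python
extra_metadata = 2 * 1024**4   # the reported 2 TB of extra metadata, read as TiB
n_files = 100_000_000          # "about one hundred million files"

per_file = extra_metadata / n_files
print(round(per_file))         # ~21990 bytes, i.e. roughly 20 KB per file
```

That is indeed two orders of magnitude above the couple of hundred bytes a small stored ACL should cost, so something else must be consuming the space.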
ALSO, IIRC GPFS is supposed to use an ACL scheme where identical ACLs are stored once and each file with the same ACL just has a pointer to that same ACL. So no matter how many files have a particular ACL, you only "pay" once... An ACL is stored more compactly than its printed format, so I'd guess your ordinary ACL with a few users and groups would be less than 200 bytes.

From: Michal Zacek

Hello, we decided to convert NFS4 acl to POSIX (we need to share the same data between SMB, NFS and GPFS clients), so I created a script to convert NFS4 to posix ACL. It is very simple: first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that the conversion to posix acl has taken more than 2TB of metadata space. There are about one hundred million files on the GPFS filesystem. Is this expected behavior?

Thanks,
Michal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonathan.buzzard at strath.ac.uk  Tue May 7 17:14:49 2019
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 07 May 2019 17:14:49 +0100
Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL
In-Reply-To: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk>
References: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk>
Message-ID:

On Tue, 2019-05-07 at 14:35 +0000, Peter Childs wrote:

[SNIP]

> It gets slightly more annoying as nfs4-setfacl does not work with
> Spectrum Scale and you have to use mmputacl which has no recursive
> flag, I even found a ibm article from a few years ago saying the best
> way to set acls is to use find, and a temporary file..... The other
> workaround they suggest is to update acls from windows or nfs to get
> the right.
>

I am working on making my solution to that production ready. I decided, after doing a proof of concept with the Linux nfs4_[get|set]facl commands, that using the FreeBSD getfacl/setfacl commands as a basis would be better, as it could handle both POSIX and NFSv4 ACLs from the same program.
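For reference, the find-and-temporary-file workaround Peter mentioned usually looks something like this sketch (the paths are hypothetical; `mmgetacl -o` writes an ACL to a file and `mmputacl -i` applies one from a file):

```shell
# capture the ACL of a directory that already has the desired entries
mmgetacl -o /tmp/template.acl /gpfs/fs1/lab96

# mmputacl has no recursive flag, so walk the tree with find
find /gpfs/fs1/lab96 -exec mmputacl -i /tmp/template.acl {} \;
```

This applies the same ACL to files and directories alike, so in practice you may want separate template files and two `find` passes with `-type f` and `-type d`.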
Note the initial version will be something of a bodge, where we translate between the existing program's representation of the ACL and the GPFS version as we read/write the ACLs. Longer term the code will need refactoring to use the GPFS structs throughout, I feel. Progress depends on my spare time.

JAB.

--
Jonathan A. Buzzard    Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From Robert.Oesterlin at nuance.com  Wed May 8 15:29:57 2019
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 8 May 2019 14:29:57 +0000
Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups
Message-ID: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com>

Reference: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_configcesprotocolservipadd.htm

I have 3 CES servers with IP addresses:

Node1 10.30.43.14  (netmask 255.255.255.224)  export IP 10.30.43.25
Node2 10.30.43.24  (netmask 255.255.255.224)  export IP 10.30.43.27
Node3 10.30.43.133 (netmask 255.255.255.224)  export IP 10.30.43.135

Which means node 3 is on a different vlan. I want to assign export addresses to them and keep the export IPs on the correct vlan. This looks like it can be done with groups, but I'm not sure if I have the grouping right.

I was considering the following:

mmces address add --ces-ip 10.30.43.25 --ces-group vlan431
mmces address add --ces-ip 10.30.43.27 --ces-group vlan431
mmces address add --ces-ip 10.30.43.135 --ces-group vlan435

Which should mean nodes in group 'vlan431' will get IPs 10.30.43.25, 10.30.43.27 and the node in group 'vlan435' will get IP 10.30.43.135 (and it will remain unassigned if that node goes down).

Do I have this right?

Bob Oesterlin
Sr Principal Storage Engineer, Nuance
-------------- next part --------------
An HTML attachment was scrubbed...
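Read together with Mathias's answer below, the full sketch would be (node and group names taken from Bob's example; verify against your own cluster before running):

```shell
# tag each node with the group of addresses it may host
mmchnode --ces-group vlan431 -N Node1,Node2
mmchnode --ces-group vlan435 -N Node3

# tag each export address with its group
mmces address add --ces-ip 10.30.43.25  --ces-group vlan431
mmces address add --ces-ip 10.30.43.27  --ces-group vlan431
mmces address add --ces-ip 10.30.43.135 --ces-group vlan435
```

With both halves in place, CES will only fail a group's addresses over to nodes carrying that group, which is what keeps each export IP on its own vlan.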
URL:

From MDIETZ at de.ibm.com  Wed May 8 16:58:59 2019
From: MDIETZ at de.ibm.com (Mathias Dietz)
Date: Wed, 8 May 2019 17:58:59 +0200
Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups
In-Reply-To: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com>
References: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com>
Message-ID:

Hi Bob,

you also need to specify which ces groups a node can host:

mmchnode --ces-group vlan431 -N Node1,Node2
mmchnode --ces-group vlan435 -N Node3

Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105 Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From: "Oesterlin, Robert"
To: gpfsug main discussion list
Date: 08/05/2019 16:31
Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Reference: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_configcesprotocolservipadd.htm

I have 3 CES servers with IP addresses:

Node1 10.30.43.14  (netmask 255.255.255.224)  export IP 10.30.43.25
Node2 10.30.43.24  (netmask 255.255.255.224)  export IP 10.30.43.27
Node3 10.30.43.133 (netmask 255.255.255.224)  export IP 10.30.43.135

Which means node 3 is on a different vlan. I want to assign export addresses to them and keep the export IPs on the correct vlan. This looks like it can be done with groups, but I'm not sure if I have the grouping right.
I was considering the following:

mmces address add --ces-ip 10.30.43.25 --ces-group vlan431
mmces address add --ces-ip 10.30.43.27 --ces-group vlan431
mmces address add --ces-ip 10.30.43.135 --ces-group vlan435

Which should mean nodes in group 'vlan431' will get IPs 10.30.43.25, 10.30.43.27 and the node in group 'vlan435' will get IP 10.30.43.135 (and it will remain unassigned if that node goes down).

Do I have this right?

Bob Oesterlin
Sr Principal Storage Engineer, Nuance
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xhejtman at ics.muni.cz  Wed May 8 17:03:59 2019
From: xhejtman at ics.muni.cz (Lukas Hejtmanek)
Date: Wed, 8 May 2019 18:03:59 +0200
Subject: [gpfsug-discuss] gpfs and device number
In-Reply-To: References: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz>
Message-ID: <20190508160359.j4tzg3wpo3cnmp6y@ics.muni.cz>

Hi,

I use fsid=0 (having one export). It seems there is some incompatibility between gpfs and redhat 3.10.0-957. We have gpfs 5.0.2-1; I can see that 5.0.2-2 is tested, so maybe it is fixed in later gpfs versions.

On Sat, Apr 27, 2019 at 10:37:48PM +0300, Tomer Perry wrote:
> Hi,
>
> Please use the fsid option in /etc/exports ( man exports and:
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adm_nfslin.htm )
> Also check
> https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adv_cnfs.htm
> in case you want HA with kernel NFS.
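The fsid option Tomer refers to pins the filesystem identifier that kernel NFS puts into its file handles, so the handles no longer depend on the device number that varies across the cluster. A sketch of such an /etc/exports entry (the path matches Lukas's example; the other options are illustrative):

```
/gpfs/vol1  *(rw,sync,fsid=1)
```

Each export needs a value that is unique on that server but identical on every server that may serve the export after a failover.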
>
> Regards,
>
> Tomer Perry
> Scalable I/O Development (Spectrum Scale)
> email: tomp at il.ibm.com
> 1 Azrieli Center, Tel Aviv 67021, Israel
> Global Tel: +1 720 3422758
> Israel Tel: +972 3 9188625
> Mobile: +972 52 2554625
>
> From: Lukas Hejtmanek
> To: gpfsug-discuss at spectrumscale.org
> Date: 26/04/2019 15:37
> Subject: [gpfsug-discuss] gpfs and device number
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> Hello,
>
> I noticed that from time to time, the device id of a gpfs volume is not the
> same across the whole gpfs cluster.
>
> [root at kat1 ~]# stat /gpfs/vol1/
>   File: '/gpfs/vol1/'
>   Size: 262144    Blocks: 512    IO Block: 262144    directory
> Device: 28h/40d    Inode: 3
>
> [root at kat2 ~]# stat /gpfs/vol1/
>   File: '/gpfs/vol1/'
>   Size: 262144    Blocks: 512    IO Block: 262144    directory
> Device: 2bh/43d    Inode: 3
>
> [root at kat3 ~]# stat /gpfs/vol1/
>   File: '/gpfs/vol1/'
>   Size: 262144    Blocks: 512    IO Block: 262144    directory
> Device: 2ah/42d    Inode: 3
>
> this is really bad for kernel NFS, as it uses the device id for file handles;
> thus NFS failover leads to an nfs stale handle error.
>
> Is there a way to force a device number?
>
> --
> Lukáš Hejtmánek
>
> Linux Administrator only because
> Full Time Multitasking Ninja
> is not an official job title
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

--
Lukáš
Hejtmánek

Linux Administrator only because
Full Time Multitasking Ninja
is not an official job title

From stijn.deweirdt at ugent.be  Thu May 9 15:12:10 2019
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Thu, 9 May 2019 16:12:10 +0200
Subject: [gpfsug-discuss] advanced filecache math
Message-ID: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be>

hi all,

we are looking into some memory issues with gpfs 5.0.2.2, and found the following in mmfsadm dump fs:

>    fileCacheLimit     1000000 desired  1000000
...
>    fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)

the limit is 1M (we configured that), however, the fileCacheMem mentions 11.7M?

this is also reported right after a mmshutdown/startup.

how do these 2 relate (again?)?

many thanks,

stijn

From Achim.Rehor at de.ibm.com  Thu May 9 15:34:31 2019
From: Achim.Rehor at de.ibm.com (Achim Rehor)
Date: Thu, 9 May 2019 16:34:31 +0200
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be>
References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be>
Message-ID:

An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 7182 bytes
Desc: not available
URL:

From stijn.deweirdt at ugent.be  Thu May 9 15:38:53 2019
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Thu, 9 May 2019 16:38:53 +0200
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be>
Message-ID: <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be>

hi achim,

> you just misinterpreted the term fileCacheLimit.
> This is not in byte, but specifies the maxFilesToCache setting :

i understand that, but how does the fileCacheLimit relate to the fileCacheMem number?
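The two numbers in the dump are internally consistent, just not with the configured limit; a quick sketch of the arithmetic:

```python
entries = 11_718_554      # cached entries reported in fileCacheMem
per_entry = 3352          # bytes each: inode size 512 + 2840 overhead
limit = 1_000_000         # the configured fileCacheLimit (maxFilesToCache)

total_kib = entries * per_entry / 1024
print(round(total_kib))   # ~38.36 million KiB, matching the 38359956 k in the dump
print(entries / limit)    # ~11.7x over the configured limit
```

So the open question in the thread is why the entry count sits at more than eleven times the configured limit, not how the memory figure is derived from it.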
(we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we are looking for large numbers that might explain wtf is going on (pardon my french ;)

stijn

> UMALLOC limits:
>      bufferDescLimit      40000 desired    40000
>      fileCacheLimit   4000 desired    4000   <=== mFtC
>      statCacheLimit   1000 desired    1000   <=== mSC
>      diskAddrBuffLimit      200 desired      200
>
> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache"
>    maxFilesToCache 4000
>    maxStatCache 1000
>
> Mit freundlichen Grüßen / Kind regards
>
> *Achim Rehor*
>
> --------------------------------------------------------------------------------
> Software Technical Support Specialist AIX/ Emea HPC Support
> IBM Certified Advanced Technical Expert - Power Systems with AIX
> TSCC Software Service, Dept. 7922
> Global Technology Services
> --------------------------------------------------------------------------------
> Phone: +49-7034-274-7862    IBM Deutschland
> E-Mail: Achim.Rehor at de.ibm.com    Am Weiher 24
> 65451 Kelsterbach
> Germany
>
> --------------------------------------------------------------------------------
> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
> Geschäftsführung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz,
> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt
> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB
> 14562 WEEE-Reg.-Nr. DE 99369940
>
> From: Stijn De Weirdt
> To: gpfsug main discussion list
> Date: 09/05/2019 16:21
> Subject: [gpfsug-discuss] advanced filecache math
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> hi all,
>
> we are looking into some memory issues with gpfs 5.0.2.2, and found
> following in mmfsadm dump fs:
>
> >    fileCacheLimit     1000000 desired  1000000
> ...
> >    fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)
>
> the limit is 1M (we configured that), however, the fileCacheMem mentions
> 11.7M?
>
> this is also reported right after a mmshutdown/startup.
>
> how do these 2 relate (again?)?
>
> many thanks,
>
> stijn
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From stijn.deweirdt at ugent.be  Thu May 9 15:48:13 2019
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Thu, 9 May 2019 16:48:13 +0200
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be>
References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be>
Message-ID: <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>

seems like we are suffering from http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737

as these are ces nodes, we suspected something wrong with the caches, but it looks like a memleak instead.

sorry for the noise (as usual you find the solution right after sending the mail ;)

stijn

On 5/9/19 4:38 PM, Stijn De Weirdt wrote:
> hi achim,
>
>> you just misinterpreted the term fileCacheLimit.
>> This is not in byte, but specifies the maxFilesToCache setting :
> i understand that, but how does the fileCacheLimit relate to the
> fileCacheMem number?
> > > > (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we > are looking for large numbers that might explain wtf is going on > (pardon my french ;) > > stijn > >> >> UMALLOC limits: >> bufferDescLimit 40000 desired 40000 >> fileCacheLimit 4000 desired 4000 <=== mFtC >> statCacheLimit 1000 desired 1000 <=== mSC >> diskAddrBuffLimit 200 desired 200 >> >> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" >> maxFilesToCache 4000 >> maxStatCache 1000 >> >> Mit freundlichen Gr??en / Kind regards >> >> *Achim Rehor* >> >> -------------------------------------------------------------------------------- >> Software Technical Support Specialist AIX/ Emea HPC Support >> IBM Certified Advanced Technical Expert - Power Systems with AIX >> TSCC Software Service, Dept. 7922 >> Global Technology Services >> -------------------------------------------------------------------------------- >> Phone: +49-7034-274-7862 IBM Deutschland >> E-Mail: Achim.Rehor at de.ibm.com Am Weiher 24 >> 65451 Kelsterbach >> Germany >> >> -------------------------------------------------------------------------------- >> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter >> Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz, >> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt >> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB >> 14562 WEEE-Reg.-Nr. DE 99369940 >> >> >> >> >> >> >> From: Stijn De Weirdt >> To: gpfsug main discussion list >> Date: 09/05/2019 16:21 >> Subject: [gpfsug-discuss] advanced filecache math >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> -------------------------------------------------------------------------------- >> >> >> >> hi all, >> >> we are looking into some memory issues with gpfs 5.0.2.2, and found >> following in mmfsadm dump fs: >> >> > fileCacheLimit 1000000 desired 1000000 >> ... 
>> >    fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)
>>
>> the limit is 1M (we configured that), however, the fileCacheMem mentions
>> 11.7M?
>>
>> this is also reported right after a mmshutdown/startup.
>>
>> how do these 2 relate (again?)?
>>
>> many thanks,
>>
>> stijn
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From Achim.Rehor at de.ibm.com  Thu May 9 17:52:14 2019
From: Achim.Rehor at de.ibm.com (Achim Rehor)
Date: Thu, 9 May 2019 18:52:14 +0200
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
Message-ID:

An HTML attachment was scrubbed...
URL:

From oehmes at gmail.com  Thu May 9 18:24:42 2019
From: oehmes at gmail.com (Sven Oehme)
Date: Thu, 9 May 2019 18:24:42 +0100
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
Message-ID:

Unfortunately it's more complicated :) The consumption here is an estimate based on 512-byte inodes, which no newly created filesystem has, as all new ones default to 4k. So if you have 4k inodes you could easily need 2x the estimated value. Then there are extended attributes, also not added here, etc.
So don't take this number as usage, it's really just a rough estimate. Sven On Thu, May 9, 2019, 5:53 PM Achim Rehor wrote: > Sorry for my fast ( and not well thought) answer, before. You obviously > are correct, there is no relation between the setting of maxFilesToCache, > and the > > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + > 2840) > > usage. it is rather a statement of how many metadata may fit in the > remaining structures outside the pagepool. this value does not change at > all, when you modify your mFtC setting. > > There is a really good presentation by Tomer Perry on the User Group > meetings, referring about memory footprint of GPFS under various conditions. > > In your case, you may very well hit the CES nodes memleak you just pointed > out. > > Sorry for my hasty reply earlier ;) > > Achim > > > > From: Stijn De Weirdt > To: gpfsug-discuss at spectrumscale.org > Date: 09/05/2019 16:48 > Subject: Re: [gpfsug-discuss] advanced filecache math > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > seems like we are suffering from > http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737 > > as these are ces nodes, we susepcted something wrong the caches, but it > looks like a memleak instead. > > sorry for the noise (as usual you find the solution right after sending > the mail ;) > > stijn > > On 5/9/19 4:38 PM, Stijn De Weirdt wrote: > > hi achim, > > > >> you just misinterpreted the term fileCacheLimit. > >> This is not in byte, but specifies the maxFilesToCache setting : > > i understand that, but how does the fileCacheLimit relate to the > > fileCacheMem number? 
> > > > > > > > (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we > > are looking for large numbers that might explain wtf is going on > > (pardon my french ;) > > > > stijn > > > >> > >> UMALLOC limits: > >> bufferDescLimit 40000 desired 40000 > >> fileCacheLimit 4000 desired 4000 <=== mFtC > >> statCacheLimit 1000 desired 1000 <=== mSC > >> diskAddrBuffLimit 200 desired 200 > >> > >> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" > >> maxFilesToCache 4000 > >> maxStatCache 1000 > >> > >> Mit freundlichen Gr??en / Kind regards > >> > >> *Achim Rehor* > >> > >> > -------------------------------------------------------------------------------- > >> Software Technical Support Specialist AIX/ Emea HPC Support > > >> IBM Certified Advanced Technical Expert - Power Systems with AIX > >> TSCC Software Service, Dept. 7922 > >> Global Technology Services > >> > -------------------------------------------------------------------------------- > >> Phone: +49-7034-274-7862 IBM > Deutschland > >> E-Mail: Achim.Rehor at de.ibm.com Am > Weiher 24 > >> 65451 Kelsterbach > >> Germany > >> > >> > -------------------------------------------------------------------------------- > >> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > >> Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, > Stefan Lutz, > >> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > >> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht > Stuttgart, HRB > >> 14562 WEEE-Reg.-Nr. 
DE 99369940 > >> > >> > >> > >> > >> > >> > >> From: Stijn De Weirdt > >> To: gpfsug main discussion list > >> Date: 09/05/2019 16:21 > >> Subject: [gpfsug-discuss] advanced filecache math > >> Sent by: gpfsug-discuss-bounces at spectrumscale.org > >> > >> > -------------------------------------------------------------------------------- > >> > >> > >> > >> hi all, > >> > >> we are looking into some memory issues with gpfs 5.0.2.2, and found > >> following in mmfsadm dump fs: > >> > >> > fileCacheLimit 1000000 desired 1000000 > >> ... > >> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size > 512 + 2840) > >> > >> the limit is 1M (we configured that), however, the fileCacheMem mentions > >> 11.7M? > >> > >> this is also reported right after a mmshutdown/startup. > >> > >> how do these 2 relate (again?)? > >> > >> mnay thanks, > >> > >> stijn > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From jjdoherty at yahoo.com  Thu May 9 22:07:55 2019
From: jjdoherty at yahoo.com (Jim Doherty)
Date: Thu, 9 May 2019 21:07:55 +0000 (UTC)
Subject: Re: [gpfsug-discuss] advanced filecache math
In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
Message-ID: <881377935.34017.1557436075166@mail.yahoo.com>

A couple of observations on memory: a maxFilesToCache object takes anywhere from 6-10K, so 1 million =~ 6-10 Gig. Memory utilized by the mmfsd comes from either the pagepool, the shared memory segment used by MFTC objects, the token memory segment used to track MFTC objects, and (newer) memory used by AFM. If the memory resources are in the mmfsd address space then this will show in the RSS size of the mmfsd. Ignore the VMM size; since the glibc change a while back to allocate a heap for each thread, VMM has become an imaginary number for a large multi-threaded application.

There have been some memory leaks fixed in Ganesha that will be in 4.2.3 PTF15, which is available on fixcentral.

Jim Doherty

On Thursday, May 9, 2019, 1:25:03 PM EDT, Sven Oehme wrote:

Unfortunately it's more complicated :) The consumption here is an estimate based on 512-byte inodes, which no newly created filesystem has, as all new ones default to 4k. So if you have 4k inodes you could easily need 2x the estimated value. Then there are extended attributes, also not added here, etc. So don't take this number as usage, it's really just a rough estimate.

Sven

On Thu, May 9, 2019, 5:53 PM Achim Rehor wrote:

Sorry for my fast (and not well thought) answer before. You obviously are correct, there is no relation between the setting of maxFilesToCache and the

fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)

usage.
it is rather a statement of how much metadata may fit in the remaining structures outside the pagepool. This value does not change at all when you modify your mFtC setting.

There is a really good presentation by Tomer Perry from the User Group meetings on the memory footprint of GPFS under various conditions. In your case, you may very well be hitting the CES nodes memleak you just pointed out.

Sorry for my hasty reply earlier ;)

Achim

From: Stijn De Weirdt
To: gpfsug-discuss at spectrumscale.org
Date: 09/05/2019 16:48
Subject: Re: [gpfsug-discuss] advanced filecache math
Sent by: gpfsug-discuss-bounces at spectrumscale.org

seems like we are suffering from
http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737

as these are ces nodes, we suspected something wrong with the caches, but
it looks like a memleak instead.

sorry for the noise (as usual you find the solution right after sending
the mail ;)

stijn

On 5/9/19 4:38 PM, Stijn De Weirdt wrote:
> hi achim,
>
>> you just misinterpreted the term fileCacheLimit.
>> This is not in byte, but specifies the maxFilesToCache setting:
> i understand that, but how does the fileCacheLimit relate to the
> fileCacheMem number?
>
> (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we
> are looking for large numbers that might explain wtf is going on
> (pardon my french ;)
>
> stijn
>
>> UMALLOC limits:
>>      bufferDescLimit      40000 desired    40000
>>      fileCacheLimit        4000 desired     4000   <=== mFtC
>>      statCacheLimit        1000 desired     1000   <=== mSC
>>      diskAddrBuffLimit      200 desired      200
>>
>> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache"
>>     maxFilesToCache 4000
>>     maxStatCache 1000
>>
>> Mit freundlichen Grüßen / Kind regards
>>
>> *Achim Rehor*
>>
>> --------------------------------------------------------------------------------
>> Software Technical Support Specialist AIX / EMEA HPC Support
>> IBM Certified Advanced Technical Expert - Power Systems with AIX
>> TSCC Software Service, Dept. 7922
>> Global Technology Services
>> --------------------------------------------------------------------------------
>> Phone: +49-7034-274-7862          IBM Deutschland
>> E-Mail: Achim.Rehor at de.ibm.com    Am Weiher 24
>>                                   65451 Kelsterbach
>>                                   Germany
>> --------------------------------------------------------------------------------
>> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
>> Geschäftsführung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz,
>> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt
>> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB
>> 14562 WEEE-Reg.-Nr. DE 99369940
>>
>> From: Stijn De Weirdt
>> To: gpfsug main discussion list
>> Date: 09/05/2019 16:21
>> Subject: [gpfsug-discuss] advanced filecache math
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>>
>> --------------------------------------------------------------------------------
>>
>> hi all,
>>
>> we are looking into some memory issues with gpfs 5.0.2.2, and found the
>> following in mmfsadm dump fs:
>>
>>  >     fileCacheLimit     1000000   desired   1000000
>> ...
>>  >     fileCacheMem      38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840)
>>
>> the limit is 1M (we configured that), however, the fileCacheMem mentions
>> 11.7M?
>>
>> this is also reported right after a mmshutdown/startup.
>>
>> how do these 2 relate (again?)?
>> many thanks,
>>
>> stijn
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anobre at br.ibm.com Thu May 9 22:51:37 2019
From: anobre at br.ibm.com (Anderson Ferreira Nobre)
Date: Thu, 9 May 2019 21:51:37 +0000
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: <881377935.34017.1557436075166@mail.yahoo.com>
References: <881377935.34017.1557436075166@mail.yahoo.com>, <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
Message-ID: 

An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at bham.ac.uk Mon May 13 14:11:06 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Mon, 13 May 2019 13:11:06 +0000
Subject: [gpfsug-discuss] IO-500 and POWER9
Message-ID: 

Hi,

I was wondering if anyone has done anything with the IO-500 and POWER9 systems at all? One of the benchmarks (IOR-HARD-READ) always fails.
Having Slacked the developers, they said: "It looks like data is not synchronized" and "maybe a setting in GPFS is missing, e.g. locking, synchronization, ..."

Now I didn't think there was any way to disable locking in GPFS. We tried some different byte settings for the read and this made the error go away, which apparently indicates a "locking issue -> false sharing of blocks".

We found that 1 or 2 nodes = OK. > 2 nodes breaks with 2ppn, > 2 nodes is OK with 1ppn. (We also got some fsstruct errors when running the mdtests - I have a PMR open for that). Interestingly I ran the test on a bunch of x86 systems, and that ran fine.

So - anyone got any POWER9 (ac922) they could try to see if the benchmarks work for them (just running the ior_hard tests is fine)? Or anyone any suggestions? These are all running Red Hat 7.5 and 5.0.2.3 code BTW.

Thanks

Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From A.Turner at lboro.ac.uk Tue May 14 09:47:12 2019
From: A.Turner at lboro.ac.uk (Aaron Turner)
Date: Tue, 14 May 2019 08:47:12 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Renar.Grunenberg at huk-coburg.de Tue May 14 09:58:07 2019
From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar)
Date: Tue, 14 May 2019 08:58:07 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
In-Reply-To: References: Message-ID: 

Hallo Aaron,

the granularity for handling storage capacity in Scale is the disk, assigned when the filesystem is created. These disks are created as NSDs that represent your physical LUNs. Each filesystem has its own unique set of NSDs (disks). What you want is possible, no problem.

Regards Renar

Renar Grunenberg
Abteilung Informatik - Betrieb
HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: Renar.Grunenberg at huk-coburg.de
Internet: www.huk.de
________________________________
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
________________________________
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet.
This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden.
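(To illustrate Renar's point, a minimal sketch of carving two file systems out of distinct disk sets might look like the following. Device names, NSD names and server names here are hypothetical placeholders, not from this thread; check the mmcrnsd/mmcrfs documentation for your release.)

```shell
# Hypothetical stanza files -- one per file system, listing only "its" LUNs.
cat > fsA.stanza <<'EOF'
%nsd: device=/dev/mapper/a0 nsd=nsd_a0 servers=nsd01,nsd02 usage=dataAndMetadata
%nsd: device=/dev/mapper/a1 nsd=nsd_a1 servers=nsd01,nsd02 usage=dataAndMetadata
EOF
cat > fsB.stanza <<'EOF'
%nsd: device=/dev/mapper/b0 nsd=nsd_b0 servers=nsd01,nsd02 usage=dataAndMetadata
EOF

mmcrnsd -F fsA.stanza && mmcrnsd -F fsB.stanza   # turn the LUNs into NSDs
mmcrfs fsA -F fsA.stanza                         # file system A gets only the a* disks
mmcrfs fsB -F fsB.stanza                         # file system B gets only the b* disks

# Later, fsB can be destroyed without touching fsA:
#   mmdelfs fsB && mmdelnsd nsd_b0
```

Because each NSD belongs to exactly one file system, deleting fsB and its NSDs leaves fsA untouched.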
________________________________
Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Aaron Turner
Gesendet: Dienstag, 14. Mai 2019 10:47
An: gpfsug-discuss at spectrumscale.org
Betreff: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at bham.ac.uk Tue May 14 10:08:28 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Tue, 14 May 2019 09:08:28 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 

When you create the file-system, you create NSD devices (on physical disks - usually LUNs), and then assign these devices as disks to a file-system. This sounds straightforward. Note GPFS isn't really intended for JBODs unless you have GNR code.

Simon

From: on behalf of Aaron Turner
Reply-To: "gpfsug-discuss at spectrumscale.org"
Date: Tuesday, 14 May 2019 at 09:47
To: "gpfsug-discuss at spectrumscale.org"
Subject: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From abeattie at au1.ibm.com Tue May 14 10:17:33 2019
From: abeattie at au1.ibm.com (Andrew Beattie)
Date: Tue, 14 May 2019 09:17:33 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
In-Reply-To: References: Message-ID: 

An HTML attachment was scrubbed...
URL: 

From A.Turner at lboro.ac.uk Tue May 14 14:13:15 2019
From: A.Turner at lboro.ac.uk (Aaron Turner)
Date: Tue, 14 May 2019 13:13:15 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 9
In-Reply-To: References: Message-ID: 

Thanks, Simon,

This is what I thought was the case, and in fact I couldn't see why it would not be. In reality there -are- JBODs involved, so that was a somewhat hypothetical use case initially.
Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org
Sent: 14 May 2019 12:00
To: gpfsug-discuss at spectrumscale.org
Subject: gpfsug-discuss Digest, Vol 88, Issue 9

Send gpfsug-discuss mailing list submissions to
gpfsug-discuss at spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
gpfsug-discuss-request at spectrumscale.org

You can reach the person managing the list at
gpfsug-discuss-owner at spectrumscale.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of gpfsug-discuss digest..."

Today's Topics:

1. Re: Identifiable groups of disks? (Simon Thompson)
2. Re: Identifiable groups of disks? (Andrew Beattie)

----------------------------------------------------------------------

Message: 1
Date: Tue, 14 May 2019 09:08:28 +0000
From: Simon Thompson
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: Content-Type: text/plain; charset="utf-8"

When you create the file-system, you create NSD devices (on physical disks - usually LUNs), and then assign these devices as disks to a file-system. This sounds straightforward. Note GPFS isn't really intended for JBODs unless you have GNR code.

Simon

From: on behalf of Aaron Turner
Reply-To: "gpfsug-discuss at spectrumscale.org"
Date: Tuesday, 14 May 2019 at 09:47
To: "gpfsug-discuss at spectrumscale.org"
Subject: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

------------------------------

Message: 2
Date: Tue, 14 May 2019 09:17:33 +0000
From: "Andrew Beattie"
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: Content-Type: text/plain; charset="us-ascii"

An HTML attachment was scrubbed...
URL: 

------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

End of gpfsug-discuss Digest, Vol 88, Issue 9
*********************************************
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From makaplan at us.ibm.com Tue May 14 18:00:42 2019
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Tue, 14 May 2019 13:00:42 -0400
Subject: [gpfsug-discuss] Identifiable groups of disks?
In-Reply-To: References: Message-ID: 

The simple answer is YES. I think the other replies are questioning whether you really want something different or more robust against failures.

From: Aaron Turner
To: "gpfsug-discuss at spectrumscale.org"
Date: 05/14/2019 04:48 AM
Subject: [EXTERNAL] [gpfsug-discuss] Identifiable groups of disks?
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Scenario:

one set of JBODs
want to create two GPFS file systems
want to ensure that file system A uses physical disks a0, a1...
an-1 and file system B uses physical disks b0, b1... bn-1 want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1 Is this possible in GPFS? Regards _______?_______________________________ Aaron Turner Senior IT Services Specialist in High Performance Computing Loughborough University a.turner at lboro.ac.uk 01509 226185 ______________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=OtYY8BVp6eITFG1uShfpYVLZRwNNia-iJUwMXjZyuNc&s=Haef2-lDTRaLo2K-JNaB6xOK9LOgHg8A0Fn6dc6vOMM&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Philipp.Rehs at uni-duesseldorf.de Wed May 15 09:48:19 2019 From: Philipp.Rehs at uni-duesseldorf.de (Rehs, Philipp Helo) Date: Wed, 15 May 2019 08:48:19 +0000 Subject: [gpfsug-discuss] Enforce ACLs Message-ID: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?

Kind regards

Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41
Raum 00.51

Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 7077 bytes
Desc: not available
URL: 

From S.J.Thompson at bham.ac.uk Wed May 15 10:13:30 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Wed, 15 May 2019 09:13:30 +0000
Subject: [gpfsug-discuss] Enforce ACLs
Message-ID: <8FA1923B-9903-4304-876B-2E492E968C52@bham.ac.uk>

I *think* this behaviour depends on the fileset setting ..
Check what "--allow-permission-change" is set to for the fileset. I think it needs to be "chmodAndUpdateAcl"

Simon

On 15/05/2019, 09:55, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Philipp.Rehs at uni-duesseldorf.de" wrote:

Hello,

we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance.

I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?
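(A sketch of checking and applying Simon's suggestion; the file system and fileset names below are hypothetical.)

```shell
# Show the current setting -- look for the "Permission change flag"
# in the -L output:
mmlsfileset gpfs0 userfileset -L

# Let chmod update (rather than replace) the inherited NFSv4 ACL:
mmchfileset gpfs0 userfileset --allow-permission-change chmodAndUpdateAcl
```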
Kind regards

Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41
Raum 00.51

Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de

From jfosburg at mdanderson.org Wed May 15 11:42:42 2019
From: jfosburg at mdanderson.org (Fosburgh,Jonathan)
Date: Wed, 15 May 2019 10:42:42 +0000
Subject: [gpfsug-discuss] Enforce ACLs
In-Reply-To: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de>
References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de>
Message-ID: <73495e917ff74131bd0511c166f385fa@mdanderson.org>

I'm not 100% sure this is what it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both.

--
Jonathan Fosburgh
Principal Application Systems Analyst
IT Operations Storage Team
The University of Texas MD Anderson Cancer Center
(713) 745-9346

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo
Sent: Wednesday, May 15, 2019 3:48:19 AM
To: gpfsug-discuss at spectrumscale.org
Subject: [EXT] [gpfsug-discuss] Enforce ACLs

Hello,

we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance.
I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?

Kind regards

Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41
Raum 00.51

Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de

The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws.
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From MDIETZ at de.ibm.com Wed May 15 12:14:40 2019
From: MDIETZ at de.ibm.com (Mathias Dietz)
Date: Wed, 15 May 2019 13:14:40 +0200
Subject: [gpfsug-discuss] Enforce ACLs
In-Reply-To: <73495e917ff74131bd0511c166f385fa@mdanderson.org>
References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org>
Message-ID: 

Jonathan is mostly right, except that the option is not in mmlsconfig but part of the filesystem configuration (mmlsfs, mmchfs):

# mmlsfs objfs -k
flag                value                    description
------------------- ------------------------ -----------------------------------
 -k                 nfs4                     ACL semantics in effect

Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From: "Fosburgh,Jonathan"
To: "gpfsug-discuss at spectrumscale.org"
Date: 15/05/2019 12:52
Subject: Re: [gpfsug-discuss] Enforce ACLs
Sent by: gpfsug-discuss-bounces at
spectrumscale.org

________________________________

I'm not 100% sure this is what it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both.

--
Jonathan Fosburgh
Principal Application Systems Analyst
IT Operations Storage Team
The University of Texas MD Anderson Cancer Center
(713) 745-9346

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo
Sent: Wednesday, May 15, 2019 3:48:19 AM
To: gpfsug-discuss at spectrumscale.org
Subject: [EXT] [gpfsug-discuss] Enforce ACLs

Hello,

we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance.
Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=9dCEbNr27klWay2AcOfvOE1xq50K-CyRUu4qQx4HOlk&m=T_hndYqE7LOa07-SB6rtf9IPYJT3XiUhUHcCpwbwduM&s=1Xxw6UtKRGh1T4KLYgawTRpI_E_3jHdYnmAy_1rUSrg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 15 12:20:21 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 15 May 2019 12:20:21 +0100 Subject: [gpfsug-discuss] Enforce ACLs In-Reply-To: <73495e917ff74131bd0511c166f385fa@mdanderson.org> References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org> Message-ID: On Wed, 2019-05-15 at 10:42 +0000, Fosburgh,Jonathan wrote: > I'm not 100% sure this is that it is, but it is most likely your ACL > config. 
If you have to use the nfsv4 ACLs, check in mmlsconfig to > make sure you are only using nfsv4 ACLs. I think the options are > posix, nfsv4, and both. I would guess you are set to both. > I would say the same except the options are actually posix, nfsv4, samba and all and covered by mmlsfs,mmchfs not mmlsconfig. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jfosburg at mdanderson.org Wed May 15 12:24:31 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 15 May 2019 11:24:31 +0000 Subject: [gpfsug-discuss] [EXT] Re: Enforce ACLs In-Reply-To: References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org>, Message-ID: <43a4cc9e539a4e04b70eadf88c7d5457@mdanderson.org> Not bad for having been awake for only half an hour. ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Mathias Dietz Sent: Wednesday, May 15, 2019 6:14:40 AM To: gpfsug main discussion list Subject: [EXT] Re: [gpfsug-discuss] Enforce ACLs WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. 
Jonathan is mostly right, except that the option is not in mmlsconfig but part of the filesystem configuration (mmlsfs, mmchfs):

# mmlsfs objfs -k
flag                value                    description
------------------- ------------------------ -----------------------------------
 -k                 nfs4                     ACL semantics in effect

Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From: "Fosburgh,Jonathan"
To: "gpfsug-discuss at spectrumscale.org"
Date: 15/05/2019 12:52
Subject: Re: [gpfsug-discuss] Enforce ACLs
Sent by: gpfsug-discuss-bounces at spectrumscale.org

________________________________

I'm not 100% sure this is what it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both.

--
Jonathan Fosburgh
Principal Application Systems Analyst
IT Operations Storage Team
The University of Texas MD Anderson Cancer Center
(713) 745-9346

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo
Sent: Wednesday, May 15, 2019 3:48:19 AM
To: gpfsug-discuss at spectrumscale.org
Subject: [EXT] [gpfsug-discuss] Enforce ACLs

Hello,

we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance.
I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?

Kind regards

Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41
Raum 00.51

Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de

The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws.
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben.nickell at inl.gov Thu May 16 17:01:21 2019
From: ben.nickell at inl.gov (Ben G. Nickell)
Date: Thu, 16 May 2019 16:01:21 +0000
Subject: [gpfsug-discuss] mmbuild problem
Message-ID:

First time poster, hopefully not a simple RTFM question; I've done some rudimentary googling. I'm not the GPFS guy, but I'm having a problem building Spectrum Scale 5.0.2.0 on SUSE SLES 12 SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software?
uname -a Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux ./mmbuildgpl --build-package -------------------------------------------------------- mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019. -------------------------------------------------------- Verifying Kernel Header... kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13) module include dir = /lib/modules/4.12.14-95.13-default/build/include module build dir = /lib/modules/4.12.14-95.13-default/build kernel source dir = /usr/src/linux-4.12.14-95.13/include Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include Verifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? 
|| exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in 
cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \
 if [ $? -ne 0 ]; then \
 exit 1;\
 fi
make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
 LD /usr/lpp/mmfs/src/gpl-linux/built-in.o
 CC [M] /usr/lpp/mmfs/src/gpl-linux/tracelin.o
 CC [M] /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o
 CC [M] /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o
 CC [M] /usr/lpp/mmfs/src/gpl-linux/relaytrc.o
 LD [M] /usr/lpp/mmfs/src/gpl-linux/tracedev.o
 CC [M] /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o
 LD [M] /usr/lpp/mmfs/src/gpl-linux/mmfs26.o
 CC [M] /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:65:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/inode.c: In function 'printInode':
/usr/lpp/mmfs/src/gpl-linux/inode.c:136:3: error: aggregate value used where an integer was expected
 TRACE5(TRACE_VNODE, 3, TRCID_PRINTINODE_4, ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: At top level:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:2800:3: error: unknown type name 'wait_queue_t'
 wait_queue_t qwaiter; ^
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: In function 'cxiWaitEventWait':
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3882:3: warning: passing argument 1 of 'init_waitqueue_entry' from incompatible pointer type [enabled by default]
 init_waitqueue_entry(&waitElement.qwaiter, current); ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0, from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5, from /usr/lpp/mmfs/src/gpl-linux/dir.c:50, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:78:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
 static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p) ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3883:3: warning: passing argument 2 of '__add_wait_queue' from incompatible pointer type [enabled by default]
 __add_wait_queue(&waitElement.qhead, &waitElement.qwaiter); ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0, from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5, from /usr/lpp/mmfs/src/gpl-linux/dir.c:50, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:153:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
 static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry) ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiStartIO':
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2474:13: error: 'struct bio' has no member named 'bi_bdev'
 bioP->bi_bdev = bdevP; ^
In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiCleanIO':
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:81: error: 'struct bio' has no member named 'bi_bdev'
 _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains)); ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:395:23: note: in definition of macro '_TRACE_MACRO'
 { _TR_BEFORE; _ktrc; KTRCOPTCODE; _TR_AFTER; } else NOOP ^
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:5: note: in expansion of macro '_TRACE3D'
 _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains)); ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:432:26: note: in expansion of macro 'TRACE_TRCID_WAITIO_BDEVP_CALL'
 _TRACE_MACRO(_c, _l, TRACE_##id##_CALL) ^
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2906:7: note: in expansion of macro 'TRACE3'
 TRACE3(TRACE_IO, 6, TRCID_WAITIO_BDEVP, ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0, from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2915:23: error: 'struct bio' has no member named 'bi_error'
 if (bcP->biop[i]->bi_error) ^
/usr/src/linux-4.12.14-95.13/scripts/Makefile.build:326: recipe for target '/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o' failed
make[5]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1
/usr/src/linux-4.12.14-95.13/Makefile:1557: recipe for target '_module_/usr/lpp/mmfs/src/gpl-linux' failed
make[4]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2
Makefile:152: recipe for target 'sub-make' failed
make[3]: *** [sub-make] Error 2
Makefile:24: recipe for target '__sub-make' failed
make[2]: *** [__sub-make] Error 2
make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
makefile:130: recipe for target 'modules' failed
make[1]: *** [modules] Error 1
make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux'
makefile:148: recipe for target 'Modules' failed
make: *** [Modules] Error 1
--------------------------------------------------------
mmbuildgpl: Building GPL module failed at Thu May 16 09:28:54 MDT 2019.
--------------------------------------------------------
mmbuildgpl: Command failed. Examine previous error messages to determine cause.
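As an aside, the "kernel version = 41214095" integer near the top of the log is just the kernel release string packed into fixed-width digits. The sketch below is a reconstruction of that packing (it reproduces the value mmbuildgpl printed, but it is not an official IBM formula) and can be handy for comparing a node's kernel against the FAQ minimums:

```shell
# Guessed re-derivation of mmbuildgpl's packed kernel version:
# 4.12.14-95.13-default -> components 4, 12, 14, 95 -> 41214095
# (major as-is, then two digits each for minor/patch, three for the
# distro build number). On a live node: release=$(uname -r)
release="4.12.14-95.13-default"
encoded=$(printf '%s' "$release" | awk -F'[.-]' '{printf "%d%02d%02d%03d", $1, $2, $3, $4}')
echo "$encoded"
```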
--
Ben Nickell
-----
Idaho National Laboratory
High Performance Computing System Administrator
Desk: 208-526-4251
Mobile: 208-317-4259

From knop at us.ibm.com Thu May 16 17:12:18 2019
From: knop at us.ibm.com (Felipe Knop)
Date: Thu, 16 May 2019 12:12:18 -0400
Subject: [gpfsug-discuss] mmbuild problem
In-Reply-To: References: Message-ID:

Ben,

According to the FAQ (https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3:

|--------|----------------------|----------------------|-------------------------------------------|-------------------------------------------|
| 12 SP4 | 4.12.14-95.3-default | 4.12.14-95.3-default | From V4.2.3.13 in the 4.2 release         | From V4.2.3.13 in the 4.2 release         |
|        |                      |                      | From V5.0.2.3 or later in the 5.0 release | From V5.0.2.3 or later in the 5.0 release |
|--------|----------------------|----------------------|-------------------------------------------|-------------------------------------------|

Felipe

----
Felipe Knop knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314

From: "Ben G. Nickell"
To: "gpfsug-discuss at spectrumscale.org"
Date: 05/16/2019 12:02 PM
Subject: [EXTERNAL] [gpfsug-discuss] mmbuild problem
Sent by: gpfsug-discuss-bounces at spectrumscale.org
kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13) module include dir = /lib/modules/4.12.14-95.13-default/build/include module build dir = /lib/modules/4.12.14-95.13-default/build kernel source dir = /usr/src/linux-4.12.14-95.13/include Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include Verifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f 
-r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \
if [ $? -ne 0 ]; then \
  exit 1;\
fi
make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
  LD      /usr/lpp/mmfs/src/gpl-linux/built-in.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/tracelin.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/relaytrc.o
  LD [M]  /usr/lpp/mmfs/src/gpl-linux/tracedev.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o
  LD [M]  /usr/lpp/mmfs/src/gpl-linux/mmfs26.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:65:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/inode.c: In function 'printInode':
/usr/lpp/mmfs/src/gpl-linux/inode.c:136:3: error: aggregate value used where an integer was expected
   TRACE5(TRACE_VNODE, 3, TRCID_PRINTINODE_4,
   ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: At top level:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:2800:3: error: unknown type name 'wait_queue_t'
   wait_queue_t qwaiter;
   ^
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: In function 'cxiWaitEventWait':
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3882:3: warning: passing argument 1 of 'init_waitqueue_entry' from incompatible pointer type [enabled by default]
   init_waitqueue_entry(&waitElement.qwaiter, current);
   ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0,
                 from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5,
                 from /usr/lpp/mmfs/src/gpl-linux/dir.c:50,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:78:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
 static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)
                    ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3883:3: warning: passing argument 2 of '__add_wait_queue' from incompatible pointer type [enabled by default]
   __add_wait_queue(&waitElement.qhead, &waitElement.qwaiter);
   ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0,
                 from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5,
                 from /usr/lpp/mmfs/src/gpl-linux/dir.c:50,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:153:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
 static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
                    ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiStartIO':
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2474:13: error: 'struct bio' has no member named 'bi_bdev'
   bioP->bi_bdev = bdevP;
             ^
In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiCleanIO':
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:81: error: 'struct bio' has no member named 'bi_bdev'
 _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
                                                                                 ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:395:23: note: in definition of macro '_TRACE_MACRO'
   { _TR_BEFORE; _ktrc; KTRCOPTCODE; _TR_AFTER; } else NOOP
                       ^
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:5: note: in expansion of macro '_TRACE3D'
 _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
     ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:432:26: note: in expansion of macro 'TRACE_TRCID_WAITIO_BDEVP_CALL'
   _TRACE_MACRO(_c, _l, TRACE_##id##_CALL)
                          ^
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2906:7: note: in expansion of macro 'TRACE3'
       TRACE3(TRACE_IO, 6, TRCID_WAITIO_BDEVP,
       ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2915:23: error: 'struct bio' has no member named 'bi_error'
   if (bcP->biop[i]->bi_error)
                       ^
/usr/src/linux-4.12.14-95.13/scripts/Makefile.build:326: recipe for target '/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o' failed
make[5]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1
/usr/src/linux-4.12.14-95.13/Makefile:1557: recipe for target '_module_/usr/lpp/mmfs/src/gpl-linux' failed
make[4]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2
Makefile:152: recipe for target 'sub-make' failed
make[3]: *** [sub-make] Error 2
Makefile:24: recipe for target '__sub-make' failed
make[2]: *** [__sub-make] Error 2
make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
makefile:130: recipe for target 'modules' failed
make[1]: *** [modules] Error 1
make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux'
makefile:148: recipe for target 'Modules' failed
make: *** [Modules] Error 1
--------------------------------------------------------
mmbuildgpl: Building GPL module failed at Thu May 16 09:28:54 MDT 2019.
--------------------------------------------------------
mmbuildgpl: Command failed. Examine previous error messages to determine cause.
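For background on why 5.0.2.0 fails here: the errors track upstream kernel API changes that SUSE carried into the SLES 12 SP4 4.12 kernel — `wait_queue_t` was renamed to `struct wait_queue_entry` (around Linux 4.13), and `struct bio` lost its `bi_bdev` and `bi_error` members. The sketch below reproduces the first failure against a mock header entirely in userspace; it is an illustration only, not the real kernel interface, and it assumes a C compiler (`cc`) is on the PATH.

```shell
#!/bin/sh
# Userspace mock (illustrative only) of the rename behind the
# "unknown type name 'wait_queue_t'" error in the log above.
work=$(mktemp -d)

# Post-rename header: the wait_queue_t typedef is gone.
cat > "$work/new_wait.h" <<'EOF'
struct wait_queue_entry { unsigned int flags; void *entry_private; };
EOF

# Old-style code, as in cxiSystem.c:2800 -- fails against the new header.
cat > "$work/old_style.c" <<'EOF'
#include "new_wait.h"
int main(void) { wait_queue_t qwaiter; (void)qwaiter; return 0; }
EOF

# Same code plus the compatibility alias an out-of-tree module now needs.
cat > "$work/fixed_style.c" <<'EOF'
#include "new_wait.h"
typedef struct wait_queue_entry wait_queue_t;
int main(void) { wait_queue_t qwaiter; (void)qwaiter; return 0; }
EOF

probe="skipped (no C compiler found)"
if command -v cc >/dev/null 2>&1; then
    if cc -I"$work" "$work/old_style.c" -o "$work/old" 2>/dev/null; then
        probe="old style still compiles"
    elif cc -I"$work" "$work/fixed_style.c" -o "$work/fixed" 2>/dev/null; then
        probe="old style breaks; compat typedef builds"
    fi
fi
echo "$probe"
```

A newer Scale portability layer resolves this the same way in spirit: it detects the kernel's wait-queue and bio APIs and compiles against whichever names are present.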
--
Ben Nickell  -----  Idaho National Laboratory
High Performance Computing System Administrator
Desk: 208-526-4251
Mobile: 208-317-4259
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From ben.nickell at inl.gov Thu May 16 17:19:54 2019
From: ben.nickell at inl.gov (Ben G. Nickell)
Date: Thu, 16 May 2019 16:19:54 +0000
Subject: [gpfsug-discuss] [EXTERNAL] Re: mmbuild problem
In-Reply-To: 
References: ,
Message-ID: 

Thanks for the quick reply Felipe, and also for pointing me at the FAQ. I found the same. The standard version of 5.0.2.3 built fine. We apparently don't know how to get the advanced version, but I don't think we are using that anyway; I imagine we could figure out how to get it if we do need it. I just sent this a little too soon, sorry for the noise.

--
Ben Nickell  -----  Idaho National Laboratory
High Performance Computing System Administrator
Desk: 208-526-4251
Mobile: 208-317-4259

________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Felipe Knop
Sent: Thursday, May 16, 2019 10:12 AM
To: gpfsug main discussion list
Subject: [EXTERNAL] Re: [gpfsug-discuss] mmbuild problem

Ben,

According to the FAQ (https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3.
12 SP4 | 4.12.14-95.3-default | 4.12.14-95.3-default | From V4.2.3.13 in the 4.2 release; from V5.0.2.3 or later in the 5.0 release | From V4.2.3.13 in the 4.2 release; from V5.0.2.3 or later in the 5.0 release

Felipe

----
Felipe Knop  knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314

From: "Ben G. Nickell"
To: "gpfsug-discuss at spectrumscale.org"
Date: 05/16/2019 12:02 PM
Subject: [EXTERNAL] [gpfsug-discuss] mmbuild problem
Sent by: gpfsug-discuss-bounces at spectrumscale.org

________________________________

First time poster, hopefully not a simple RTFM question; I've done some rudimentary googling. I'm not the GPFS guy, but having a problem building Spectrum Scale 5.0.2.0 on SUSE SLES 12 SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software?

uname -a
Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux

./mmbuildgpl --build-package
--------------------------------------------------------
mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019.
--------------------------------------------------------
Verifying Kernel Header...
  kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13)
  module include dir = /lib/modules/4.12.14-95.13-default/build/include
  module build dir = /lib/modules/4.12.14-95.13-default/build
  kernel source dir = /usr/src/linux-4.12.14-95.13/include
  Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include
Verifying Compiler...
make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r 
/usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
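Felipe's FAQ pointer boils down to a version comparison: SLES 12 SP4 needs Scale 5.0.2.3 or later, and the failing node had 5.0.2.0. A pre-check like the following saves wading through a long build log; it is an assumed helper sketched for this thread, not an IBM-provided tool, and `sort -V` requires GNU coreutils.

```shell
#!/bin/sh
# Sketch: compare an installed Scale level against the FAQ minimum for
# SLES 12 SP4 (5.0.2.3). version_ge A B succeeds when A >= B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

min_scale="5.0.2.3"
installed="5.0.2.0"   # the level from the original post

if version_ge "$installed" "$min_scale"; then
    echo "OK: Scale $installed meets the SLES 12 SP4 minimum ($min_scale)"
else
    echo "Too old: Scale $installed < $min_scale; upgrade before running mmbuildgpl"
fi
```

With the values above it reports the upgrade case, matching Ben's outcome once he moved to 5.0.2.3.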
-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From anobre at br.ibm.com Thu May 16 17:36:35 2019 From: anobre at br.ibm.com (Anderson Ferreira Nobre) Date: Thu, 16 May 2019 16:36:35 +0000 Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: , , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15580071695162.gif Type: image/gif Size: 105 bytes Desc: not available URL: From lgayne at us.ibm.com Thu May 16 18:05:48 2019 From: lgayne at us.ibm.com (Lyle Gayne) Date: Thu, 16 May 2019 17:05:48 +0000 Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: , , , Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Image.15580071695162.gif Type: image/gif Size: 105 bytes Desc: not available URL: From brianbur at us.ibm.com Fri May 17 16:24:52 2019 From: brianbur at us.ibm.com (Brian Burnette) Date: Fri, 17 May 2019 15:24:52 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research Message-ID: An HTML attachment was scrubbed... URL: From sadaniel at us.ibm.com Fri May 17 16:37:42 2019 From: sadaniel at us.ibm.com (Steven Daniels) Date: Fri, 17 May 2019 15:37:42 +0000 Subject: [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research In-Reply-To: References: Message-ID: Brian, We have a number of government clients that have to seek a waiver for each and every Spectrum Scale installation because of the root password-less ssh requirements. 
The sudo wrappers help, but not really enough. My clients would all like to see the ssh requirement go away, and they also need to comply with Nessus scans. Different agencies may have custom scan profiles, but even passing the standard ones is a good step. I have been discussing this internally with the development team for years.

Thanks,
Steve

Steven A. Daniels
Cross-brand Client Architect
Senior Certified IT Specialist
National Programs
Fax and Voice: 3038101229
sadaniel at us.ibm.com
http://www.ibm.com

From: "Brian Burnette"
To: gpfsug-discuss at spectrumscale.org
Date: 05/17/2019 09:25 AM
Subject: [EXTERNAL] [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hey there Spectrum Scale Users,

Are you interested in allowing members of your team to administer parts or all of your Spectrum Scale clusters without the power of root access? Chances are your answer is somewhere between "Yes" and "Definitely, yes, yes, yes!" If so, the Scale Research team would love to sit down with you to better understand the problems you're trying to solve with non-root access and possibly work with you over the coming months to design concepts and prototypes of different solutions. Just reply back and we'll work with you to schedule a time to chat. If you have any other comments, questions, or concerns feel free to let us know.

Look forward to talking with you soon

Brian Burnette
IBM Systems - Spectrum Scale and Discover
E-mail: brianbur at us.ibm.com

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: opencits-d.jpg
Type: image/jpeg
Size: 182862 bytes
Desc: not available
URL: 

From l.walid at powerm.ma Sun May 19 05:14:05 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 04:14:05 +0000
Subject: [gpfsug-discuss] Introduction
Message-ID: 

Hi,

I'm Largou Walid, Technical Architect for Power Maroc, a Platinum Business Partner; we specialize in IBM products (hardware & software). I've been using Spectrum Scale for about two years now. We have an upcoming HPC project for the local Weather Company with an amazing 120 Spectrum Scale nodes (10,000 CPUs). I've also worked on CES services, and on AFM DR for one of our customers. I'm from Casablanca, Morocco, glad to be part of the community.

--
Best regards,
Walid Largou
Senior IT Specialist
Power Maroc
Mobile: +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png
Type: image/png
Size: 10214 bytes
Desc: not available
URL: 

From l.walid at powerm.ma Sun May 19 20:30:06 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 19:30:06 +0000
Subject: [gpfsug-discuss] Active Directory Authentification
Message-ID: 

Hi,

I'm planning to integrate Active Directory with our Spectrum Scale, but it seems I'm missing something. Please note that I'm on a two-protocol-node setup with only the SMB service running, on Spectrum Scale 5.0.3.0 (latest version). I've tried both ways from the GUI: connecting to Active Directory, and connecting to LDAP.

*Connect to LDAP:*

mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma'

7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
7:26 PM Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
7:26 PM WARNING: Could not open passdb
7:26 PM File authentication configuration failed.
7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:26 PM Operation Failed
7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain.
We cannot work reliably without it.
pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
WARNING: Could not open passdb
File authentication configuration failed.
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

*Connect to Active Directory:*

mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P@ssword)'

7:29 PM mmuserauth service create: Invalid parameter passed for --ldapmap-domain
7:29 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:29 PM Operation Failed
7:29 PM Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

--
Best regards,
Walid Largou
Senior IT Specialist
Power Maroc
Mobile: +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From will.schmied at stjude.org Mon May 20 00:24:15 2019 From: will.schmied at stjude.org (Schmied, Will) Date: Sun, 19 May 2019 23:24:15 +0000 Subject: [gpfsug-discuss] Active Directory Authentification In-Reply-To: References: Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826@stjude.org> Hi Walid, Without knowing any specifics of your environment, the below command is what I have used, successfully across multiple clusters at 4.2.x. The binding account you specify needs to be able to add computers to the domain. mmuserauth service create --data-access-method file --type ad --servers some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master --netbios-name some_ad_computer_name --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" 10000-9999999 is the acceptable range of UID / GID for AD accounts. Thanks, Will From: on behalf of "L.walid (PowerM)" Reply-To: gpfsug main discussion list Date: Sunday, May 19, 2019 at 14:30 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Active Directory Authentification Caution: External Sender Hi, I'm planning to integrate Active Directory with our Spectrum Scale, but it seems i'm missing out something, please note that i'm on a 2 protocol nodes with only service SMB running Spectrum Scale 5.0.3.0 (latest version). I've tried from the gui the two ways, connect to Active Directory, and the other to LDAP. 
Connect to LDAP : mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma' 7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server 7:26 PM Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL 7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. 7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) 7:26 PM WARNING: Could not open passdb 7:26 PM File authentication configuration failed. 7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause. 7:26 PM Operation Failed 7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) WARNING: Could not open passdb File authentication configuration failed. mmuserauth service create: Command failed. Examine previous error messages to determine cause. 
Connect to Active Directory :

mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)'

mmuserauth service create: Invalid parameter passed for --ldapmap-domain
mmuserauth service create: Command failed. Examine previous error messages to determine cause.
Operation Failed

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
________________________________
Email Disclaimer: www.stjude.org/emaildisclaimer
Consultation Disclaimer: www.stjude.org/consultationdisclaimer
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From l.walid at powerm.ma Mon May 20 00:39:31 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 23:39:31 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19
In-Reply-To: References: Message-ID: 

Hi,

Thanks for the feedback. I have tried the suggested command:

mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password:
Invalid credentials specified for the server powermdomain.powerm.ma
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

[root at scale1 ~]# mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name walid --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
Enter Active Directory User 'walid' password:
Invalid credentials specified for the server powermdomain.powerm.ma
mmuserauth service create: Command failed. Examine previous error messages to determine cause.
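Before retrying, it may be worth ruling out name resolution and clock problems, since the AD join depends on DNS SRV records and Kerberos. A hedged sketch of checks, using the hostnames from this thread (they may differ in your environment):

```shell
# The AD join looks up SRV records for the domain; these must resolve
host -t SRV _ldap._tcp.powerm.ma
host -t SRV _kerberos._tcp.powerm.ma

# The domain controller itself must resolve from the protocol node
host powermdomain.powerm.ma

# Kerberos rejects authentication when clock skew is too large;
# compare this node's time against the DC
ntpdate -q powermdomain.powerm.ma
```

These are read-only checks and safe to run on a protocol node; a failure in any of them would explain an authentication error before mmuserauth ever reaches the credential check.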
I tried both the domain qualifier and the plain user in the --user-name parameter, but I get Invalid Credentials (knowing that walid is an Administrator in Active Directory).

[root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" "(sAMAccountName=walid)"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (sAMAccountName=walid)
# requesting: ALL
#

# Walid, Users, powerm.ma
dn: CN=Walid,CN=Users,DC=powerm,DC=ma
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Walid
sn: Largou
givenName: Walid
distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma
instanceType: 4
whenCreated: 20190518224649.0Z
whenChanged: 20190520001645.0Z
uSNCreated: 12751
memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma
uSNChanged: 16404
name: Walid
objectGUID:: Le4tH38qy0SfcxaroNGPEg==
userAccountControl: 512
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 132028055547447029
lastLogoff: 0
lastLogon: 132028055940741392
pwdLastSet: 132026934129698743
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA==
adminCount: 1
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: walid
sAMAccountType: 805306368
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma
dSCorePropagationData: 20190518225159.0Z
dSCorePropagationData: 16010101000000.0Z
lastLogonTimestamp: 132027850050695698

# search reference
ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma

# search reference
ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma

# search reference
ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma

# search result
search: 2
result: 0 Success
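One detail stands out in the ldapsearch output earlier in this message: the user entry carries no uidNumber or gidNumber attributes. My understanding is that with --unixmap-domains, the UNIX identity is read from these RFC2307 attributes in AD and must fall inside the configured range, so accounts without them will not map to UNIX users even after a successful join. A hedged query to check, reusing the bind credentials from this thread:

```shell
# Return only the RFC2307 mapping attributes for the account; if no
# uidNumber/gidNumber lines come back, the account has no UNIX identity
# for --unixmap-domains to use (bind DN/password as earlier in the thread)
ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D "walid at powerm.ma" \
  -b "dc=powerm,dc=ma" "(sAMAccountName=walid)" uidNumber gidNumber
```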
--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: 

From will.schmied at stjude.org Mon May 20 02:45:57 2019
From: will.schmied at stjude.org (Schmied, Will)
Date: Mon, 20 May 2019 01:45:57 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19
In-Reply-To: References: Message-ID: 

Well, not seeing anything odd about the second try (just the username only), except that your NETBIOS domain name needs to be put in place of the placeholder (DOMAIN_NETBIOS_NAME).

You can copy from a text file and then paste into stdin when the command asks for your password. Just a way to be sure no typos are in the password entry.
Thanks,
Will

From: on behalf of "L.walid (PowerM)"
Reply-To: gpfsug main discussion list
Date: Sunday, May 19, 2019 at 18:39
To: "gpfsug-discuss at spectrumscale.org"
Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19

Caution: External Sender
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From par at nl.ibm.com Mon May 20 15:45:11 2019
From: par at nl.ibm.com (Par Hettinga-Ayakannu)
Date: Mon, 20 May 2019 16:45:11 +0200
Subject: [gpfsug-discuss] Introduction
In-Reply-To: References: Message-ID: 

Hi Largou,

Welcome to the community, glad you joined.

Best Regards,
Par Hettinga, Global SDI Sales Enablement Leader
Storage and Software Defined Infrastructure, IBM Systems
Tel: +31(0)20-5132194 Mobile: +31(0)6-53359940 email: par at nl.ibm.com

From: "L.walid (PowerM)"
To: gpfsug-discuss at spectrumscale.org
Date: 19/05/2019 06:14
Subject: [gpfsug-discuss] Introduction
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hi,

I'm Largou Walid, Technical Architect for Power Maroc, Platinum Business Partner; we specialize in IBM products (hardware & software). I've been using Spectrum Scale for about two years now. We have an upcoming project for HPC for the local weather company with an amazing 120 Spectrum Scale nodes (10,000 CPUs). I've worked on CES services also, and AFM DR for one of our customers.
I'm from Casablanca, Morocco, glad to be part of the community.
--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. [attachment "PastedGraphic-2.png" deleted by Par Hettinga-Ayakannu/Netherlands/IBM]
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Tenzij hierboven anders aangegeven: / Unless stated otherwise above:
IBM Nederland B.V., registered in Amsterdam, Trade Register Amsterdam No. 33054214
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: 

From l.walid at powerm.ma Mon May 20 16:36:08 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Mon, 20 May 2019 15:36:08 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To: References: Message-ID: 

Hi,

I managed to make the command work (basically by checking /etc/resolv.conf, /etc/hosts, and /etc/nsswitch.conf):

[root at scale1 committed]# mmuserauth service create --data-access-method file --type ad --servers X.X.X.X --user-name MYUSER --idmap-role master --netbios-name CESSCALE --unixmap-domains "MYDOMAIN(10000-9999999)"
Enter Active Directory User 'spectrum_scale' password:
File authentication configuration completed successfully.

[root at scale1 committed]# mmuserauth service check
Userauth file check on node: scale1
Checking nsswitch file: OK
Checking Pre-requisite Packages: OK
Checking SRV Records lookup: OK
Service 'gpfs-winbind' status: OK
Object not configured

[root at scale1 committed]# mmuserauth service check --server-reachability
Userauth file check on node: scale1
Checking nsswitch file: OK
Checking Pre-requisite Packages: OK
Checking SRV Records lookup: OK
Domain Controller status
NETLOGON connection: OK, connection to DC: xxxx
Domain join status: OK
Machine password status: OK
Service 'gpfs-winbind' status: OK
Object not configured

But unfortunately, even though all the commands look good, I cannot use Active Directory users as owners or to set up ACLs on SMB shares (it doesn't recognize AD users), and the command 'id DOMAIN\USER' gives an error: cannot find user. Any ideas?
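When mmuserauth service check passes but 'id' cannot find domain users, a first step is usually to ask winbind directly what it can see. A hedged sketch using the winbind tools that CES ships; the path and 'MYDOMAIN\myuser' are assumptions for illustration:

```shell
# List the domain users and groups gpfs-winbind can enumerate
/usr/lpp/mmfs/bin/wbinfo -u
/usr/lpp/mmfs/bin/wbinfo -g

# Resolve one specific account: its SID, then its UNIX identity
/usr/lpp/mmfs/bin/wbinfo -n 'MYDOMAIN\myuser'
/usr/lpp/mmfs/bin/wbinfo -i 'MYDOMAIN\myuser'

# 'id' only succeeds once nsswitch routes lookups through winbind and,
# with --unixmap-domains, the AD account carries uidNumber/gidNumber
# inside the configured range (10000-9999999 here)
id 'MYDOMAIN\myuser'
```

If wbinfo enumerates users but 'id' still fails, the gap is usually the nsswitch passwd/group configuration or missing RFC2307 attributes on the AD accounts rather than the join itself.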
On Mon, 20 May 2019 at 01:46, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: gpfsug-discuss Digest, Vol 88, Issue 19 (Schmied, Will) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 20 May 2019 01:45:57 +0000 > From: "Schmied, Will" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 > Message-ID: > Content-Type: text/plain; charset="utf-8" > > ?Well not seeing anything odd about the second try (just the username > only) except that your NETBIOS domain name needs to be put in place of the > placeholder (DOMAIN_NETBIOS_NAME). > > You can copy from a text file and then paste into the stdin when the > command asks for your password. Just a way to be sure no typos are in the > password entry. 
> > > > Thanks, > Will > > > From: on behalf of "L.walid > (PowerM)" > Reply-To: gpfsug main discussion list > Date: Sunday, May 19, 2019 at 18:39 > To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 > > Caution: External Sender > > Hi, > > Thanks for the feedback, i have tried the suggested command : > > mmuserauth service create --data-access-method file --type ad --servers > powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master > --netbios-name scaleces --unixmap-domains > "DOMAIN_NETBIOS_NAME(10000-9999999)" > Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password: > Invalid credentials specified for the server powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0 > > > mmuserauth service create: Command failed. Examine previous error messages > to determine cause. 
> > > > [root at scale1 ~]# mmuserauth service create --data-access-method file > --type ad --servers powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > --user-name walid --idmap-role master --netbios-name scaleces > --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" > Enter Active Directory User 'walid' password: > Invalid credentials specified for the server powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0 > > > mmuserauth service create: Command failed. Examine previous error messages > to determine cause. 
> > > > i tried both domain qualifier and plain user in the --name parameters but > i get Invalid Credentials (knowing that walid is an Administrator in Active > Directory) > > [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" > "(sAMAccountName=walid)" > Enter LDAP Password: > # extended LDIF > # > # LDAPv3 > # base with scope subtree > # filter: (sAMAccountName=walid) > # requesting: ALL > # > > # Walid, Users, powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=XHcjIaRj2bGiWYXZUsDJFDJ2Ts3Y%2FKHzxD3yUhcHNgc%3D&reserved=0 > > > dn: CN=Walid,CN=Users,DC=powerm,DC=ma > objectClass: top > objectClass: person > objectClass: organizationalPerson > objectClass: user > cn: Walid > sn: Largou > givenName: Walid > distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma > instanceType: 4 > whenCreated: 20190518224649.0Z > whenChanged: 20190520001645.0Z > uSNCreated: 12751 > memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma > uSNChanged: 16404 > name: Walid > objectGUID:: Le4tH38qy0SfcxaroNGPEg== > userAccountControl: 512 > badPwdCount: 0 > codePage: 0 > countryCode: 0 > badPasswordTime: 132028055547447029 > lastLogoff: 0 > lastLogon: 132028055940741392 > pwdLastSet: 132026934129698743 > primaryGroupID: 513 > objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA== > adminCount: 1 > accountExpires: 9223372036854775807 > logonCount: 0 > sAMAccountName: walid > sAMAccountType: 805306368 > objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma > dSCorePropagationData: 
20190518225159.0Z
> dSCorePropagationData: 16010101000000.0Z
> lastLogonTimestamp: 132027850050695698
>
> # search reference
> ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma
>
> # search reference
> ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma
>
> # search reference
> ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma
>
> # search result
> search: 2
> result: 0 Success
>
> On Sun, 19 May 2019 at 23:31, <...> wrote:
> Send gpfsug-discuss mailing list submissions to
> gpfsug-discuss at spectrumscale.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> or, via email, send a message with
subject or body 'help' to
> gpfsug-discuss-request at spectrumscale.org
>
> You can reach the person managing the list at
> gpfsug-discuss-owner at spectrumscale.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gpfsug-discuss digest..."
>
>
> Today's Topics:
>
>    1. Re: Active Directory Authentification (Schmied, Will)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 19 May 2019 23:24:15 +0000
> From: "Schmied, Will" <will.schmied at stjude.org>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Active Directory Authentification
> Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826 at stjude.org>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Walid,
>
> Without knowing any specifics of your environment, the below command is
> what I have used successfully across multiple clusters at 4.2.x. The
> binding account you specify needs to be able to add computers to the domain.
>
> mmuserauth service create --data-access-method file --type ad --servers
> some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master
> --netbios-name some_ad_computer_name --unixmap-domains
> "DOMAIN_NETBIOS_NAME(10000-9999999)"
>
> 10000-9999999 is the acceptable range of UID / GID for AD accounts.
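[Editor's note: a rough illustration of what the `--unixmap-domains "NETBIOS_NAME(low-high)"` range in the command above does. As far as we can tell this configures a winbind idmap_rid-style mapping, where each AD account's RID is translated into a Unix UID/GID inside the configured range; the offset arithmetic below is our reading of Samba's idmap_rid backend, not taken from the Scale documentation, and the function is purely illustrative.]

```python
def rid_to_unix_id(rid, low=10000, high=9999999):
    """Map an AD account's RID into the configured Unix ID range.

    Assumes the simple idmap_rid formula: unix id = range low + RID.
    RIDs that would land outside the range cannot be resolved, which
    is why the range has to be wide enough for the domain's RIDs.
    """
    unix_id = low + rid
    if unix_id > high:
        raise ValueError(f"RID {rid} maps outside the range {low}-{high}")
    return unix_id

# A typical first domain user (RID 1103) would land at UID 11103.
print(rid_to_unix_id(1103))
```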
>
> Thanks,
> Will
>
> From: gpfsug-discuss-bounces at spectrumscale.org on behalf of "L.walid
> (PowerM)"
> Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: Sunday, May 19, 2019 at 14:30
> To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
> Subject: [gpfsug-discuss] Active Directory Authentification
>
> Caution: External Sender
>
> Hi,
>
> I'm planning to integrate Active Directory with our Spectrum Scale, but it
> seems i'm missing something. Please note that i'm on a 2 protocol node
> setup with only the SMB service running, on Spectrum Scale 5.0.3.0 (latest
> version). I've tried both ways from the GUI: connect to Active Directory,
> and connect to LDAP.
>
> Connect to LDAP:
> mmuserauth service create --data-access-method 'file' --type 'LDAP'
> --servers 'powermdomain.powerm.ma:389'
> --user-name 'cn=walid,cn=users,dc=powerm,dc=ma'
> --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn
> 'cn=users,dc=powerm,dc=ma'
> 7:26 PM
> Either failed to create a samba domain entry on LDAP server if not present
> or could not read the already existing samba domain entry from the LDAP
server
> 7:26 PM
> Detailed message:smbldap_search_domain_info: Adding domain info for
> SCALECES failed with NT_STATUS_UNSUCCESSFUL
> 7:26 PM
> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the
> domain. We cannot work reliably without it.
> 7:26 PM
> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389"
> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
> 7:26 PM
> WARNING: Could not open passdb
> 7:26 PM
> File authentication configuration failed.
> 7:26 PM
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> 7:26 PM
> Operation Failed
> 7:26 PM
> Error: Either failed to create a samba domain entry on LDAP server if not
> present or could not read the already existing samba domain entry from the
> LDAP server
> Detailed message:smbldap_search_domain_info: Adding domain info for
> SCALECES failed with NT_STATUS_UNSUCCESSFUL
> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the
> domain. We cannot work reliably without it.
> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389"
> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
> WARNING: Could not open passdb
> File authentication configuration failed.
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
>
> Connect to Active Directory:
> mmuserauth service create --data-access-method 'file' --type 'AD'
> --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt'
> --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains
> 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)'
> 7:29 PM
> mmuserauth service create: Invalid parameter passed for --ldapmap-domain
> 7:29 PM
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> 7:29 PM
> Operation Failed
> 7:29 PM
> Error: mmuserauth service create: Invalid parameter passed for
> --ldapmap-domain
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> --
> Best regards,
>
> Walid Largou
> Senior IT Specialist
> Power Maroc
> Mobile : +212 621 31 98 71
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
> https://www.powerm.ma
>
> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA]
> This message is confidential. Its contents do not constitute a commitment
> by Power Maroc S.A.R.L except where provided for in a written agreement
> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
> dissemination, either whole or partial, is prohibited. If you are not the
> intended recipient of the message, please notify the sender immediately.
>
> ________________________________
>
> Email Disclaimer: www.stjude.org/emaildisclaimer
> Consultation Disclaimer: www.stjude.org/consultationdisclaimer
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190519/9b579ecf/attachment.html>
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 88, Issue 19
> **********************************************
>
> --
> Best regards,
>
> Walid Largou
> Senior IT Specialist
> Power Maroc
> Mobile : +212 621 31 98 71
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
> https://www.powerm.ma
>
> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA]
> This message is confidential. Its contents do not constitute a commitment
> by Power Maroc S.A.R.L except where provided for
in a written agreement
> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
> dissemination, either whole or partial, is prohibited. If you are not the
> intended recipient of the message, please notify the sender immediately.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190520/92f25565/attachment.html>
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 88, Issue 21
> **********************************************
>

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png
Type: image/png
Size: 10214 bytes
Desc: not available
URL:

From christof.schmitt at us.ibm.com Mon May 20 19:51:46 2019
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Mon, 20 May 2019 18:51:46 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To:
References:
Message-ID:

An HTML attachment was scrubbed...
URL:

From truston at mbari.org Mon May 20 21:05:53 2019
From: truston at mbari.org (Todd Ruston)
Date: Mon, 20 May 2019 13:05:53 -0700
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
Message-ID: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>

Greetings all,

First post here, so by way of introduction we are a fairly new Spectrum
Scale and Archive customer (installed last year and live in production Q1
this year). We have a four node (plus EMS) ESS system with ~520TB of mixed
spinning disk and SSD. Client access to the system is via CES (NFS and SMB,
running on two protocol nodes), integrated with Active Directory, for a
mixed population of Windows, Mac, and Linux clients. A separate pair of
nodes run Spectrum Archive, with a TS4500 LTO-8 library behind them.

We use the system for general institute data, with the largest data types
being HD video, multibeam sonar, and hydrophone data. Video is the
currently active data type in production; we will be migrating the rest
over time. So far things are running pretty well.

Our archive approach is to premigrate data, particularly the large,
unchanging data like the above mentioned data types, almost immediately
upon landing in the system. Then we migrate those that have not been
accessed in a period of time (or manually if space demands require it). We
do wish to allow users to recall archived data on demand as needed.

Because we have a large contingent of Mac clients (accessing the system via
SMB), one issue we want to get ahead of is inadvertent recalls triggered by
Mac preview generation, Quick Look, Cover Flow/Gallery view, and the like.
Going in we knew this was going to be something we'd need to address, and
we anticipated being able to configure Finder to disable preview generation
and train users to avoid Quick Look unless they intended to trigger a
recall.
In our testing however, even with those features disabled/avoided, we have
seen Mac clients trigger inadvertent recalls just from CLI 'ls -lshrt'
interactions with the system.

While brainstorming ways to prevent these inadvertent recalls while still
allowing users to initiate recalls on their own when needed, one thought
that came to us is we might be able to turn off recalls via SMB (set
gpfs:recalls = no via mmsmb), and create a simple self-service web portal
that would allow users to browse the Scale file system with a web browser,
select files for recall, and initiate the recall from there. The web
interface could run on one of the Archive nodes, and the back end of it
would simply send a list of selected file paths to ltfsee recall.

Before possibly reinventing the wheel, I thought I'd check to see if
something like this may already exist, either from IBM, the Scale user
community, or a third-party/open source tool that could be leveraged for
the purpose. I searched the list archive and didn't find anything, but
please let me know if I missed something. And please let me know if you
know of something that would fit this need, or other ideas as well.

Cheers,

--
Todd E. Ruston
Information Systems Manager
Monterey Bay Aquarium Research Institute (MBARI)
7700 Sandholdt Road, Moss Landing, CA, 95039
Phone 831-775-1997 Fax 831-775-1652 http://www.mbari.org

From christof.schmitt at us.ibm.com Mon May 20 21:33:57 2019
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Mon, 20 May 2019 20:33:57 +0000
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID:

An HTML attachment was scrubbed...
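[Editor's note: the portal back end Todd describes above (a web page that hands a list of selected paths to `ltfsee recall`) can be quite small. Below is a rough sketch of just the server-side recall submission, assuming `ltfsee recall` accepts a file containing one path per line — check the Spectrum Archive manual for the exact invocation on your release; the function name and file layout here are purely illustrative.]

```python
import subprocess
import tempfile

def submit_recall(paths, run=subprocess.run):
    """Write the user-selected paths to a list file and hand it to ltfsee.

    `run` is injectable so the command invocation can be exercised in
    tests without an actual Spectrum Archive node.
    """
    if not paths:
        raise ValueError("no paths selected for recall")
    # One path per line, the format we assume ltfsee recall expects.
    with tempfile.NamedTemporaryFile("w", suffix=".list", delete=False) as f:
        f.write("\n".join(paths) + "\n")
        listfile = f.name
    return run(["ltfsee", "recall", listfile], check=True)

# In the web portal, a request handler would collect the checked paths
# from the browser and call submit_recall(selected_paths).
```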
URL:

From stockf at us.ibm.com Mon May 20 21:41:16 2019
From: stockf at us.ibm.com (Frederick Stock)
Date: Mon, 20 May 2019 20:41:16 +0000
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID:

An HTML attachment was scrubbed...
URL:

From richard.rupp at us.ibm.com Mon May 20 21:48:40 2019
From: richard.rupp at us.ibm.com (RICHARD RUPP)
Date: Mon, 20 May 2019 16:48:40 -0400
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To:
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID:

I've heard that this works, but I have not tried it myself -
https://support.apple.com/en-us/HT208209

Regards,

Richard Rupp, Sales Specialist, Phone: 1-347-510-6746

From: "Frederick Stock"
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Date: 05/20/2019 04:41 PM
Subject: [EXTERNAL] Re: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Todd, I am not aware of any tool that provides the out-of-band recall that
you propose, though it would be quite useful. However, I wanted to note
that, as I understand it, the reason the Mac client initiates the file
recalls is because the Mac SMB client ignores the archive bit in the SMB
protocol, which indicates that a file does not reside in online storage.
To date, efforts to have Apple change their SMB client to respect the
archive bit have not been successful, but if you feel so inclined we would
be grateful if you would submit a request to Apple for them to change their
SMB client to honor the archive bit and thus avoid file recalls.
Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com

----- Original message -----
From: Todd Ruston
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug-discuss at spectrumscale.org
Subject: [EXTERNAL] [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
Date: Mon, May 20, 2019 4:12 PM

[Todd's original message quoted in full; trimmed]

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From truston at mbari.org Mon May 20 22:50:13 2019
From: truston at mbari.org (Todd Ruston)
Date: Mon, 20 May 2019 14:50:13 -0700
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To:
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID:

Thanks very much for the replies so far. I had already pinged Apple asking
them to honor the offline bit in their SMB implementation. I don't think we
carry a whole lot of weight with them, but at least we've put another "vote
in the hopper" for the feature.

We had tried the settings in the article Richard referenced, but recalls
still occurred. Christof's suggestion of parallel SMB exports, one with and
one without recall enabled, is one we hadn't thought of and has a lot of
promise for our situation. Thanks for the idea!

Cheers,

- Todd

> On May 20, 2019, at 1:48 PM, RICHARD RUPP wrote:
>
> I've heard that this works, but I have not tried it myself -
> https://support.apple.com/en-us/HT208209
>
> [remainder of quoted thread trimmed]

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From l.walid at powerm.ma Tue May 21 03:24:58 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Tue, 21 May 2019 02:24:58 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To:
References:
Message-ID:

*Update:* I have the environment working now with the command:

mmuserauth service create --data-access-method 'file' --type 'AD'
--servers IPADDRESS --user-name USERNAME --netbios-name 'scaleces'
--idmap-role 'MASTER' --idmap-range '10000000-11999999'
--idmap-range-size '100000'

Removing the unix-map solved the issue. Thanks for your help

On Mon, 20 May 2019 at 15:36, L.walid (PowerM) wrote:

> Hi,
>
> I manage to make the command work (basically checking /etc/resolv.conf,
> /etc/hosts, /etc/nsswitch.conf):
>
> [root at scale1 committed]# mmuserauth service create --data-access-method
> file --type ad --servers X.X.X.X --user-name MYUSER --idmap-role master
> --netbios-name CESSCALE --unixmap-domains "MYDOMAIN(10000-9999999)"
> Enter Active Directory User 'spectrum_scale' password:
> > > [root at scale1 committed]# mmuserauth service check > > Userauth file check on node: scale1 > Checking nsswitch file: OK > Checking Pre-requisite Packages: OK > Checking SRV Records lookup: OK > Service 'gpfs-winbind' status: OK > Object not configured > > > [root at scale1 committed]# mmuserauth service check --server-reachability > > Userauth file check on node: scale1 > Checking nsswitch file: OK > Checking Pre-requisite Packages: OK > Checking SRV Records lookup: OK > > Domain Controller status > NETLOGON connection: OK, connection to DC: xxxx > Domain join status: OK > Machine password status: OK > Service 'gpfs-winbind' status: OK > Object not configured > > > But unfortunately, even if all the commands seems good, i cannot use user > from active directory as owner or to setup ACL on SMB shares (it doesn't > recognise AD users), plus the command 'id DOMAIN\USER' gives error cannot > find user. > > Any ideas ? > > > > > On Mon, 20 May 2019 at 01:46, > wrote: > >> Send gpfsug-discuss mailing list submissions to >> gpfsug-discuss at spectrumscale.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> or, via email, send a message with subject or body 'help' to >> gpfsug-discuss-request at spectrumscale.org >> >> You can reach the person managing the list at >> gpfsug-discuss-owner at spectrumscale.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of gpfsug-discuss digest..." >> >> >> Today's Topics: >> >> 1. 
Re: gpfsug-discuss Digest, Vol 88, Issue 19 (Schmied, Will)
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Mon, 20 May 2019 01:45:57 +0000
>> From: "Schmied, Will"
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19
>> Message-ID:
>> Content-Type: text/plain; charset="utf-8"
>>
>> Well not seeing anything odd about the second try (just the username only) except that your NETBIOS domain name needs to be put in place of the placeholder (DOMAIN_NETBIOS_NAME).
>>
>> You can copy from a text file and then paste into the stdin when the command asks for your password. Just a way to be sure no typos are in the password entry.
>>
>> Thanks,
>> Will
>>
>> From: on behalf of "L.walid (PowerM)"
>> Reply-To: gpfsug main discussion list
>> Date: Sunday, May 19, 2019 at 18:39
>> To: "gpfsug-discuss at spectrumscale.org"
>> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19
>>
>> Caution: External Sender
>>
>> Hi,
>>
>> Thanks for the feedback, i have tried the suggested command :
>>
>> mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
>> Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password:
>> Invalid credentials specified for the server powermdomain.powerm.ma
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>>
>> [root at scale1 ~]# mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name walid --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
>> Enter Active Directory User 'walid' password:
>> Invalid credentials specified for the server powermdomain.powerm.ma
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> i tried both domain qualifier and plain user in the --name parameters but i get Invalid Credentials (knowing that walid is an Administrator in Active Directory)
>>
>> [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" "(sAMAccountName=walid)"
>> Enter LDAP Password:
>> # extended LDIF
>> #
>> # LDAPv3
>> # base with scope subtree
>> # filter: (sAMAccountName=walid)
>> # requesting: ALL
>> #
>>
>> # Walid, Users, powerm.ma
>> dn: CN=Walid,CN=Users,DC=powerm,DC=ma
>> objectClass: top
>> objectClass: person
>> objectClass: organizationalPerson
>> objectClass: user
>> cn: Walid
>> sn: Largou
>> givenName: Walid
>> distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma
>> instanceType: 4
>> whenCreated: 20190518224649.0Z
>> whenChanged: 20190520001645.0Z
>> uSNCreated: 12751
>> memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma
>> uSNChanged: 16404
>> name: Walid
>> objectGUID:: Le4tH38qy0SfcxaroNGPEg==
>> userAccountControl: 512
>> badPwdCount: 0
>> codePage: 0
>> countryCode: 0
>> badPasswordTime: 132028055547447029
>> lastLogoff: 0
>> lastLogon: 132028055940741392
>> pwdLastSet: 132026934129698743
>> primaryGroupID: 513
>> objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA==
>> adminCount: 1
>> accountExpires: 9223372036854775807
>> logonCount: 0
>> sAMAccountName: walid
>> sAMAccountType: 805306368
>> objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma
>> dSCorePropagationData: 20190518225159.0Z
>> dSCorePropagationData: 16010101000000.0Z
>> lastLogonTimestamp: 132027850050695698
>>
>> # search reference
>> ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma
>>
>> # search reference
>> ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma
>>
>> # search reference
>> ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma
>>
>> # search result
>> search: 2
>> result: 0 Success
>>
>> On Sun, 19 May 2019 at 23:31, wrote:
>> Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org
>>
>> You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org
>>
>> When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..."
>>
>> Today's Topics:
>>
>> 1. Re: Active Directory Authentification (Schmied, Will)
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Sun, 19 May 2019 23:24:15 +0000
>> From: "Schmied, Will"
>> To: gpfsug main discussion list
>> Subject: Re: [gpfsug-discuss] Active Directory Authentification
>> Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826 at stjude.org>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hi Walid,
>>
>> Without knowing any specifics of your environment, the below command is what I have used, successfully across multiple clusters at 4.2.x. The binding account you specify needs to be able to add computers to the domain.
>>
>> mmuserauth service create --data-access-method file --type ad --servers some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master --netbios-name some_ad_computer_name --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
>>
>> 10000-9999999 is the acceptable range of UID / GID for AD accounts.
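To make the point above about the acceptable UID/GID window concrete: with --unixmap-domains, the uidNumber/gidNumber attributes stored in AD are used directly (RFC2307-style mapping is an assumption here), and only IDs inside the declared window are usable. A rough, purely illustrative sketch of that acceptance check — a hypothetical helper, not Scale code:

```python
def in_unixmap_range(unix_id: int, spec: str = "10000-9999999") -> bool:
    """True if an AD-provided uidNumber/gidNumber falls inside the window."""
    low, high = (int(part) for part in spec.split("-"))
    return low <= unix_id <= high

print(in_unixmap_range(500))      # False: low system IDs stay outside the window
print(in_unixmap_range(1000000))  # True: inside 10000-9999999
```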
>> Thanks,
>> Will
>>
>> From: gpfsug-discuss-bounces at spectrumscale.org on behalf of "L.walid (PowerM)"
>> Reply-To: gpfsug main discussion list
>> Date: Sunday, May 19, 2019 at 14:30
>> To: "gpfsug-discuss at spectrumscale.org"
>> Subject: [gpfsug-discuss] Active Directory Authentification
>>
>> Caution: External Sender
>>
>> Hi,
>>
>> I'm planning to integrate Active Directory with our Spectrum Scale, but it seems i'm missing out something. Please note that i'm on 2 protocol nodes with only the SMB service running, Spectrum Scale 5.0.3.0 (latest version). I've tried from the gui the two ways: connect to Active Directory, and connect to LDAP.
>>
>> Connect to LDAP :
>> mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma'
>> 7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
>> 7:26 PM Detailed message: smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
>> 7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
>> 7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
>> 7:26 PM WARNING: Could not open passdb
>> 7:26 PM File authentication configuration failed.
>> 7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> 7:26 PM Operation Failed
>> 7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
>> Detailed message: smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
>> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
>> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
>> WARNING: Could not open passdb
>> File authentication configuration failed.
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> Connect to Active Directory :
>> mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)'
>> 7:29 PM mmuserauth service create: Invalid parameter passed for --ldapmap-domain
>> 7:29 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> 7:29 PM Operation Failed
>> 7:29 PM Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain
>> mmuserauth service create: Command failed. Examine previous error messages to determine cause.
>> --
>> Best regards,
>>
>> Walid Largou
>> Senior IT Specialist
>> Power Maroc
>> Mobile : +212 621 31 98 71
>> Email: l.walid at powerm.ma
>> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
>> https://www.powerm.ma
>>
>> This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
>>
>> ________________________________
>>
>> Email Disclaimer: www.stjude.org/emaildisclaimer
>> Consultation Disclaimer: www.stjude.org/consultationdisclaimer
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190519/9b579ecf/attachment.html>
>>
>> ------------------------------
>>
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>> End of gpfsug-discuss Digest, Vol 88, Issue 19
>> **********************************************
>>
>> -------------- next part --------------
>> An HTML attachment was scrubbed...
>> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190520/92f25565/attachment.html>
>>
>> ------------------------------
>>
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>>
>> End of gpfsug-discuss Digest, Vol 88, Issue 21
>> **********************************************

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited.
If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From INDULISB at uk.ibm.com Tue May 21 10:34:42 2019 From: INDULISB at uk.ibm.com (Indulis Bernsteins1) Date: Tue, 21 May 2019 10:34:42 +0100 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: Message-ID: Have you tried looking at Spectrum Archive setting instead of Spectrum Scale? You can set both the size of the "stub file" that remains behind when a file is migrated, and also the amount of data which would need to be read before a recall is triggered. This might catch enough of your recall storms... or at least help! IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and Configuration Guide http://www.redbooks.ibm.com/abstracts/sg248333.html?Open 7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file IBM Spectrum Archive EE can define a stub size for migrated files so that the stub size initial bytes of a migrated file are kept on disk while the entire file is migrated to tape. The migrated file bytes that are kept on the disk are called the stub. Reading from the stub does not trigger a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered. The recall might take a long time while the entire file is read from tape because a tape mount might be required, and it takes time to position the tape before data can be recalled from tape. When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a recall of the complete file in the background (asynchronous). Reads from the stubs are still possible while the rest of the file is being recalled. 
After the rest of the file is recalled to disks, reads from any file part are possible. With the Preview Size (PS) value, a preview size can be set to define the initial file part size for which any reads from the resident file part does not trigger a recall. Typically, the PS value is large enough to see whether a recall of the rest of the file is required without triggering a recall for reading from every stub. This process is important to prevent unintended massive recalls. The PS value can be set only smaller than or equal to the stub size. This feature is useful, for example, when playing migrated video files. While the initial stub size part of a video file is played, the rest of the video file can be recalled to prevent a pause when it plays beyond the stub size. You must set the stub size and preview size to be large enough to buffer the time that is required to recall the file from tape without triggering recall storms. Use the following dsmmigfs command options to set both the stub size and preview size of the file system being managed by IBM Spectrum Archive EE: dsmmigfs Update -STUBsize dsmmigfs Update -PREViewsize The value for the STUBsize is a multiple of the IBM Spectrum Scale file system?s block size. this value can be obtained by running the mmlsfs . The PREViewsize parameter must be equal to or less than the STUBsize value. Both parameters take a positive integer in bytes. Regards, Indulis Bernsteins Systems Architect IBM New Generation Storage Phone: +44 792 008 6548 E-mail: INDULISB at UK.IBM.COM Jackson House, Sibson Rd Sale, Cheshire M33 7RR United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/png Size: 10045 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10249 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10012 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10031 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 11771 bytes Desc: not available URL:
From jonathan.buzzard at strath.ac.uk Tue May 21 11:30:09 2019
From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard)
Date: Tue, 21 May 2019 11:30:09 +0100
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk>

On Mon, 2019-05-20 at 20:33 +0000, Christof Schmitt wrote:
> SMB clients know the state of the files through an OFFLINE bit that is part of the metadata that is available through the SMB protocol. The Windows Explorer in particular honors this bit and avoids reading file data for previews, but the MacOS Finder seems to ignore it and read file data for previews anyway, triggering recalls.
>
> The best way would be fixing this on the Mac clients to simply not read file data for previews for OFFLINE files. So far requests to Apple support to implement this behavior were unsuccessful, but it might still be worthwhile to keep pushing this request.
>

In the interim, would it be possible for the SMB server to detect the client OS and only allow recalls from, say, Windows? At least this would be in "our" control, unlike getting Apple to change the finder.app behaviour.
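Christof's point above can be made concrete. SMB carries per-file attribute flags, and FILE_ATTRIBUTE_OFFLINE (0x1000 in the Windows file-attribute constants) is the bit a well-behaved client checks before reading data for a preview. A minimal illustrative sketch of that client-side decision (real clients make this choice inside Explorer or Finder, not in Python — this just shows the bit test):

```python
FILE_ATTRIBUTE_OFFLINE = 0x1000  # Windows/SMB file attribute flag
FILE_ATTRIBUTE_ARCHIVE = 0x20

def should_read_for_preview(attrs: int) -> bool:
    # A client that honours the OFFLINE bit skips preview reads for
    # migrated files, which is exactly what avoids a recall from tape.
    return not (attrs & FILE_ATTRIBUTE_OFFLINE)

print(should_read_for_preview(FILE_ATTRIBUTE_ARCHIVE))                           # True: data online
print(should_read_for_preview(FILE_ATTRIBUTE_ARCHIVE | FILE_ATTRIBUTE_OFFLINE))  # False: skip preview
```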
Then tell MacOS users to use Windows if they want to recall files, and pin the blame squarely on Apple to your users. I note that Linux is no better at honouring the offline bit in the SMB protocol than MacOS. Oh, the irony of Windows being the only mainstream OS handling HSM'ed files properly!

JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From christophe.darras at atempo.com Tue May 21 14:07:02 2019
From: christophe.darras at atempo.com (Christophe Darras)
Date: Tue, 21 May 2019 13:07:02 +0000
Subject: [gpfsug-discuss] Spectrum Scale GPFS User Group
Message-ID:

Hello all, I would like to thank you for welcoming me to this group! My name is Christophe Darras (Chris), based in London and in charge of Atempo for North Europe. We develop DATA MANAGEMENT solutions for Spectrum Scale*: automated data migration and high-performance backup, as well as archiving, retrieving and moving large data sets.

Kindest Regards, Chris

*and other File Systems and large NAS

Christophe DARRAS
Head of North Europe, Middle East & South Africa
Cell. : +44 7555 993 529
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From truston at mbari.org Tue May 21 18:59:05 2019
From: truston at mbari.org (Todd Ruston)
Date: Tue, 21 May 2019 10:59:05 -0700
Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: References: Message-ID: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org>

Hi Indulis,

Yes, thanks for the reminder. I'd come across that, and our system is currently set to a stub size of zero (the default, I presume). I'd intended to ask in my original query whether anyone had experimented and found an optimal value that prevents most common inadvertent recalls by Macs.
I know that will likely vary by file type, but since we have a broad mix of file types I figure a value that covers the majority of cases without being excessively large is the best we could implement. Our system is using 16MiB blocks, with 1024 subblocks. Is stub size bounded by full blocks, or subblocks? In other words, would we need to set the stub value to increments of 16MiB, or 16KiB? Cheers, - Todd > On May 21, 2019, at 2:34 AM, Indulis Bernsteins1 wrote: > > Have you tried looking at Spectrum Archive setting instead of Spectrum Scale? > > You can set both the size of the "stub file" that remains behind when a file is migrated, and also the amount of data which would need to be read before a recall is triggered. This might catch enough of your recall storms... or at least help! > > IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and Configuration Guide > http://www.redbooks.ibm.com/abstracts/sg248333.html?Open > > 7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file > IBM Spectrum Archive EE can define a stub size for migrated files so that the stub size initial > bytes of a migrated file are kept on disk while the entire file is migrated to tape. The migrated > file bytes that are kept on the disk are called the stub. Reading from the stub does not trigger > a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered. The > recall might take a long time while the entire file is read from tape because a tape mount > might be required, and it takes time to position the tape before data can be recalled from tape. > When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a > recall of the complete file in the background (asynchronous). Reads from the stubs are still > possible while the rest of the file is being recalled. After the rest of the file is recalled to disks, > reads from any file part are possible. 
> With the Preview Size (PS) value, a preview size can be set to define the initial file part size > for which any reads from the resident file part does not trigger a recall. Typically, the PS value > is large enough to see whether a recall of the rest of the file is required without triggering a > recall for reading from every stub. This process is important to prevent unintended massive > recalls. The PS value can be set only smaller than or equal to the stub size. > This feature is useful, for example, when playing migrated video files. While the initial stub > size part of a video file is played, the rest of the video file can be recalled to prevent a pause > when it plays beyond the stub size. You must set the stub size and preview size to be large > enough to buffer the time that is required to recall the file from tape without triggering recall > storms. > Use the following dsmmigfs command options to set both the stub size and preview size of > the file system being managed by IBM Spectrum Archive EE: > dsmmigfs Update -STUBsize > dsmmigfs Update -PREViewsize > The value for the STUBsize is a multiple of the IBM Spectrum Scale file system's block size. > This value can be obtained by running the mmlsfs . The PREViewsize parameter > must be equal to or less than the STUBsize value. Both parameters take a positive integer in > bytes. > > Regards, > > Indulis Bernsteins > Systems Architect > IBM New Generation Storage > Phone: +44 792 008 6548 > E-mail: INDULISB at UK.IBM.COM > > > Jackson House, Sibson Rd > Sale, Cheshire M33 7RR > United Kingdom > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Tue May 21 19:34:12 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 21 May 2019 20:34:12 +0200 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> References: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Message-ID: It's a multiple of full blocks. -jf On Tue, 21 May 2019 at 20:06, Todd Ruston wrote: > Hi Indulis, > > Yes, thanks for the reminder. I'd come across that, and our system is > currently set to a stub size of zero (the default, I presume). I'd intended > to ask in my original query whether anyone had experimented and found an > optimal value that prevents most common inadvertent recalls by Macs. I know > that will likely vary by file type, but since we have a broad mix of file > types I figure a value that covers the majority of cases without being > excessively large is the best we could implement. > > Our system is using 16MiB blocks, with 1024 subblocks. Is stub size > bounded by full blocks, or subblocks? In other words, would we need to set > the stub value to increments of 16MiB, or 16KiB? > > Cheers, > > - Todd > > > On May 21, 2019, at 2:34 AM, Indulis Bernsteins1 > wrote: > > Have you tried looking at Spectrum Archive setting instead of Spectrum > Scale? > > You can set both the size of the "stub file" that remains behind when a > file is migrated, and also the amount of data which would need to be read > before a recall is triggered. This might catch enough of your recall > storms... or at least help!
> > *IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and > Configuration Guide* > http://www.redbooks.ibm.com/abstracts/sg248333.html?Open > > *7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file* > IBM Spectrum Archive EE can define a stub size for migrated files so that > the stub size initial > bytes of a migrated file are kept on disk while the entire file is > migrated to tape. The migrated > file bytes that are kept on the disk are called the *stub*. Reading from > the stub does not trigger > a recall of the rest of the file. After the file is read beyond the stub, > the recall is triggered. The > recall might take a long time while the entire file is read from tape > because a tape mount > might be required, and it takes time to position the tape before data can > be recalled from tape. > When Read Start Recalls (RSR) is enabled for a file, the first read from > the stub file triggers a > recall of the complete file in the background (asynchronous). Reads from > the stubs are still > possible while the rest of the file is being recalled. After the rest of > the file is recalled to disks, > reads from any file part are possible. > With the Preview Size (PS) value, a preview size can be set to define the > initial file part size > for which any reads from the resident file part does not trigger a recall. > Typically, the PS value > is large enough to see whether a recall of the rest of the file is > required without triggering a > recall for reading from every stub. This process is important to prevent > unintended massive > recalls. The PS value can be set only smaller than or equal to the stub > size. > This feature is useful, for example, when playing migrated video files. > While the initial stub > size part of a video file is played, the rest of the video file can be > recalled to prevent a pause > when it plays beyond the stub size. 
You must set the stub size and preview > size to be large > enough to buffer the time that is required to recall the file from tape > without triggering recall > storms. > Use the following *dsmmigfs *command options to set both the stub size > and preview size of > the file system being managed by IBM Spectrum Archive EE: > *dsmmigfs Update -STUBsize* > *dsmmigfs Update -PREViewsize* > The value for the *STUBsize *is a multiple of the IBM Spectrum Scale file > system?s block size. > this value can be obtained by running the *mmlsfs *. The *PREViewsize > *parameter > must be equal to or less than the *STUBsize *value. Both parameters take > a positive integer in > bytes. > > Regards, > > *Indulis Bernsteins* > Systems Architect > IBM New Generation Storage > > ------------------------------ > *Phone:* +44 792 008 6548 > * E-mail:* *INDULISB at UK.IBM.COM * > [image: Description: Description: IBM] > > Jackson House, Sibson Rd > Sale, Cheshire M33 7RR > United Kingdom > Attachment.png> > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
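[Editorial sketch, not part of the original thread] To put numbers on the redbook excerpt above, the following computes candidate STUBsize and PREViewsize values for a 16 MiB-block file system like Todd's. The block size is hard-coded here as an assumption; a real system would read it with `mmlsfs <fs> -B`, and the exact `dsmmigfs Update` syntax should be checked against the Spectrum Archive EE documentation rather than taken from this sketch.

```shell
# Hypothetical sizing arithmetic only -- no GPFS/Spectrum Archive
# commands are executed here.
BLOCKSIZE=$((16 * 1024 * 1024))   # assumed 16 MiB file system block size
STUBSIZE=$((2 * BLOCKSIZE))       # STUBsize must be a multiple of the full block size
PREVIEWSIZE=$BLOCKSIZE            # PREViewsize must be <= STUBsize
echo "STUBsize=$STUBSIZE PREViewsize=$PREVIEWSIZE"
# These values would then go into (per the redbook excerpt above):
#   dsmmigfs Update -STUBsize=$STUBSIZE -PREViewsize=$PREVIEWSIZE <fs>
```

Note the consequence for this system: with stub size bounded by full blocks, the smallest non-zero stub on a 16 MiB-block file system is a full 16 MiB per migrated file.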
URL: From makaplan at us.ibm.com Tue May 21 19:40:56 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 21 May 2019 14:40:56 -0400 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Message-ID: https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.0/com.ibm.itsm.hsmul.doc/c_mig_stub_size.html Trust but verify. And try it before you buy it. (Personally, I would have guessed sub-block, doc says otherwise, but I'd try it nevertheless.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Tue May 21 19:59:14 2019 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 21 May 2019 18:59:14 +0000 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk> References: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk>, <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Message-ID: An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Wed May 22 09:50:22 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 22 May 2019 10:50:22 +0200 Subject: [gpfsug-discuss] Save the date - User Meeting along ISC Frankfurt Message-ID: Greetings: IBM will host a joint "IBM Spectrum Scale and IBM Spectrum LSF User Meeting" at ISC. As with other user group meetings, the agenda will include user stories, updates on IBM Spectrum Scale & IBM Spectrum LSF, and access to IBM experts and your peers. We are still looking for customers to talk about their experience with Spectrum Scale and/or Spectrum LSF. Please send me a personal mail, if you are interested to talk. The meeting is planned for: Monday June 17th, 2019 - 1pm-5.30pm ISC Frankfurt, Germany I will send more details later. 
Best, Ulf -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From INDULISB at uk.ibm.com Wed May 22 11:19:55 2019 From: INDULISB at uk.ibm.com (Indulis Bernsteins1) Date: Wed, 22 May 2019 11:19:55 +0100 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: Message-ID: There was some horrible way to do the same thing in previous versions of Spectrum Archive using the policy engine, which was more granular than the dsmmigfs command is now. I will ask one of the Scale developers if the developers might think about allowing multiples of the sub-block size, as this would make sense - 16 MiB is a very big stub to leave behind! Regards, Indulis Bernsteins Systems Architect IBM New Generation Storage Phone: +44 792 008 6548 E-mail: INDULISB at UK.IBM.COM Jackson House, Sibson Rd Sale, Cheshire M33 7RR United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10012 bytes Desc: not available URL: From l.walid at powerm.ma Thu May 23 00:59:40 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Wed, 22 May 2019 23:59:40 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows Message-ID: Hi, We are contacting you regarding a behavior observed on our customer's GPFS SMB shares. When we view file/folder properties, the size reported is significantly different from the size on disk for folders and files. We tried to reproduce this by creating a simple 1 KB text file, and when we checked its properties it showed 1 MB on disk! I tried changing the block size of the file system from 4M to 256k, but got the same results. Thank you -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From l.walid at powerm.ma Thu May 23 02:00:17 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Thu, 23 May 2019 01:00:17 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows In-Reply-To: References: Message-ID: Hi Everyone, Through some research, I found this is normal behavior related to the Samba "allocation roundup size" parameter; since CES SMB is based on Samba, that explains the behavior (Windows assumes that the default size for a block is 1M). I also found elsewhere that changing this parameter can decrease performance, so please advise on this if possible. For the block size on the filesystem I would still go with 256k, since it's the recommended value for file-serving use cases. Thank you References : https://lists.samba.org/archive/samba-technical/2016-July/115166.html On Wed, May 22, 2019 at 11:59 PM L.walid (PowerM) wrote: > Hi, > > We are contacting you regarding a behavior observed for our customer gpfs > smb shares. When we try to view the file/folder properties, the values > reported are significantly different from the folder/size and the > folder/file size on disk. > > We tried to reproduce with creating a simple text file of 1ko and when we > check the properties of the file it was a 1Mo on disk! > > I tried changing the block size of the fs from 4M to 256k , but still the > same results > > Thank you > -- > Best regards, > > Walid Largou > Senior IT Specialist > Power Maroc > Mobile : +212 621 31 98 71 > Email: l.walid at powerm.ma > 320 Bd Zertouni 6th Floor, Casablanca, Morocco > https://www.powerm.ma > > > This message is confidential .Its contents do not constitute a commitment > by Power Maroc S.A.R.L except where provided for in a written agreement > between you and Power Maroc S.A.R.L. Any authorized disclosure, use or > dissemination, either whole or partial, is prohibited.
If you are not the > intended recipient of the message, please notify the sender immediately. > -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 62 <+212%20661%2015%2021%2055>1 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential .Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any authorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From christof.schmitt at us.ibm.com Thu May 23 05:00:46 2019 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 23 May 2019 04:00:46 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: From oluwasijibomi.saula at ndsu.edu Thu May 23 18:40:03 2019 From: oluwasijibomi.saula at ndsu.edu (Saula, Oluwasijibomi) Date: Thu, 23 May 2019 17:40:03 +0000 Subject: [gpfsug-discuss] Reason for shutdown: Reset old shared segment In-Reply-To: References: Message-ID: Hey Folks, I got a strange message one of my HPC cluster nodes that I'm hoping to understand better: "Reason for shutdown: Reset old shared segment" 2019-05-23_11:47:07.328-0500: [I] This node has a valid standard license 2019-05-23_11:47:07.327-0500: [I] Initializing the fast condition variables at 0x555557115300 ... 2019-05-23_11:47:07.328-0500: [I] mmfsd initializing. {Version: 5.0.0.0 Built: Dec 10 2017 16:59:21} ... 2019-05-23_11:47:07.328-0500: [I] Cleaning old shared memory ... 
2019-05-23_11:47:07.328-0500: [N] mmfsd is shutting down. 2019-05-23_11:47:07.328-0500: [N] Reason for shutdown: Reset old shared segment Shortly after, GPFS is back up without any intervention: 2019-05-23_11:47:52.685-0500: [N] Remounted gpfs1 2019-05-23_11:47:52.691-0500: [N] mmfsd ready I'm supposing this has to do with memory usage??... Thanks, Siji Saula HPC System Administrator Center for Computationally Assisted Science & Technology NORTH DAKOTA STATE UNIVERSITY Research 2 Building, Room 220B Dept 4100, PO Box 6050 / Fargo, ND 58108-6050 p:701.231.7749 www.ccast.ndsu.edu | www.ndsu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Thu May 23 19:16:33 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Thu, 23 May 2019 14:16:33 -0400 Subject: [gpfsug-discuss] Reason for shutdown: Reset old shared segment In-Reply-To: References: Message-ID: (Somewhat educated guess.) Somehow a previous incarnation of the mmfsd daemon was killed, but left its shared segment lying about. When GPFS is restarted, it discovers the old segment and deallocates it, etc, etc... Then the safest, easiest thing to do after going down that error recovery path is to quit and (re)start GPFS as if none of that ever happened. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpergamin at ddn.com Wed May 29 12:54:46 2019 From: rpergamin at ddn.com (Ran Pergamin) Date: Wed, 29 May 2019 11:54:46 +0000 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Message-ID: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Hi All, My customer has some nodes in the cluster which currently have their second IB port disabled. Spectrum Scale 4.2.3 update 13. Port 1 is defined in verbsPorts, yet the sysmon monitor reports errors on port 2 despite it not being used.
I found an old posting claiming this would be solved in 4.2.3 update 5, yet there is nothing about it in the 4.2.3 update 7 release notes. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html The sensor file says filters are not supported and apply to ALL nodes, so that is no help where I need to ignore it. Any idea how I can disable the sensor check on mlx4_0/2 on some of the nodes? Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From spectrumscale at kiranghag.com Wed May 29 13:14:17 2019 From: spectrumscale at kiranghag.com (KG) Date: Wed, 29 May 2019 17:44:17 +0530 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. In-Reply-To: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: This is a per-node setting, so you should be able to set the correct port for each node (mmchconfig -N). On Wed, May 29, 2019 at 5:24 PM Ran Pergamin wrote: > Hi All, > > My customer has some nodes in the cluster which current have their second > IB port disabled. > Spectrum scale 4.2.3 update 13.
> > Port 1 is defined in verbs port, yet sysmoncon monitor and reports error > on port 2 despite not being used. > > I found an old listing claiming it will be solved in in 4.2.3-update5, yet > nothing in 4.2.3-update7 release notes, about it. > > > https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html > > Filters in sensor file say filters are not support + apply to ALL nodes, > so no relevant where I need to ignore it. > > Any idea how can I disable the check of sensor on mlx4_0/2 on some of the > nodes ? > > > > Node name: cff003-ib0.chemfarm > > Node status: DEGRADED > > Status Change: 2019-05-29 12:29:49 > > > > Component Status Status Change Reasons > > > ------------------------------------------------------------------------------------------------------------------------------------------------- > > GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small > > NETWORK DEGRADED 2019-05-29 12:29:49 > ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), > ib_rdma_nic_unrecognized(mlx4_0/2) > > ib0 HEALTHY 2019-05-29 12:29:49 - > > mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - > > * mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, > ib_rdma_nic_down, ib_rdma_nic_unrecognized* > > FILESYSTEM HEALTHY 2019-05-29 12:29:48 - > > apps HEALTHY 2019-05-29 12:29:48 - > > data HEALTHY 2019-05-29 12:29:48 - > > PERFMON HEALTHY 2019-05-29 12:29:33 - > > THRESHOLD HEALTHY 2019-05-29 12:29:18 - > > > > > Thanks ! > > Regards, > Ran > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From MDIETZ at de.ibm.com Wed May 29 13:19:51 2019 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Wed, 29 May 2019 14:19:51 +0200 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. 
In-Reply-To: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: Hi Ran, please double check that port 2 config is not yet active for the running mmfsd daemon. When changing the verbsPorts, the daemon keeps using the old value until a restart is done. mmdiag --config | grep verbsPorts Mit freundlichen Grüßen / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Ran Pergamin To: gpfsug main discussion list Date: 29/05/2019 13:54 Subject: [EXTERNAL] [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, My customer has some nodes in the cluster which current have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbs port, yet sysmoncon monitor and reports error on port 2 despite not being used. I found an old listing claiming it will be solved in in 4.2.3-update5, yet nothing in 4.2.3-update7 release notes, about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html Filters in sensor file say filters are not support + apply to ALL nodes, so no relevant where I need to ignore it.
Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=9dCEbNr27klWay2AcOfvOE1xq50K-CyRUu4qQx4HOlk&m=nFF5UhMPmV8schGYYE3L6ZG86b1SiY3-eXi4mz3CQxE&s=Y2emO_gUxLk44-GrE4_tOeQKWZsH1fZgNP4tELnjx_g&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpergamin at ddn.com Wed May 29 13:26:40 2019 From: rpergamin at ddn.com (Ran Pergamin) Date: Wed, 29 May 2019 12:26:40 +0000 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. In-Reply-To: References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: Thanks All. Solved it. The other port Link Layer was in autosense rather than IB. Once changed the Link Layer to IB the false report cleared. I assume that?s the auth fix that was applied. 
Regards, Ran From: on behalf of Mathias Dietz Reply-To: gpfsug main discussion list Date: Wednesday, 29 May 2019 at 15:20 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Hi Ran, please double check that port 2 config is not yet active for the running mmfsd daemon. When changing the verbsPorts, the daemon keeps using the old value until a restart is done. mmdiag --config | grep verbsPorts Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Ran Pergamin To: gpfsug main discussion list Date: 29/05/2019 13:54 Subject: [EXTERNAL] [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, My customer has some nodes in the cluster which current have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbs port, yet sysmoncon monitor and reports error on port 2 despite not being used. I found an old listing claiming it will be solved in in 4.2.3-update5, yet nothing in 4.2.3-update7 release notes, about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html Filters in sensor file say filters are not support + apply to ALL nodes, so no relevant where I need to ignore it. 
Any idea how can I disable the check of sensor on mlx4_0/2 on some of the nodes ? Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mweil at wustl.edu Fri May 31 19:56:38 2019 From: mweil at wustl.edu (Weil, Matthew) Date: Fri, 31 May 2019 18:56:38 +0000 Subject: [gpfsug-discuss] Gateway role on a NSD server Message-ID: Hello all, How important is it to separate these two roles.? planning on using AFM and I am wondering if we should have the gateways on different nodes than the NSDs.? Any opinions?? What about fail overs and maintenance?? Could one role effect the other? Thanks Matt From cblack at nygenome.org Fri May 31 20:09:46 2019 From: cblack at nygenome.org (Christopher Black) Date: Fri, 31 May 2019 19:09:46 +0000 Subject: [gpfsug-discuss] Gateway role on a NSD server Message-ID: <59BC2553-2F56-4863-A353-C2E2062DA92D@nygenome.org> We've done it both ways. 
You will get better performance and fewer challenges in ensuring processes and memory don't step on each other if the AFM gateway node is not also doing NSD server work. However, using an NSD server that mounts two filesystems (one via mmremotefs from another cluster) did work. Best, Chris

On 5/31/19, 2:56 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Weil, Matthew" wrote: Hello all, How important is it to separate these two roles? Planning on using AFM, and I am wondering if we should have the gateways on different nodes than the NSDs. Any opinions? What about failovers and maintenance? Could one role affect the other? Thanks Matt _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss

________________________________ This message is for the recipient's use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email.

From p.childs at qmul.ac.uk Tue May 7 15:35:26 2019 From: p.childs at qmul.ac.uk (Peter Childs) Date: Tue, 7 May 2019 14:35:26 +0000 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL In-Reply-To: References: Message-ID: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk> On Sat, 2019-04-06 at 23:50 +0200, Michal Zacek wrote: Hello, we decided to convert NFS4 ACLs to POSIX (we need to share the same data between SMB, NFS and GPFS clients), so I created a script to convert NFS4 to POSIX ACLs. It is very simple: first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that the conversion to POSIX ACLs has taken more than 2TB of metadata space. There are about one hundred million files on the GPFS filesystem. Is this expected behavior?
Thanks, Michal

Example of NFS4 acl:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwx-:allow
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED
special:group@:----:allow
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED
special:everyone@:----:allow
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED
group:ag_cud_96_lab:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED
group:ag_cud_96_lab_ro:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (-)WRITE_NAMED

converted to posix acl:

# owner: root
# group: root
user::rwx
group::rwx
mask::rwx
other::---
default:user::rwx
default:group::rwx
default:mask::rwx
default:other::---
group:ag_cud_96_lab:rwx
default:group:ag_cud_96_lab:rwx
group:ag_cud_96_lab_ro:r-x
default:group:ag_cud_96_lab_ro:r-x

_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss

I've been trying to get my head round acls, with the plan to
implement Cluster Export Services SMB rather than roll your own SMB. I'm not sure that plan is going to work, Michal, although it might if you're not using the Cluster Export Services version of SMB.

Put simply, if you're running Cluster Export Services SMB you need to set ACLs in Spectrum Scale to "nfs4". We currently have it set to "all", and it won't let you export the shares until you change it. Currently I'm still testing, and have had to write a change to go the other way. If you're using Linux kernel NFSv4, that uses POSIX ACLs; however, CES NFS uses Ganesha, which uses NFS4 ACLs correctly.

It gets slightly more annoying, as nfs4-setfacl does not work with Spectrum Scale and you have to use mmputacl, which has no recursive flag. I even found an IBM article from a few years ago saying the best way to set ACLs is to use find and a temporary file..... The other workaround they suggest is to update ACLs from Windows or NFS to get them right.

One thing I think may happen if you do as you've suggested is that you will break any ACLs under Samba badly. I think the other reason that command is taking up more space than expected is that you're giving files ACLs that never had them to start with.

I would love someone to say that I'm wrong, as changing our ACL setting is going to be a pain: while we don't make a lot of use of them, we make enough that having to use NFS4 ACLs all the time is going to be a pain.

-- Peter Childs ITS Research Storage Queen Mary, University of London -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 7 16:16:52 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 7 May 2019 11:16:52 -0400 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL In-Reply-To: References: Message-ID: 2TB of extra metadata space for 100M files with ACLs?! I think that would be 20KB per file! Does seem there's some mistake here. Perhaps 2GB? Or 20GB? I don't see how we get to 2TeraBytes!
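A quick back-of-the-envelope check of the per-file figure above, using Michal's numbers (2 TB of extra metadata, about one hundred million files); the only step is the division, nothing GPFS-specific:

```python
# Michal's report: ~2 TB of extra metadata after adding POSIX ACLs
# to ~100 million files. What does that work out to per file?
extra_metadata = 2 * 1024**4        # 2 TiB in bytes
num_files = 100_000_000             # "about one hundred million files"

per_file = extra_metadata / num_files
print(f"{per_file:.0f} bytes per file (~{per_file / 1024:.1f} KiB)")
# ~21990 bytes per file, i.e. the "20KB per file" Marc arrives at
```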
ALSO, IIRC GPFS is supposed to use an ACL scheme where identical ACLs are stored once, and each file with the same ACL just has a pointer to that same ACL. So no matter how many files have a particular ACL, you only "pay" once... An ACL is stored more compactly than its printed format, so I'd guess your ordinary ACL with a few users and groups would be less than 200 bytes.

From: Michal Zacek Hello, we decided to convert NFS4 ACLs to POSIX (we need to share the same data between SMB, NFS and GPFS clients), so I created a script to convert NFS4 to POSIX ACLs. It is very simple: first I do "chmod -R 770 DIR" and then "setfacl -R ..... DIR". I was surprised that the conversion to POSIX ACLs has taken more than 2TB of metadata space. There are about one hundred million files on the GPFS filesystem. Is this expected behavior? Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Tue May 7 17:14:49 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 07 May 2019 17:14:49 +0100 Subject: [gpfsug-discuss] Metadata space usage NFS4 vs POSIX ACL In-Reply-To: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk> References: <28b67f3b9cf87ff05c9e6bde50fbf8b644920985.camel@qmul.ac.uk> Message-ID: On Tue, 2019-05-07 at 14:35 +0000, Peter Childs wrote: [SNIP] > It gets slightly more annoying, as nfs4-setfacl does not work with > Spectrum Scale and you have to use mmputacl, which has no recursive > flag. I even found an IBM article from a few years ago saying the best > way to set ACLs is to use find and a temporary file..... The other > workaround they suggest is to update ACLs from Windows or NFS to get > them right. > I am working on making my solution to that production ready. I decided, after doing a proof of concept with the Linux nfs4_[get|set]facl commands, that using the FreeBSD getfacl/setfacl commands as a basis would be better, as it could do both POSIX and NFSv4 ACLs out of the same program.
Note the initial version will be something of a bodge where we translate between the existing programs' representation of the ACL and the GPFS version as we read/write the ACLs. Longer term the code will need refactoring to use the GPFS structs throughout, I feel. Progress depends on my spare time.

JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG

From Robert.Oesterlin at nuance.com Wed May 8 15:29:57 2019 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 8 May 2019 14:29:57 +0000 Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups Message-ID: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com> Reference: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_configcesprotocolservipadd.htm

I have 3 CES servers with IP addresses:

Node1 10.30.43.14 (netmask 255.255.255.224) export IP 10.30.43.25
Node2 10.30.43.24 (netmask 255.255.255.224) export IP 10.30.43.27
Node3 10.30.43.133 (netmask 255.255.255.224) export IP 10.30.43.135

Which means node 3 is on a different vlan. I want to assign export addresses to them and keep the export IPs on the correct vlan. This looks like it can be done with groups, but I'm not sure if I have the grouping right. I was considering the following:

mmces address add --ces-ip 10.30.43.25 --ces-group vlan431
mmces address add --ces-ip 10.30.43.27 --ces-group vlan431
mmces address add --ces-ip 10.30.43.135 --ces-group vlan435

Which should mean nodes in group "vlan431" will get IPs 10.30.43.25, 10.30.43.27 and the node in group "vlan435" will get IP 10.30.43.135 (and it will remain unassigned if that node goes down). Do I have this right?

Bob Oesterlin Sr Principal Storage Engineer, Nuance -------------- next part -------------- An HTML attachment was scrubbed...
URL: From MDIETZ at de.ibm.com Wed May 8 16:58:59 2019 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Wed, 8 May 2019 17:58:59 +0200 Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups In-Reply-To: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com> References: <3825202F-F636-48F7-BC78-3F07764A6FAD@nuance.com> Message-ID:

Hi Bob, you also need to specify which CES groups a node can host:

mmchnode --ces-group vlan431 -N Node1,Node2
mmchnode --ces-group vlan435 -N Node3

Mit freundlichen Grüßen / Kind regards

Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From: "Oesterlin, Robert" To: gpfsug main discussion list Date: 08/05/2019 16:31 Subject: [gpfsug-discuss] CES IP addresses - multiple subnets, using groups Sent by: gpfsug-discuss-bounces at spectrumscale.org Reference: https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1adm_configcesprotocolservipadd.htm I have 3 CES servers with IP addresses: Node1 10.30.43.14 (netmask 255.255.255.224) export IP 10.30.43.25 Node2 10.30.43.24 (netmask 255.255.255.224) export IP 10.30.43.27 Node3 10.30.43.133 (netmask 255.255.255.224) export IP 10.30.43.135 Which means node 3 is on a different vlan. I want to assign export addresses to them and keep the export IPs on the correct vlan. This looks like it can be done with groups, but I'm not sure if I have the grouping right.
I was considering the following:

mmces address add --ces-ip 10.30.43.25 --ces-group vlan431
mmces address add --ces-ip 10.30.43.27 --ces-group vlan431
mmces address add --ces-ip 10.30.43.135 --ces-group vlan435

Which should mean nodes in group "vlan431" will get IPs 10.30.43.25, 10.30.43.27 and the node in group "vlan435" will get IP 10.30.43.135 (and will remain unassigned if that node goes down). Do I have this right? Bob Oesterlin Sr Principal Storage Engineer, Nuance _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From xhejtman at ics.muni.cz Wed May 8 17:03:59 2019 From: xhejtman at ics.muni.cz (Lukas Hejtmanek) Date: Wed, 8 May 2019 18:03:59 +0200 Subject: [gpfsug-discuss] gpfs and device number In-Reply-To: References: <20190426121733.jg6poxoykd2f5zxb@ics.muni.cz> Message-ID: <20190508160359.j4tzg3wpo3cnmp6y@ics.muni.cz> Hi, I use fsid=0 (having one export). It seems there is some incompatibility between GPFS and RedHat kernel 3.10.0-957. We have GPFS 5.0.2-1; I can see that 5.0.2-2 is tested, so maybe it is fixed in later GPFS versions. On Sat, Apr 27, 2019 at 10:37:48PM +0300, Tomer Perry wrote: > Hi, > > Please use the fsid option in /etc/exports ( man exports and: > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adm_nfslin.htm > ) > Also check > https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.3/com.ibm.spectrum.scale.v5r03.doc/bl1adv_cnfs.htm > in case you want HA with kernel NFS.
> > > > Regards, > > Tomer Perry > Scalable I/O Development (Spectrum Scale) > email: tomp at il.ibm.com > 1 Azrieli Center, Tel Aviv 67021, Israel > Global Tel: +1 720 3422758 > Israel Tel: +972 3 9188625 > Mobile: +972 52 2554625 > > > > > From: Lukas Hejtmanek > To: gpfsug-discuss at spectrumscale.org > Date: 26/04/2019 15:37 > Subject: [gpfsug-discuss] gpfs and device number > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > Hello, > > I noticed that from time to time, the device id of a GPFS volume is not the same > across the whole GPFS cluster. > 
> [root at kat1 ~]# stat /gpfs/vol1/
>   File: ‘/gpfs/vol1/’
>   Size: 262144  Blocks: 512  IO Block: 262144  directory
> Device: 28h/40d  Inode: 3
> 
> [root at kat2 ~]# stat /gpfs/vol1/
>   File: ‘/gpfs/vol1/’
>   Size: 262144  Blocks: 512  IO Block: 262144  directory
> Device: 2bh/43d  Inode: 3
> 
> [root at kat3 ~]# stat /gpfs/vol1/
>   File: ‘/gpfs/vol1/’
>   Size: 262144  Blocks: 512  IO Block: 262144  directory
> Device: 2ah/42d  Inode: 3
> 
> this is really bad for kernel NFS, as it uses the device id for file handles, thus > NFS failover leads to an NFS stale handle error. > > Is there a way to force a device number? > > -- > Lukáš Hejtmánek > > Linux Administrator only because > Full Time Multitasking Ninja > is not an official job title > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- Lukáš Hejtmánek Linux Administrator only because Full Time Multitasking Ninja is not an official job title

From stijn.deweirdt at ugent.be Thu May 9 15:12:10 2019 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 9 May 2019 16:12:10 +0200 Subject: [gpfsug-discuss] advanced filecache math Message-ID: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be>

hi all, we are looking into some memory issues with gpfs 5.0.2.2, and found the following in mmfsadm dump fs:

> fileCacheLimit 1000000 desired 1000000
...
> fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840)

the limit is 1M (we configured that); however, the fileCacheMem mentions 11.7M? this is also reported right after a mmshutdown/startup. how do these 2 relate (again?)? many thanks, stijn

From Achim.Rehor at de.ibm.com Thu May 9 15:34:31 2019 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 9 May 2019 16:34:31 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> Message-ID: An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 7182 bytes Desc: not available URL: From stijn.deweirdt at ugent.be Thu May 9 15:38:53 2019 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 9 May 2019 16:38:53 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> Message-ID: <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> hi achim, > you just misinterpreted the term fileCacheLimit. > This is not in byte, but specifies the maxFilesToCache setting : i understand that, but how does the fileCacheLimit relate to the fileCacheMem number?
(we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we are looking for large numbers that might explain wtf is going on (pardon my french ;) stijn > > UMALLOC limits: > bufferDescLimit 40000 desired 40000 > fileCacheLimit 4000 desired 4000 <=== mFtC > statCacheLimit 1000 desired 1000 <=== mSC > diskAddrBuffLimit 200 desired 200 > > # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" > maxFilesToCache 4000 > maxStatCache 1000 > > Mit freundlichen Gr??en / Kind regards > > *Achim Rehor* > > -------------------------------------------------------------------------------- > Software Technical Support Specialist AIX/ Emea HPC Support > IBM Certified Advanced Technical Expert - Power Systems with AIX > TSCC Software Service, Dept. 7922 > Global Technology Services > -------------------------------------------------------------------------------- > Phone: +49-7034-274-7862 IBM Deutschland > E-Mail: Achim.Rehor at de.ibm.com Am Weiher 24 > 65451 Kelsterbach > Germany > > -------------------------------------------------------------------------------- > IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz, > Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB > 14562 WEEE-Reg.-Nr. DE 99369940 > > > > > > > From: Stijn De Weirdt > To: gpfsug main discussion list > Date: 09/05/2019 16:21 > Subject: [gpfsug-discuss] advanced filecache math > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > -------------------------------------------------------------------------------- > > > > hi all, > > we are looking into some memory issues with gpfs 5.0.2.2, and found > following in mmfsadm dump fs: > > > fileCacheLimit 1000000 desired 1000000 > ... 
> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) > > the limit is 1M (we configured that), however, the fileCacheMem mentions > 11.7M? > > this is also reported right after a mmshutdown/startup. > > how do these 2 relate (again?)? > > mnay thanks, > > stijn > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From stijn.deweirdt at ugent.be Thu May 9 15:48:13 2019 From: stijn.deweirdt at ugent.be (Stijn De Weirdt) Date: Thu, 9 May 2019 16:48:13 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> Message-ID: <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> seems like we are suffering from http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737 as these are ces nodes, we susepcted something wrong the caches, but it looks like a memleak instead. sorry for the noise (as usual you find the solution right after sending the mail ;) stijn On 5/9/19 4:38 PM, Stijn De Weirdt wrote: > hi achim, > >> you just misinterpreted the term fileCacheLimit. >> This is not in byte, but specifies the maxFilesToCache setting : > i understand that, but how does the fileCacheLimit relate to the > fileCacheMem number? 
> > > > (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we > are looking for large numbers that might explain wtf is going on > (pardon my french ;) > > stijn > >> >> UMALLOC limits: >> bufferDescLimit 40000 desired 40000 >> fileCacheLimit 4000 desired 4000 <=== mFtC >> statCacheLimit 1000 desired 1000 <=== mSC >> diskAddrBuffLimit 200 desired 200 >> >> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" >> maxFilesToCache 4000 >> maxStatCache 1000 >> >> Mit freundlichen Gr??en / Kind regards >> >> *Achim Rehor* >> >> -------------------------------------------------------------------------------- >> Software Technical Support Specialist AIX/ Emea HPC Support >> IBM Certified Advanced Technical Expert - Power Systems with AIX >> TSCC Software Service, Dept. 7922 >> Global Technology Services >> -------------------------------------------------------------------------------- >> Phone: +49-7034-274-7862 IBM Deutschland >> E-Mail: Achim.Rehor at de.ibm.com Am Weiher 24 >> 65451 Kelsterbach >> Germany >> >> -------------------------------------------------------------------------------- >> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter >> Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz, >> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt >> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB >> 14562 WEEE-Reg.-Nr. DE 99369940 >> >> >> >> >> >> >> From: Stijn De Weirdt >> To: gpfsug main discussion list >> Date: 09/05/2019 16:21 >> Subject: [gpfsug-discuss] advanced filecache math >> Sent by: gpfsug-discuss-bounces at spectrumscale.org >> >> -------------------------------------------------------------------------------- >> >> >> >> hi all, >> >> we are looking into some memory issues with gpfs 5.0.2.2, and found >> following in mmfsadm dump fs: >> >> > fileCacheLimit 1000000 desired 1000000 >> ... 
>> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) >> >> the limit is 1M (we configured that), however, the fileCacheMem mentions >> 11.7M? >> >> this is also reported right after a mmshutdown/startup. >> >> how do these 2 relate (again?)? >> >> mnay thanks, >> >> stijn >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From Achim.Rehor at de.ibm.com Thu May 9 17:52:14 2019 From: Achim.Rehor at de.ibm.com (Achim Rehor) Date: Thu, 9 May 2019 18:52:14 +0200 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be><173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> Message-ID: An HTML attachment was scrubbed... URL: From oehmes at gmail.com Thu May 9 18:24:42 2019 From: oehmes at gmail.com (Sven Oehme) Date: Thu, 9 May 2019 18:24:42 +0100 Subject: [gpfsug-discuss] advanced filecache math In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> Message-ID: Unfortunate more complicated :) The consumption here is an estimate based on 512b inodes, which no newly created filesystem has as all new default to 4k. So if you have 4k inodes you could easily need 2x of the estimated value. Then there are extended attributes, also not added here, etc . 
So don't take this number as usage, it's really just a rough estimate. Sven On Thu, May 9, 2019, 5:53 PM Achim Rehor wrote: > Sorry for my fast (and not well thought out) answer before. You obviously > are correct, there is no relation between the setting of maxFilesToCache, and the > > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + > 2840) > > usage. it is rather a statement of how many metadata may fit in the > remaining structures outside the pagepool. this value does not change at > all, when you modify your mFtC setting. > > There is a really good presentation by Tomer Perry on the User Group > meetings, referring about memory footprint of GPFS under various conditions. > > In your case, you may very well hit the CES nodes memleak you just pointed > out. > > Sorry for my hasty reply earlier ;) > > Achim > > > > From: Stijn De Weirdt > To: gpfsug-discuss at spectrumscale.org > Date: 09/05/2019 16:48 > Subject: Re: [gpfsug-discuss] advanced filecache math > Sent by: gpfsug-discuss-bounces at spectrumscale.org > ------------------------------ > > > > seems like we are suffering from > http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737 > > as these are ces nodes, we suspected something wrong with the caches, but it > looks like a memleak instead. > > sorry for the noise (as usual you find the solution right after sending > the mail ;) > > stijn > > On 5/9/19 4:38 PM, Stijn De Weirdt wrote: > > hi achim, > > > >> you just misinterpreted the term fileCacheLimit. > >> This is not in byte, but specifies the maxFilesToCache setting : > > i understand that, but how does the fileCacheLimit relate to the > > fileCacheMem number?
> > > > > > > > (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we > > are looking for large numbers that might explain wtf is going on > > (pardon my french ;) > > > > stijn > > > >> > >> UMALLOC limits: > >> bufferDescLimit 40000 desired 40000 > >> fileCacheLimit 4000 desired 4000 <=== mFtC > >> statCacheLimit 1000 desired 1000 <=== mSC > >> diskAddrBuffLimit 200 desired 200 > >> > >> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache" > >> maxFilesToCache 4000 > >> maxStatCache 1000 > >> > >> Mit freundlichen Gr??en / Kind regards > >> > >> *Achim Rehor* > >> > >> > -------------------------------------------------------------------------------- > >> Software Technical Support Specialist AIX/ Emea HPC Support > > >> IBM Certified Advanced Technical Expert - Power Systems with AIX > >> TSCC Software Service, Dept. 7922 > >> Global Technology Services > >> > -------------------------------------------------------------------------------- > >> Phone: +49-7034-274-7862 IBM > Deutschland > >> E-Mail: Achim.Rehor at de.ibm.com Am > Weiher 24 > >> 65451 Kelsterbach > >> Germany > >> > >> > -------------------------------------------------------------------------------- > >> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter > >> Gesch?ftsf?hrung: Martin Hartmann (Vorsitzender), Norbert Janzen, > Stefan Lutz, > >> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt > >> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht > Stuttgart, HRB > >> 14562 WEEE-Reg.-Nr. 
DE 99369940 > >> > >> > >> > >> > >> > >> > >> From: Stijn De Weirdt > >> To: gpfsug main discussion list > >> Date: 09/05/2019 16:21 > >> Subject: [gpfsug-discuss] advanced filecache math > >> Sent by: gpfsug-discuss-bounces at spectrumscale.org > >> > >> > -------------------------------------------------------------------------------- > >> > >> > >> > >> hi all, > >> > >> we are looking into some memory issues with gpfs 5.0.2.2, and found > >> following in mmfsadm dump fs: > >> > >> > fileCacheLimit 1000000 desired 1000000 > >> ... > >> > fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size > 512 + 2840) > >> > >> the limit is 1M (we configured that), however, the fileCacheMem mentions > >> 11.7M? > >> > >> this is also reported right after a mmshutdown/startup. > >> > >> how do these 2 relate (again?)? > >> > >> mnay thanks, > >> > >> stijn > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > >> > >> > >> > >> > >> _______________________________________________ > >> gpfsug-discuss mailing list > >> gpfsug-discuss at spectrumscale.org > >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > >> > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jjdoherty at yahoo.com Thu May 9 22:07:55 2019 From: jjdoherty at yahoo.com (Jim Doherty) Date: Thu, 9 May 2019 21:07:55 +0000 (UTC) Subject: Re: [gpfsug-discuss] advanced filecache math In-Reply-To: References: <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be> Message-ID: <881377935.34017.1557436075166@mail.yahoo.com>

A couple of observations on memory: a maxFilesToCache object takes anywhere from 6-10K, so 1 million =~ 6-10 Gig. Memory utilized in the mmfsd comes from either the pagepool, the shared memory segment used by MFTC objects, the token memory segment used to track MFTC objects, and (newer) memory used by AFM. If the memory resources are in the mmfsd address space then this will show in the RSS size of the mmfsd. Ignore the VMM size; since the glibc change a while back to allocate a heap for each thread, VMM has become an imaginary number for a large multi-threaded application. There have been some memory leaks fixed in Ganesha that will be in 4.2.3 PTF15, which is available on fixcentral. Jim Doherty

On Thursday, May 9, 2019, 1:25:03 PM EDT, Sven Oehme wrote: Unfortunate more complicated :) The consumption here is an estimate based on 512b inodes, which no newly created filesystem has as all new default to 4k. So if you have 4k inodes you could easily need 2x of the estimated value. Then there are extended attributes, also not added here, etc. So don't take this number as usage, it's really just a rough estimate. Sven On Thu, May 9, 2019, 5:53 PM Achim Rehor wrote: Sorry for my fast (and not well thought) answer, before. You obviously are correct, there is no relation between the setting of maxFilesToCache, and the fileCacheMem 38359956 k = 11718554 * 3352 bytes (inode size 512 + 2840) usage.
It is rather a statement of how many metadata objects may fit in the remaining structures outside the pagepool. This value does not change at all when you modify your mFtC setting.

There is a really good presentation by Tomer Perry from the User Group meetings about the memory footprint of GPFS under various conditions.

In your case, you may very well hit the CES node memory leak you just pointed out.

Sorry for my hasty reply earlier ;)

Achim

From: Stijn De Weirdt
To: gpfsug-discuss at spectrumscale.org
Date: 09/05/2019 16:48
Subject: Re: [gpfsug-discuss] advanced filecache math
Sent by: gpfsug-discuss-bounces at spectrumscale.org

seems like we are suffering from
http://www-01.ibm.com/support/docview.wss?uid=isg1IJ12737

as these are ces nodes, we suspected something wrong with the caches, but
it looks like a memleak instead.

sorry for the noise (as usual you find the solution right after sending
the mail ;)

stijn

On 5/9/19 4:38 PM, Stijn De Weirdt wrote:
> hi achim,
>
>> you just misinterpreted the term fileCacheLimit.
>> This is not in byte, but specifies the maxFilesToCache setting:
> i understand that, but how does the fileCacheLimit relate to the
> fileCacheMem number?
>
> (we have a 32GB pagepool, and mmfsd is using 80GB RES (101 VIRT), so we
> are looking for large numbers that might explain wtf is going on
> (pardon my french ;)
>
> stijn
>
>> UMALLOC limits:
>>     bufferDescLimit      40000 desired     40000
>>     fileCacheLimit        4000 desired      4000   <=== mFtC
>>     statCacheLimit        1000 desired      1000   <=== mSC
>>     diskAddrBuffLimit      200 desired       200
>>
>> # mmfsadm dump config | grep -E "maxFilesToCache|maxStatCache"
>>     maxFilesToCache 4000
>>     maxStatCache 1000
>>
>> Mit freundlichen Grüßen / Kind regards
>>
>> *Achim Rehor*
>>
>> --------------------------------------------------------------------------------
>> Software Technical Support Specialist AIX / EMEA HPC Support
>> IBM Certified Advanced Technical Expert - Power Systems with AIX
>> TSCC Software Service, Dept. 7922
>> Global Technology Services
>> --------------------------------------------------------------------------------
>> Phone: +49-7034-274-7862                    IBM Deutschland
>> E-Mail: Achim.Rehor at de.ibm.com              Am Weiher 24
>>                                             65451 Kelsterbach
>>                                             Germany
>> --------------------------------------------------------------------------------
>> IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter
>> Geschäftsführung: Martin Hartmann (Vorsitzender), Norbert Janzen, Stefan Lutz,
>> Nicole Reimer, Dr. Klaus Seifert, Wolfgang Wendt
>> Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB
>> 14562 WEEE-Reg.-Nr. DE 99369940
>>
>> From: Stijn De Weirdt
>> To: gpfsug main discussion list
>> Date: 09/05/2019 16:21
>> Subject: [gpfsug-discuss] advanced filecache math
>> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>>
>> hi all,
>>
>> we are looking into some memory issues with gpfs 5.0.2.2, and found the
>> following in mmfsadm dump fs:
>>
>> >    fileCacheLimit     1000000 desired    1000000
>> ...
>> >    fileCacheMem     38359956 k  = 11718554 * 3352 bytes (inode size 512 + 2840)
>>
>> the limit is 1M (we configured that), however, the fileCacheMem mentions
>> 11.7M?
>>
>> this is also reported right after a mmshutdown/startup.
>>
>> how do these 2 relate (again?)?
>> many thanks,
>>
>> stijn
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anobre at br.ibm.com Thu May 9 22:51:37 2019
From: anobre at br.ibm.com (Anderson Ferreira Nobre)
Date: Thu, 9 May 2019 21:51:37 +0000
Subject: [gpfsug-discuss] advanced filecache math
In-Reply-To: <881377935.34017.1557436075166@mail.yahoo.com>
References: <881377935.34017.1557436075166@mail.yahoo.com>, <130a45aa-3ce1-6b8d-5a66-d97be054e7a4@ugent.be> <173df898-a593-b7a0-a0de-b916011bb50d@ugent.be> <02fe2558-bcb7-dc22-3a86-4289b60aa716@ugent.be>
Message-ID: 

An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at bham.ac.uk Mon May 13 14:11:06 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Mon, 13 May 2019 13:11:06 +0000
Subject: [gpfsug-discuss] IO-500 and POWER9
Message-ID: 

Hi,

I was wondering if anyone has done anything with the IO-500 and POWER9 systems at all? One of the benchmarks (IOR-HARD-READ) always fails.
Having Slack'd the developers, they said: "It looks like data is not synchronized" and "maybe a setting in GPFS is missing, e.g. locking, synchronization, ...". Now I didn't think there was any way to disable locking in GPFS. We tried some different byte settings for the read and this made the error go away, which apparently indicates "locking issue -> false sharing of blocks".

We found that 1 or 2 nodes = OK. > 2 nodes breaks with 2ppn, > 2 nodes is OK with 1ppn. (We also got some fsstruct errors when running the mdtests - I have a PMR open for that.)

Interestingly I ran the test on a bunch of x86 systems, and that ran fine.

So - anyone got any POWER9 (ac922) they could try, to see if the benchmarks work for them (just running the ior_hard tests is fine)? Or anyone got any suggestions?

These are all running Red Hat 7.5 and 5.0.2.3 code, BTW.

Thanks

Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From A.Turner at lboro.ac.uk Tue May 14 09:47:12 2019
From: A.Turner at lboro.ac.uk (Aaron Turner)
Date: Tue, 14 May 2019 08:47:12 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Renar.Grunenberg at huk-coburg.de Tue May 14 09:58:07 2019
From: Renar.Grunenberg at huk-coburg.de (Grunenberg, Renar)
Date: Tue, 14 May 2019 08:58:07 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
In-Reply-To: 
References: 
Message-ID: 

Hallo Aaron,

the granularity for handling storage capacity in Scale is the disk, during creation of the filesystem. These disks are created as NSDs that represent your physical LUNs. Per fs there is a unique set of NSDs == disks per filesystem. What you want is possible, no problem.

Regards Renar

Renar Grunenberg
Abteilung Informatik - Betrieb
HUK-COBURG
Bahnhofsplatz
96444 Coburg
Telefon: 09561 96-44110
Telefax: 09561 96-44104
E-Mail: Renar.Grunenberg at huk-coburg.de
Internet: www.huk.de
________________________________
HUK-COBURG Haftpflicht-Unterstützungs-Kasse kraftfahrender Beamter Deutschlands a. G. in Coburg
Reg.-Gericht Coburg HRB 100; St.-Nr. 9212/101/00021
Sitz der Gesellschaft: Bahnhofsplatz, 96444 Coburg
Vorsitzender des Aufsichtsrats: Prof. Dr. Heinrich R. Schradin.
Vorstand: Klaus-Jürgen Heitmann (Sprecher), Stefan Gronbach, Dr. Hans Olav Herøy, Dr. Jörg Rheinländer (stv.), Sarah Rössler, Daniel Thomas.
________________________________
Diese Nachricht enthält vertrauliche und/oder rechtlich geschützte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese Nachricht irrtümlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Nachricht. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Nachricht ist nicht gestattet.

This information may contain confidential and/or privileged information. If you are not the intended recipient (or have received this information in error) please notify the sender immediately and destroy this information. Any unauthorized copying, disclosure or distribution of the material in this information is strictly forbidden.
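To make this concrete: the disk-to-filesystem mapping is expressed through NSD stanza files, so keeping one stanza file per filesystem gives exactly the separation asked about. A minimal sketch - the device paths, NSD names and server name below are placeholders, not details from this thread:

```shell
# Hypothetical stanza file for filesystem A (device/NSD/server names invented).
cat > /tmp/fsA.stanza <<'EOF'
%nsd: device=/dev/sda nsd=nsd_a0 servers=nsdserver1 usage=dataAndMetadata failureGroup=1
%nsd: device=/dev/sdb nsd=nsd_a1 servers=nsdserver1 usage=dataAndMetadata failureGroup=1
EOF

# A second file (fsB.stanza) would list disks b0..bn-1 the same way. Then,
# on a cluster node (not run here):
#   mmcrnsd -F /tmp/fsA.stanza
#   mmcrfs fsA -F /tmp/fsA.stanza
# Destroying filesystem B later (mmdelfs fsB, then mmdelnsd on its NSDs)
# never touches the nsd_a* disks.

grep -c '^%nsd' /tmp/fsA.stanza    # prints 2
```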
________________________________
Von: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] Im Auftrag von Aaron Turner
Gesendet: Dienstag, 14. Mai 2019 10:47
An: gpfsug-discuss at spectrumscale.org
Betreff: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1... bn-1
* want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation
* potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1

Is this possible in GPFS?

Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From S.J.Thompson at bham.ac.uk Tue May 14 10:08:28 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Tue, 14 May 2019 09:08:28 +0000
Subject: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 

When you create the file-system, you create NSD devices (on physical disks - usually LUNs), and then assign these devices as disks to a file-system. This sounds straightforward. Note GPFS isn't really intended for JBODs unless you have GNR code.

Simon

From: on behalf of Aaron Turner
Reply-To: "gpfsug-discuss at spectrumscale.org"
Date: Tuesday, 14 May 2019 at 09:47
To: "gpfsug-discuss at spectrumscale.org"
Subject: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1... an-1 and file system B uses physical disks b0, b1...
bn-1 * want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation * Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1 Is this possible in GPFS? Regards _______?_______________________________ Aaron Turner Senior IT Services Specialist in High Performance Computing Loughborough University a.turner at lboro.ac.uk 01509 226185 ______________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From abeattie at au1.ibm.com Tue May 14 10:17:33 2019 From: abeattie at au1.ibm.com (Andrew Beattie) Date: Tue, 14 May 2019 09:17:33 +0000 Subject: [gpfsug-discuss] Identifiable groups of disks? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From A.Turner at lboro.ac.uk Tue May 14 14:13:15 2019 From: A.Turner at lboro.ac.uk (Aaron Turner) Date: Tue, 14 May 2019 13:13:15 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 9 In-Reply-To: References: Message-ID: Thanks, Simon, This is what I thought was the case, and in fact I couldn't see it was not. In reality there -are- JBODs involved, so that was a somewhat hypothetical use case initially. 
Regards

______________________________________
Aaron Turner
Senior IT Services Specialist in High Performance Computing
Loughborough University
a.turner at lboro.ac.uk
01509 226185
______________________________________
________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of gpfsug-discuss-request at spectrumscale.org
Sent: 14 May 2019 12:00
To: gpfsug-discuss at spectrumscale.org
Subject: gpfsug-discuss Digest, Vol 88, Issue 9

Send gpfsug-discuss mailing list submissions to
        gpfsug-discuss at spectrumscale.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://gpfsug.org/mailman/listinfo/gpfsug-discuss
or, via email, send a message with subject or body 'help' to
        gpfsug-discuss-request at spectrumscale.org

You can reach the person managing the list at
        gpfsug-discuss-owner at spectrumscale.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..."

Today's Topics:

   1. Re: Identifiable groups of disks? (Simon Thompson)
   2. Re: Identifiable groups of disks? (Andrew Beattie)

----------------------------------------------------------------------

Message: 1
Date: Tue, 14 May 2019 09:08:28 +0000
From: Simon Thompson
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Identifiable groups of disks?
Message-ID: 
Content-Type: text/plain; charset="utf-8"

When you create the file-system, you create NSD devices (on physical disks - usually LUNs), and then assign these devices as disks to a file-system. This sounds straightforward. Note GPFS isn't really intended for JBODs unless you have GNR code.

Simon

From: on behalf of Aaron Turner
Reply-To: "gpfsug-discuss at spectrumscale.org"
Date: Tuesday, 14 May 2019 at 09:47
To: "gpfsug-discuss at spectrumscale.org"
Subject: [gpfsug-discuss] Identifiable groups of disks?

Scenario:

* one set of JBODs
* want to create two GPFS file systems
* want to ensure that file system A uses physical disks a0, a1...
an-1 and file system B uses physical disks b0, b1... bn-1 * want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation * Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1 Is this possible in GPFS? Regards _______?_______________________________ Aaron Turner Senior IT Services Specialist in High Performance Computing Loughborough University a.turner at lboro.ac.uk 01509 226185 ______________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Tue, 14 May 2019 09:17:33 +0000 From: "Andrew Beattie" To: gpfsug-discuss at spectrumscale.org Cc: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Identifiable groups of disks? Message-ID: Content-Type: text/plain; charset="us-ascii" An HTML attachment was scrubbed... URL: ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 88, Issue 9 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue May 14 18:00:42 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 14 May 2019 13:00:42 -0400 Subject: [gpfsug-discuss] Identifiable groups of disks? In-Reply-To: References: Message-ID: The simple answer is YES. I think the other replies are questioning whether you really want something different or more robust against failures. From: Aaron Turner To: "gpfsug-discuss at spectrumscale.org" Date: 05/14/2019 04:48 AM Subject: [EXTERNAL] [gpfsug-discuss] Identifiable groups of disks? Sent by: gpfsug-discuss-bounces at spectrumscale.org Scenario: one set of JBODS want to create two GPFS file systems want to ensure that file system A uses physical disks a0, a1... 
an-1 and file system B uses physical disks b0, b1... bn-1 want to be able to assign specific sets of disks a0..an-1, b0..bn-1 on creation Potentially allows all disks b0..bn-1 to be destroyed if required whilst not affecting a0..an-1 Is this possible in GPFS? Regards _______?_______________________________ Aaron Turner Senior IT Services Specialist in High Performance Computing Loughborough University a.turner at lboro.ac.uk 01509 226185 ______________________________________ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=cvpnBBH0j41aQy0RPiG2xRL_M8mTc1izuQD3_PmtjZ8&m=OtYY8BVp6eITFG1uShfpYVLZRwNNia-iJUwMXjZyuNc&s=Haef2-lDTRaLo2K-JNaB6xOK9LOgHg8A0Fn6dc6vOMM&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Philipp.Rehs at uni-duesseldorf.de Wed May 15 09:48:19 2019 From: Philipp.Rehs at uni-duesseldorf.de (Rehs, Philipp Helo) Date: Wed, 15 May 2019 08:48:19 +0000 Subject: [gpfsug-discuss] Enforce ACLs Message-ID: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?

Kind regards
Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41 Raum 00.51
Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/x-pkcs7-signature
Size: 7077 bytes
Desc: not available
URL: 

From S.J.Thompson at bham.ac.uk Wed May 15 10:13:30 2019
From: S.J.Thompson at bham.ac.uk (Simon Thompson)
Date: Wed, 15 May 2019 09:13:30 +0000
Subject: [gpfsug-discuss] Enforce ACLs
Message-ID: <8FA1923B-9903-4304-876B-2E492E968C52@bham.ac.uk>

I *think* this behaviour depends on the file set setting ..
Check what "--allow-permission-change" is set to for the file set. I think it needs to be "chmodAndUpdateAcl" Simon ?On 15/05/2019, 09:55, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Philipp.Rehs at uni-duesseldorf.de" wrote: Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? 
Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de From jfosburg at mdanderson.org Wed May 15 11:42:42 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 15 May 2019 10:42:42 +0000 Subject: [gpfsug-discuss] Enforce ACLs In-Reply-To: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> Message-ID: <73495e917ff74131bd0511c166f385fa@mdanderson.org> I'm not 100% sure this is that it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo Sent: Wednesday, May 15, 2019 3:48:19 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Enforce ACLs Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. 
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From MDIETZ at de.ibm.com Wed May 15 12:14:40 2019
From: MDIETZ at de.ibm.com (Mathias Dietz)
Date: Wed, 15 May 2019 13:14:40 +0200
Subject: [gpfsug-discuss] Enforce ACLs
In-Reply-To: <73495e917ff74131bd0511c166f385fa@mdanderson.org>
References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org>
Message-ID: 

Jonathan is mostly right, except that the option is not in mmlsconfig but part of the filesystem configuration (mmlsfs, mmchfs):

# mmlsfs objfs -k
flag                value                    description
------------------- ------------------------ -----------------------------------
 -k                 nfs4                     ACL semantics in effect

Mit freundlichen Grüßen / Kind regards

Mathias Dietz

Spectrum Scale Development - Release Lead Architect (4.2.x)
Spectrum Scale RAS Architect
---------------------------------------------------------------------------
IBM Deutschland
Am Weiher 24
65451 Kelsterbach
Phone: +49 70342744105
Mobile: +49-15152801035
E-Mail: mdietz at de.ibm.com
-----------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

From: "Fosburgh,Jonathan"
To: "gpfsug-discuss at spectrumscale.org"
Date: 15/05/2019 12:52
Subject: Re: [gpfsug-discuss] Enforce ACLs
Sent by: gpfsug-discuss-bounces at
spectrumscale.org I'm not 100% sure this is that it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo Sent: Wednesday, May 15, 2019 3:48:19 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Enforce ACLs Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. I have the following acls on a directory: #NFSv4 ACL #owner:root #group:root special:owner@:rwxc:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED special:group@:r-x-:allow:FileInherit:DirInherit (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED special:everyone@:----:allow:FileInherit:DirInherit (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (- )READ_ACL (-)READ_ATTR (-)READ_NAMED (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (- )WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED user:userABC:rwx-:allow:FileInherit:DirInherit (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (- )WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions. Is there any way to enforce the new permissions from the parent? 
Kind regards Philipp -- Heinrich-Heine-Universit?t D?sseldorf Zentrum f?r Informations- und Medientechnologie Kompetenzzentrum f?r wissenschaftliches Rechnen und Speichern Universit?tsstra?e 1 Geb?ude 25.41 Raum 00.51 Telefon: +49-211-81-15557 Mail: Philipp.Rehs at uni-duesseldorf.de The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws. If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss&d=DwICAg&c=jf_iaSHvJObTbx-siA1ZOg&r=9dCEbNr27klWay2AcOfvOE1xq50K-CyRUu4qQx4HOlk&m=T_hndYqE7LOa07-SB6rtf9IPYJT3XiUhUHcCpwbwduM&s=1Xxw6UtKRGh1T4KLYgawTRpI_E_3jHdYnmAy_1rUSrg&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.buzzard at strath.ac.uk Wed May 15 12:20:21 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Wed, 15 May 2019 12:20:21 +0100 Subject: [gpfsug-discuss] Enforce ACLs In-Reply-To: <73495e917ff74131bd0511c166f385fa@mdanderson.org> References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org> Message-ID: On Wed, 2019-05-15 at 10:42 +0000, Fosburgh,Jonathan wrote: > I'm not 100% sure this is that it is, but it is most likely your ACL > config. 
If you have to use the nfsv4 ACLs, check in mmlsconfig to > make sure you are only using nfsv4 ACLs. I think the options are > posix, nfsv4, and both. I would guess you are set to both. > I would say the same except the options are actually posix, nfsv4, samba and all and covered by mmlsfs,mmchfs not mmlsconfig. JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From jfosburg at mdanderson.org Wed May 15 12:24:31 2019 From: jfosburg at mdanderson.org (Fosburgh,Jonathan) Date: Wed, 15 May 2019 11:24:31 +0000 Subject: [gpfsug-discuss] [EXT] Re: Enforce ACLs In-Reply-To: References: <74970fcf76fd4f8568dd4848b9fe35f3728bfead.camel@uni-duesseldorf.de> <73495e917ff74131bd0511c166f385fa@mdanderson.org>, Message-ID: <43a4cc9e539a4e04b70eadf88c7d5457@mdanderson.org> Not bad for having been awake for only half an hour. ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Mathias Dietz Sent: Wednesday, May 15, 2019 6:14:40 AM To: gpfsug main discussion list Subject: [EXT] Re: [gpfsug-discuss] Enforce ACLs WARNING: This email originated from outside of MD Anderson. Please validate the sender's email address before clicking on links or attachments as they may not be safe. 
Jonathan is mostly right, except that the option is not in mmlsconfig but part of the filesystem configuration (mmlsfs,mmchfs) # mmlsfs objfs -k flag value description ------------------- ------------------------ ----------------------------------- -k nfs4 ACL semantics in effect Mit freundlichen Gr??en / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Gesch?ftsf?hrung: Dirk WittkoppSitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: "Fosburgh,Jonathan" To: "gpfsug-discuss at spectrumscale.org" Date: 15/05/2019 12:52 Subject: Re: [gpfsug-discuss] Enforce ACLs Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ I'm not 100% sure this is that it is, but it is most likely your ACL config. If you have to use the nfsv4 ACLs, check in mmlsconfig to make sure you are only using nfsv4 ACLs. I think the options are posix, nfsv4, and both. I would guess you are set to both. -- Jonathan Fosburgh Principal Application Systems Analyst IT Operations Storage Team The University of Texas MD Anderson Cancer Center (713) 745-9346 ________________________________ From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Rehs, Philipp Helo Sent: Wednesday, May 15, 2019 3:48:19 AM To: gpfsug-discuss at spectrumscale.org Subject: [EXT] [gpfsug-discuss] Enforce ACLs Hello, we are using GPFS 4.2.3 and at the moment we are looking into acls and inheritance. 
I have the following acls on a directory:

#NFSv4 ACL
#owner:root
#group:root
special:owner@:rwxc:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (X)DELETE_CHILD (X)CHOWN (X)EXEC/SEARCH (X)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

special:group@:r-x-:allow:FileInherit:DirInherit
 (X)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

special:everyone@:----:allow:FileInherit:DirInherit
 (-)READ/LIST (-)WRITE/CREATE (-)APPEND/MKDIR (-)SYNCHRONIZE (-)READ_ACL (-)READ_ATTR (-)READ_NAMED
 (-)DELETE (-)DELETE_CHILD (-)CHOWN (-)EXEC/SEARCH (-)WRITE_ACL (-)WRITE_ATTR (-)WRITE_NAMED

user:userABC:rwx-:allow:FileInherit:DirInherit
 (X)READ/LIST (X)WRITE/CREATE (X)APPEND/MKDIR (X)SYNCHRONIZE (X)READ_ACL (X)READ_ATTR (X)READ_NAMED
 (X)DELETE (X)DELETE_CHILD (-)CHOWN (X)EXEC/SEARCH (-)WRITE_ACL (X)WRITE_ATTR (X)WRITE_NAMED

Then the user creates a new folder in this directory and it does not get the same acl but normal unix permissions.

Is there any way to enforce the new permissions from the parent?

Kind regards
Philipp

--
Heinrich-Heine-Universität Düsseldorf
Zentrum für Informations- und Medientechnologie
Kompetenzzentrum für wissenschaftliches Rechnen und Speichern

Universitätsstraße 1
Gebäude 25.41
Raum 00.51
Telefon: +49-211-81-15557
Mail: Philipp.Rehs at uni-duesseldorf.de

The information contained in this e-mail message may be privileged, confidential, and/or protected from disclosure. This e-mail message may contain protected health information (PHI); dissemination of PHI should comply with applicable federal and state laws.
If you are not the intended recipient, or an authorized representative of the intended recipient, any further review, disclosure, use, dissemination, distribution, or copying of this message or any attachment (or the information contained therein) is strictly prohibited. If you think that you have received this e-mail message in error, please notify the sender by return e-mail and delete all references to it and its contents from your systems.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben.nickell at inl.gov  Thu May 16 17:01:21 2019
From: ben.nickell at inl.gov (Ben G. Nickell)
Date: Thu, 16 May 2019 16:01:21 +0000
Subject: [gpfsug-discuss] mmbuild problem
Message-ID:

First time poster, hopefully not a simple RTFM question; I've done some rudimentary googling. I'm not the GPFS guy, but I'm having a problem building Spectrum Scale 5.0.2.0 on SUSE SLES 12 SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software?
uname -a
Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux

./mmbuildgpl --build-package
--------------------------------------------------------
mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019.
--------------------------------------------------------
Verifying Kernel Header...
  kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13)
  module include dir = /lib/modules/4.12.14-95.13-default/build/include
  module build dir   = /lib/modules/4.12.14-95.13-default/build
  kernel source dir  = /usr/src/linux-4.12.14-95.13/include
  Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include
Verifying Compiler...
  make is present at /usr/bin/make
  cpp is present at /usr/bin/cpp
  gcc is present at /usr/bin/gcc
  g++ is present at /usr/bin/g++
  ld is present at /usr/bin/ld
Verifying rpmbuild...
Verifying Additional System Headers...
  Verifying linux-glibc-devel is installed ...
    Command: /bin/rpm -q linux-glibc-devel
    The required package linux-glibc-devel is installed
make World ...
Verifying that tools to build the portability layer exist....
  cpp present
  gcc present
  g++ present
  ld present
cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $?
|| exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in 
cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \
  if [ $? -ne 0 ]; then \
    exit 1;\
  fi
make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
  LD      /usr/lpp/mmfs/src/gpl-linux/built-in.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/tracelin.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/relaytrc.o
  LD [M]  /usr/lpp/mmfs/src/gpl-linux/tracedev.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o
  LD [M]  /usr/lpp/mmfs/src/gpl-linux/mmfs26.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:65:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/inode.c: In function 'printInode':
/usr/lpp/mmfs/src/gpl-linux/inode.c:136:3: error: aggregate value used where an integer was expected
   TRACE5(TRACE_VNODE, 3, TRCID_PRINTINODE_4,
   ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: At top level:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:2800:3: error: unknown type name 'wait_queue_t'
   wait_queue_t qwaiter;
   ^
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: In function 'cxiWaitEventWait':
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3882:3: warning: passing argument 1 of 'init_waitqueue_entry' from incompatible pointer type [enabled by default]
   init_waitqueue_entry(&waitElement.qwaiter, current);
   ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0,
                 from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5,
                 from /usr/lpp/mmfs/src/gpl-linux/dir.c:50,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:78:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
 static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)
                    ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3883:3: warning: passing argument 2 of '__add_wait_queue' from incompatible pointer type [enabled by default]
   __add_wait_queue(&waitElement.qhead, &waitElement.qwaiter);
   ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0,
                 from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5,
                 from /usr/lpp/mmfs/src/gpl-linux/dir.c:50,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:153:20: note: expected 'struct wait_queue_entry *' but argument is of type 'int *'
 static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
                    ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiStartIO':
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2474:13: error: 'struct bio' has no member named 'bi_bdev'
     bioP->bi_bdev = bdevP;
             ^
In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function 'cxiCleanIO':
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:81: error: 'struct bio' has no member named 'bi_bdev'
   _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
                                                                                 ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:395:23: note: in definition of macro '_TRACE_MACRO'
     { _TR_BEFORE; _ktrc; KTRCOPTCODE; _TR_AFTER; } else NOOP
                       ^
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:5: note: in expansion of macro '_TRACE3D'
   _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
   ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:432:26: note: in expansion of macro 'TRACE_TRCID_WAITIO_BDEVP_CALL'
     _TRACE_MACRO(_c, _l, TRACE_##id##_CALL)
                          ^
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2906:7: note: in expansion of macro 'TRACE3'
       TRACE3(TRACE_IO, 6, TRCID_WAITIO_BDEVP,
       ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2915:23: error: 'struct bio' has no member named 'bi_error'
   if (bcP->biop[i]->bi_error)
                       ^
/usr/src/linux-4.12.14-95.13/scripts/Makefile.build:326: recipe for target '/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o' failed
make[5]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1
/usr/src/linux-4.12.14-95.13/Makefile:1557: recipe for target '_module_/usr/lpp/mmfs/src/gpl-linux' failed
make[4]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2
Makefile:152: recipe for target 'sub-make' failed
make[3]: *** [sub-make] Error 2
Makefile:24: recipe for target '__sub-make' failed
make[2]: *** [__sub-make] Error 2
make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
makefile:130: recipe for target 'modules' failed
make[1]: *** [modules] Error 1
make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux'
makefile:148: recipe for target 'Modules' failed
make: *** [Modules] Error 1
--------------------------------------------------------
mmbuildgpl: Building GPL module failed at Thu May 16 09:28:54 MDT 2019.
--------------------------------------------------------
mmbuildgpl: Command failed. Examine previous error messages to determine cause.
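[An aside on the log above: these are kernel-API mismatches, not a broken toolchain. Mainline Linux renamed wait_queue_t to struct wait_queue_entry and dropped bio->bi_bdev / bio->bi_error around 4.13/4.14, and the wait.h notes in the log show this SLES 4.12.14 kernel carries those changes, so a GPL layer that predates them cannot compile. A rough, unofficial pre-build sanity check is to grep the kernel headers for the new name; the sketch below uses a hard-coded sample line standing in for /usr/src/linux-*/include/linux/wait.h so it can run anywhere.]

```shell
# Sample line standing in for the kernel's include/linux/wait.h
# (on a real node: grep the actual header file instead).
sample_wait_h='static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)'
if printf '%s\n' "$sample_wait_h" | grep -q 'wait_queue_entry'; then
  api="post-4.13"   # new wait-queue API present
else
  api="pre-4.13"    # old wait_queue_t API
fi
echo "wait-queue API looks $api; a GPL layer older than the kernel's API will not compile"
```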
-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259 From knop at us.ibm.com Thu May 16 17:12:18 2019 From: knop at us.ibm.com (Felipe Knop) Date: Thu, 16 May 2019 12:12:18 -0400 Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: Message-ID: Ben, According to the FAQ ( https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3 . |-----+-----------+-----------+--------------------+--------------------| | ?12 | | | ?From V4.2.3.13 in | ?From V4.2.3.13 in | | SP4 | 4.12.14-95| 4.12.14-95| the 4.2 release | the 4.2 release | | | .3-default| .3-default| | | | | | | | | | | | | From V5.0.2.3 or | From V5.0.2.3 or | | | | | later in the 5.0 | later in the 5.0 | | | | | release | release | |-----+-----------+-----------+--------------------+--------------------| Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development and Security IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Ben G. Nickell" To: "gpfsug-discuss at spectrumscale.org" Date: 05/16/2019 12:02 PM Subject: [EXTERNAL] [gpfsug-discuss] mmbuild problem Sent by: gpfsug-discuss-bounces at spectrumscale.org First time poster, hopefully not a simple RTFM question, I've done some rudimentary googling. I'm not the GPFS guy, but Having a problem building Spectrum Scale 5.0.2.0 on Suse SLES SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software? uname -a Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux ./mmbuildgpl --build-package -------------------------------------------------------- mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019. -------------------------------------------------------- Verifying Kernel Header... 
kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13) module include dir = /lib/modules/4.12.14-95.13-default/build/include module build dir = /lib/modules/4.12.14-95.13-default/build kernel source dir = /usr/src/linux-4.12.14-95.13/include Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include Verifying Compiler... make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f 
-r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build ARCH=x86_64 M=/usr/lpp/mmfs/src/gpl-linux CONFIGDIR=/usr/lpp/mmfs/src/config ; \
  if [ $? -ne 0 ]; then \
    exit 1;\
  fi
make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
  LD      /usr/lpp/mmfs/src/gpl-linux/built-in.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/tracelin.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/tracedev-ksyms.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/ktrccalls.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/relaytrc.o
  LD [M]  /usr/lpp/mmfs/src/gpl-linux/tracedev.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/mmfsmod.o
  LD [M]  /usr/lpp/mmfs/src/gpl-linux/mmfs26.o
  CC [M]  /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:65:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/inode.c: In function ‘printInode’:
/usr/lpp/mmfs/src/gpl-linux/inode.c:136:3: error: aggregate value used where an integer was expected
   TRACE5(TRACE_VNODE, 3, TRCID_PRINTINODE_4,
   ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: At top level:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:2800:3: error: unknown type name ‘wait_queue_t’
   wait_queue_t qwaiter;
   ^
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c: In function ‘cxiWaitEventWait’:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3882:3: warning: passing argument 1 of ‘init_waitqueue_entry’ from incompatible pointer type [enabled by default]
   init_waitqueue_entry(&waitElement.qwaiter, current);
   ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0,
                 from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5,
                 from /usr/lpp/mmfs/src/gpl-linux/dir.c:50,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:78:20: note: expected ‘struct wait_queue_entry *’ but argument is of type ‘int *’
 static inline void init_waitqueue_entry(struct wait_queue_entry *wq_entry, struct task_struct *p)
 ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:68:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiSystem.c:3883:3: warning: passing argument 2 of ‘__add_wait_queue’ from incompatible pointer type [enabled by default]
   __add_wait_queue(&waitElement.qhead, &waitElement.qwaiter);
   ^
In file included from /usr/src/linux-4.12.14-95.13/include/linux/wait_bit.h:7:0,
                 from /usr/src/linux-4.12.14-95.13/include/linux/fs.h:5,
                 from /usr/lpp/mmfs/src/gpl-linux/dir.c:50,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/src/linux-4.12.14-95.13/include/linux/wait.h:153:20: note: expected ‘struct wait_queue_entry *’ but argument is of type ‘int *’
 static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
 ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function ‘cxiStartIO’:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2474:13: error: ‘struct bio’ has no member named ‘bi_bdev’
     bioP->bi_bdev = bdevP;
     ^
In file included from /usr/lpp/mmfs/src/gpl-linux/dir.c:63:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:60,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c: In function ‘cxiCleanIO’:
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:81: error: ‘struct bio’ has no member named ‘bi_bdev’
   _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
   ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:395:23: note: in definition of macro ‘_TRACE_MACRO’
   { _TR_BEFORE; _ktrc; KTRCOPTCODE; _TR_AFTER; } else NOOP
   ^
/usr/lpp/mmfs/src/gpl-linux/trcid.h:2086:5: note: in expansion of macro ‘_TRACE3D’
   _TRACE3D(_HOOKWORD(TRCID_WAITIO_BDEVP), (Int64)(bdevP), (Int64)(bcP->biop[i]->bi_bdev), (Int64)(bdevP->bd_contains));
   ^
/usr/lpp/mmfs/src/include/cxi/Trace.h:432:26: note: in expansion of macro ‘TRACE_TRCID_WAITIO_BDEVP_CALL’
   _TRACE_MACRO(_c, _l, TRACE_##id##_CALL)
   ^
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2906:7: note: in expansion of macro ‘TRACE3’
       TRACE3(TRACE_IO, 6, TRCID_WAITIO_BDEVP,
       ^
In file included from /usr/lpp/mmfs/src/gpl-linux/cfiles.c:69:0,
                 from /usr/lpp/mmfs/src/gpl-linux/cfiles_cust.c:54:
/usr/lpp/mmfs/src/gpl-linux/cxiIOBuffer.c:2915:23: error: ‘struct bio’ has no member named ‘bi_error’
   if (bcP->biop[i]->bi_error)
   ^
/usr/src/linux-4.12.14-95.13/scripts/Makefile.build:326: recipe for target '/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o' failed
make[5]: *** [/usr/lpp/mmfs/src/gpl-linux/cfiles_cust.o] Error 1
/usr/src/linux-4.12.14-95.13/Makefile:1557: recipe for target '_module_/usr/lpp/mmfs/src/gpl-linux' failed
make[4]: *** [_module_/usr/lpp/mmfs/src/gpl-linux] Error 2
Makefile:152: recipe for target 'sub-make' failed
make[3]: *** [sub-make] Error 2
Makefile:24: recipe for target '__sub-make' failed
make[2]: *** [__sub-make] Error 2
make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default'
makefile:130: recipe for target 'modules' failed
make[1]: *** [modules] Error 1
make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux'
makefile:148: recipe for target 'Modules' failed
make: *** [Modules] Error 1
--------------------------------------------------------
mmbuildgpl: Building GPL module failed at Thu May 16 09:28:54 MDT 2019.
--------------------------------------------------------
mmbuildgpl: Command failed. Examine previous error messages to determine cause.
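The three hard errors in the log above (unknown type ‘wait_queue_t’, and the missing ‘bi_bdev’ and ‘bi_error’ members of struct bio) match upstream kernel API changes from the 4.13/4.14 series; the appearance of linux/wait_bit.h in the include chain shows that SUSE backported those changes into the SLES 12 SP4 4.12.14 kernel, which is why this GPL layer no longer compiles there. A rough way to check which wait-queue API a kernel tree exposes before running mmbuildgpl (a sketch only; the header path layout is assumed, adjust KSRC for your system):

```shell
# Sketch, not an official tool: report which wait-queue API a kernel header
# tree exposes. Older Scale GPL layers (like 5.0.2.0 here) need the pre-4.13
# wait_queue_t typedef; kernels with the 4.13-style API need a newer GPL layer.
check_waitq_api() {
    ksrc="$1"
    # The old typedef that the GPL layer relies on:
    if grep -q 'typedef struct __wait_queue wait_queue_t' \
        "$ksrc/include/linux/wait.h" 2>/dev/null; then
        echo "pre-4.13 wait-queue API (wait_queue_t present)"
    else
        echo "4.13-style wait-queue API (wait_queue_t removed)"
    fi
}

# Path assumed from the build log above; adjust for your system.
check_waitq_api "${KSRC:-/usr/src/linux-4.12.14-95.13}"
```

On a SLES 12 SP4 tree the check should report the 4.13-style API, pointing at the need for a newer Scale GPL layer rather than a kernel-side fix.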
-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259
_______________________________________________
gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From ben.nickell at inl.gov Thu May 16 17:19:54 2019
From: ben.nickell at inl.gov (Ben G. Nickell)
Date: Thu, 16 May 2019 16:19:54 +0000
Subject: [gpfsug-discuss] [EXTERNAL] Re: mmbuild problem
In-Reply-To: References: Message-ID:

Thanks for the quick reply Felipe, and also for pointing me at the FAQ. I found the same. The standard version of 5.0.2.3 built fine. We apparently don't know how to get the advanced version, but I don't think we are using that anyway; I imagine we could figure out how to get it if we do need it. I just sent this a little too soon, sorry for the noise.

-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of Felipe Knop
Sent: Thursday, May 16, 2019 10:12 AM
To: gpfsug main discussion list
Subject: [EXTERNAL] Re: [gpfsug-discuss] mmbuild problem

Ben, According to the FAQ (https://www.ibm.com/support/knowledgecenter/en/STXKQY/gpfsclustersfaq.html) SLES 12 SP4 is only supported starting with Scale V5.0.2.3.
12 SP4   4.12.14-95.3-default (minimum tested kernel)   4.12.14-95.3-default (latest tested kernel)
         From V4.2.3.13 in the 4.2 release / From V5.0.2.3 or later in the 5.0 release (for both kernel levels)

Felipe
----
Felipe Knop knop at us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314

From: "Ben G. Nickell"
To: "gpfsug-discuss at spectrumscale.org"
Date: 05/16/2019 12:02 PM
Subject: [EXTERNAL] [gpfsug-discuss] mmbuild problem
Sent by: gpfsug-discuss-bounces at spectrumscale.org
________________________________

First time poster, hopefully not a simple RTFM question; I've done some rudimentary googling. I'm not the GPFS guy, but I'm having a problem building Spectrum Scale 5.0.2.0 on SUSE SLES 12 SP4. I get the following errors. Any ideas while our GPFS guy tries to get newer software?

uname -a
Linux hostname 4.12.14-95.13-default #1 SMP Fri Mar 22 06:04:58 UTC 2019 (c01bf34) x86_64 x86_64 x86_64 GNU/Linux

./mmbuildgpl --build-package
--------------------------------------------------------
mmbuildgpl: Building GPL module begins at Thu May 16 09:28:50 MDT 2019.
--------------------------------------------------------
Verifying Kernel Header...
  kernel version = 41214095 (41214095013000, 4.12.14-95.13-default, 4.12.14-95.13)
  module include dir = /lib/modules/4.12.14-95.13-default/build/include
  module build dir = /lib/modules/4.12.14-95.13-default/build
  kernel source dir = /usr/src/linux-4.12.14-95.13/include
  Found valid kernel header file under /lib/modules/4.12.14-95.13-default/build/include
Verifying Compiler...
make is present at /usr/bin/make cpp is present at /usr/bin/cpp gcc is present at /usr/bin/gcc g++ is present at /usr/bin/g++ ld is present at /usr/bin/ld Verifying rpmbuild... Verifying Additional System Headers... Verifying linux-glibc-devel is installed ... Command: /bin/rpm -q linux-glibc-devel The required package linux-glibc-devel is installed make World ... Verifying that tools to build the portability layer exist.... cpp present gcc present g++ present ld present cd /usr/lpp/mmfs/src/config; /usr/bin/cpp -P def.mk.proto > ./def.mk; exit $? || exit 1 rm -rf /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib mkdir /usr/lpp/mmfs/src/include /usr/lpp/mmfs/src/bin /usr/lpp/mmfs/src/lib rm -f //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver cleaning (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' rm -f trcid.h ibm_kxi.trclst rm -f install.he; \ for i in cxiTypes.h cxiSystem.h cxi2gpfs.h cxiVFSStats.h cxiCred.h cxiIOBuffer.h cxiSharedSeg.h cxiMode.h Trace.h cxiMmap.h cxiAtomic.h cxiTSFattr.h cxiAclUser.h cxiLinkList.h cxiDmapi.h LockNames.h lxtrace.h cxiGcryptoDefs.h cxiSynchNames.h cxiMiscNames.h DirIds.h; do \ (set -x; rm -f -r /usr/lpp/mmfs/src/include/cxi/$i) done + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiCred.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMode.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/Trace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + rm -f -r 
/usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/LockNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/lxtrace.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + rm -f -r /usr/lpp/mmfs/src/include/cxi/DirIds.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' cleaning (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' rm -f install.he; \ for i in cxiTypes-plat.h cxiSystem-plat.h cxiIOBuffer-plat.h cxiSharedSeg-plat.h cxiMode-plat.h Trace-plat.h cxiAtomic-plat.h cxiMmap-plat.h cxiVFSStats-plat.h cxiCred-plat.h cxiDmapi-plat.h; do \ (set -x; rm -rf /usr/lpp/mmfs/src/include/cxi/$i) done + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + rm -rf /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' cleaning (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... 
/usr/bin/make -C /lib/modules/4.12.14-95.13-default/build M=/usr/lpp/mmfs/src/gpl-linux clean make[2]: Entering directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' make[2]: Leaving directory '/usr/src/linux-4.12.14-95.13-obj/x86_64/default' rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/tracedev.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfslinux.ko rm -f -f /lib/modules/`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver`/extra/mmfs26.ko rm -f -f /usr/lpp/mmfs/src/../bin/lxtrace-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f /usr/lpp/mmfs/src/../bin/kdump-`cat //usr/lpp/mmfs/src/gpl-linux/gpl_kernel.tmp.ver` rm -f -f *.o .depends .*.cmd *.ko *.a *.mod.c core *_shipped *map *mod.c.saved *.symvers *.ko.ver ./*.ver install.he rm -f -rf .tmp_versions kdump-kern-dwarfs.c rm -f -f gpl-linux.trclst kdump lxtrace rm -f -rf usr make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' for i in ibm-kxi ibm-linux gpl-linux ; do \ (cd $i; echo "installing header files" "(`pwd`)"; \ /usr/bin/make DESTDIR=/usr/lpp/mmfs/src Headers; \ exit $?) 
|| exit 1; \ done installing header files (/usr/lpp/mmfs/src/ibm-kxi) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-kxi' Making directory /usr/lpp/mmfs/src/include/cxi + /usr/bin/install cxiTypes.h /usr/lpp/mmfs/src/include/cxi/cxiTypes.h + /usr/bin/install cxiSystem.h /usr/lpp/mmfs/src/include/cxi/cxiSystem.h + /usr/bin/install cxi2gpfs.h /usr/lpp/mmfs/src/include/cxi/cxi2gpfs.h + /usr/bin/install cxiVFSStats.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats.h + /usr/bin/install cxiCred.h /usr/lpp/mmfs/src/include/cxi/cxiCred.h + /usr/bin/install cxiIOBuffer.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer.h + /usr/bin/install cxiSharedSeg.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg.h + /usr/bin/install cxiMode.h /usr/lpp/mmfs/src/include/cxi/cxiMode.h + /usr/bin/install Trace.h /usr/lpp/mmfs/src/include/cxi/Trace.h + /usr/bin/install cxiMmap.h /usr/lpp/mmfs/src/include/cxi/cxiMmap.h + /usr/bin/install cxiAtomic.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic.h + /usr/bin/install cxiTSFattr.h /usr/lpp/mmfs/src/include/cxi/cxiTSFattr.h + /usr/bin/install cxiAclUser.h /usr/lpp/mmfs/src/include/cxi/cxiAclUser.h + /usr/bin/install cxiLinkList.h /usr/lpp/mmfs/src/include/cxi/cxiLinkList.h + /usr/bin/install cxiDmapi.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi.h + /usr/bin/install LockNames.h /usr/lpp/mmfs/src/include/cxi/LockNames.h + /usr/bin/install lxtrace.h /usr/lpp/mmfs/src/include/cxi/lxtrace.h + /usr/bin/install cxiGcryptoDefs.h /usr/lpp/mmfs/src/include/cxi/cxiGcryptoDefs.h + /usr/bin/install cxiSynchNames.h /usr/lpp/mmfs/src/include/cxi/cxiSynchNames.h + /usr/bin/install cxiMiscNames.h /usr/lpp/mmfs/src/include/cxi/cxiMiscNames.h + /usr/bin/install DirIds.h /usr/lpp/mmfs/src/include/cxi/DirIds.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-kxi' installing header files (/usr/lpp/mmfs/src/ibm-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/ibm-linux' + /usr/bin/install cxiTypes-plat.h /usr/lpp/mmfs/src/include/cxi/cxiTypes-plat.h + /usr/bin/install 
cxiSystem-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSystem-plat.h + /usr/bin/install cxiIOBuffer-plat.h /usr/lpp/mmfs/src/include/cxi/cxiIOBuffer-plat.h + /usr/bin/install cxiSharedSeg-plat.h /usr/lpp/mmfs/src/include/cxi/cxiSharedSeg-plat.h + /usr/bin/install cxiMode-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMode-plat.h + /usr/bin/install Trace-plat.h /usr/lpp/mmfs/src/include/cxi/Trace-plat.h + /usr/bin/install cxiAtomic-plat.h /usr/lpp/mmfs/src/include/cxi/cxiAtomic-plat.h + /usr/bin/install cxiMmap-plat.h /usr/lpp/mmfs/src/include/cxi/cxiMmap-plat.h + /usr/bin/install cxiVFSStats-plat.h /usr/lpp/mmfs/src/include/cxi/cxiVFSStats-plat.h + /usr/bin/install cxiCred-plat.h /usr/lpp/mmfs/src/include/cxi/cxiCred-plat.h + /usr/bin/install cxiDmapi-plat.h /usr/lpp/mmfs/src/include/cxi/cxiDmapi-plat.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/ibm-linux' installing header files (/usr/lpp/mmfs/src/gpl-linux) make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Making directory /usr/lpp/mmfs/src/include/gpl-linux + /usr/bin/install Shark-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Shark-gpl.h + /usr/bin/install prelinux.h /usr/lpp/mmfs/src/include/gpl-linux/prelinux.h + /usr/bin/install postlinux.h /usr/lpp/mmfs/src/include/gpl-linux/postlinux.h + /usr/bin/install linux2gpfs.h /usr/lpp/mmfs/src/include/gpl-linux/linux2gpfs.h + /usr/bin/install verdep.h /usr/lpp/mmfs/src/include/gpl-linux/verdep.h + /usr/bin/install Logger-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/Logger-gpl.h + /usr/bin/install arch-gpl.h /usr/lpp/mmfs/src/include/gpl-linux/arch-gpl.h + /usr/bin/install oplock.h /usr/lpp/mmfs/src/include/gpl-linux/oplock.h touch install.he make[1]: Leaving directory '/usr/lpp/mmfs/src/gpl-linux' make[1]: Entering directory '/usr/lpp/mmfs/src/gpl-linux' Pre-kbuild step 1... Pre-kbuild step 2... touch install.he Invoking Kbuild... 
[... compiler output snipped; identical to the build log quoted earlier in the thread ...]
-- Ben Nickell ----- Idaho National Laboratory High Performance Computing System Administrator Desk: 208-526-4251 Mobile: 208-317-4259
_______________________________________________
gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From anobre at br.ibm.com Thu May 16 17:36:35 2019
From: anobre at br.ibm.com (Anderson Ferreira Nobre)
Date: Thu, 16 May 2019 16:36:35 +0000
Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: , , Message-ID:
An HTML attachment was scrubbed...

From lgayne at us.ibm.com Thu May 16 18:05:48 2019
From: lgayne at us.ibm.com (Lyle Gayne)
Date: Thu, 16 May 2019 17:05:48 +0000
Subject: [gpfsug-discuss] mmbuild problem In-Reply-To: References: , , , Message-ID:
An HTML attachment was scrubbed...

From brianbur at us.ibm.com Fri May 17 16:24:52 2019
From: brianbur at us.ibm.com (Brian Burnette)
Date: Fri, 17 May 2019 15:24:52 +0000
Subject: [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research Message-ID:
An HTML attachment was scrubbed...

From sadaniel at us.ibm.com Fri May 17 16:37:42 2019
From: sadaniel at us.ibm.com (Steven Daniels)
Date: Fri, 17 May 2019 15:37:42 +0000
Subject: [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research In-Reply-To: References: Message-ID:

Brian, We have a number of government clients that have to seek a waiver for each and every Spectrum Scale installation because of the root password-less ssh requirements.
The sudo wrappers help, but not really. My clients would all like to see the ssh requirement go away, and they also need to comply with Nessus scans. Different agencies may have custom scan profiles, but even passing the standard ones is a good step. I have been discussing this internally with the development team for years.

Thanks, Steve

Steven A. Daniels
Cross-brand Client Architect Senior Certified IT Specialist National Programs
Fax and Voice: 3038101229
sadaniel at us.ibm.com
http://www.ibm.com

From: "Brian Burnette"
To: gpfsug-discuss at spectrumscale.org
Date: 05/17/2019 09:25 AM
Subject: [EXTERNAL] [gpfsug-discuss] IBM Spectrum Scale Non-root Admin Research
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hey there Spectrum Scale Users,

Are you interested in allowing members of your team to administer parts or all of your Spectrum Scale clusters without the power of root access? Chances are your answer is somewhere between "Yes" and "Definitely, yes, yes, yes!" If so, the Scale Research team would love to sit down with you to better understand the problems you're trying to solve with non-root access and possibly work with you over the coming months to design concepts and prototypes of different solutions. Just reply back and we'll work with you to schedule a time to chat. If you have any other comments, questions, or concerns feel free to let us know.

Look forward to talking with you soon

Brian Burnette
IBM Systems - Spectrum Scale and Discover
E-mail: brianbur at us.ibm.com
_______________________________________________
gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From l.walid at powerm.ma Sun May 19 05:14:05 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 04:14:05 +0000
Subject: [gpfsug-discuss] Introduction Message-ID:

Hi, I'm Largou Walid, Technical Architect for Power Maroc, a Platinum Business Partner; we specialize in IBM products (hardware & software). I've been using Spectrum Scale for about two years now. We have an upcoming HPC project for the local Weather Company with an amazing 120 Spectrum Scale nodes (10,000 CPUs). I've also worked on CES services, and on AFM DR for one of our customers. I'm from Casablanca, Morocco, glad to be part of the community.

--
Best regards,
Walid Largou
Senior IT Specialist
Power Maroc
Mobile: +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL:

From l.walid at powerm.ma Sun May 19 20:30:06 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Sun, 19 May 2019 19:30:06 +0000
Subject: [gpfsug-discuss] Active Directory Authentification Message-ID:

Hi, I'm planning to integrate Active Directory with our Spectrum Scale cluster, but it seems I'm missing something. Please note that I have two protocol nodes with only the SMB service running, on Spectrum Scale 5.0.3.0 (the latest version). I've tried both ways from the GUI: connecting to Active Directory, and connecting to LDAP.

Connect to LDAP:

mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma'

7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
7:26 PM Detailed message: smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it.
7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
7:26 PM WARNING: Could not open passdb
7:26 PM File authentication configuration failed.
7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:26 PM Operation Failed
7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server
Detailed message: smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL
pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain.
We cannot work reliably without it.
pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
WARNING: Could not open passdb
File authentication configuration failed.
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

Connect to Active Directory:

mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P@ssword)'

7:29 PM mmuserauth service create: Invalid parameter passed for --ldapmap-domain
7:29 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:29 PM Operation Failed
7:29 PM Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

--
Best regards,
Walid Largou
Senior IT Specialist
Power Maroc
Mobile: +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From will.schmied at stjude.org Mon May 20 00:24:15 2019 From: will.schmied at stjude.org (Schmied, Will) Date: Sun, 19 May 2019 23:24:15 +0000 Subject: [gpfsug-discuss] Active Directory Authentification In-Reply-To: References: Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826@stjude.org> Hi Walid, Without knowing any specifics of your environment, the below command is what I have used, successfully across multiple clusters at 4.2.x. The binding account you specify needs to be able to add computers to the domain. mmuserauth service create --data-access-method file --type ad --servers some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master --netbios-name some_ad_computer_name --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" 10000-9999999 is the acceptable range of UID / GID for AD accounts. Thanks, Will From: on behalf of "L.walid (PowerM)" Reply-To: gpfsug main discussion list Date: Sunday, May 19, 2019 at 14:30 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Active Directory Authentification Caution: External Sender Hi, I'm planning to integrate Active Directory with our Spectrum Scale, but it seems i'm missing out something, please note that i'm on a 2 protocol nodes with only service SMB running Spectrum Scale 5.0.3.0 (latest version). I've tried from the gui the two ways, connect to Active Directory, and the other to LDAP. 
Connect to LDAP : mmuserauth service create --data-access-method 'file' --type 'LDAP' --servers 'powermdomain.powerm.ma:389' --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn 'cn=users,dc=powerm,dc=ma' 7:26 PM Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server 7:26 PM Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL 7:26 PM pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. 7:26 PM pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) 7:26 PM WARNING: Could not open passdb 7:26 PM File authentication configuration failed. 7:26 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause. 7:26 PM Operation Failed 7:26 PM Error: Either failed to create a samba domain entry on LDAP server if not present or could not read the already existing samba domain entry from the LDAP server Detailed message:smbldap_search_domain_info: Adding domain info for SCALECES failed with NT_STATUS_UNSUCCESSFUL pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the domain. We cannot work reliably without it. pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) WARNING: Could not open passdb File authentication configuration failed. mmuserauth service create: Command failed. Examine previous error messages to determine cause. 
Connect to Active Directory :

mmuserauth service create --data-access-method 'file' --type 'AD' --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)'

7:29 PM mmuserauth service create: Invalid parameter passed for --ldapmap-domain
7:29 PM mmuserauth service create: Command failed. Examine previous error messages to determine cause.
7:29 PM Operation Failed
7:29 PM Error: mmuserauth service create: Invalid parameter passed for --ldapmap-domain
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any authorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.

________________________________

Email Disclaimer: www.stjude.org/emaildisclaimer
Consultation Disclaimer: www.stjude.org/consultationdisclaimer
-------------- next part --------------
An HTML attachment was scrubbed...
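The "Invalid parameter passed for --ldapmap-domain" failure above has at least one visible cause in the command itself: the range value -9000000000000000-4294967296 begins with a minus sign, so the lower bound is negative and far outside any usable UID space. A minimal sketch of a LOW-HIGH sanity check follows, assuming (not verified against mmuserauth's actual parser) that the range must be two non-negative integers:

```shell
# Loose glob check that a range string has the form LOW-HIGH with
# non-negative integer bounds. The value from the failing command begins
# with '-', so it does not match; the working range from later in the
# thread does.
check_range() {
  case "$1" in
    [0-9]*-[0-9]*) echo "range looks valid: $1" ;;
    *)             echo "range is malformed: $1" ;;
  esac
}
check_range '-9000000000000000-4294967296'   # -> range is malformed: ...
check_range '10000-9999999'                  # -> range looks valid: ...
```

This is only a shape check, not a full validation (it ignores LOW <= HIGH and overall size limits), but it already flags the value used here.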
URL: From l.walid at powerm.ma Mon May 20 00:39:31 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Sun, 19 May 2019 23:39:31 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 In-Reply-To: References: Message-ID: Hi, Thanks for the feedback, i have tried the suggested command :

mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password:
Invalid credentials specified for the server powermdomain.powerm.ma
mmuserauth service create: Command failed. Examine previous error messages to determine cause.

[root at scale1 ~]# mmuserauth service create --data-access-method file --type ad --servers powermdomain.powerm.ma --user-name walid --idmap-role master --netbios-name scaleces --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)"
Enter Active Directory User 'walid' password:
Invalid credentials specified for the server powermdomain.powerm.ma
mmuserauth service create: Command failed. Examine previous error messages to determine cause.
I tried both the domain qualifier and the plain user in the --user-name parameter, but I get Invalid Credentials (knowing that walid is an Administrator in Active Directory).

[root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" "(sAMAccountName=walid)"
Enter LDAP Password:
# extended LDIF
#
# LDAPv3
# base with scope subtree
# filter: (sAMAccountName=walid)
# requesting: ALL
#

# Walid, Users, powerm.ma
dn: CN=Walid,CN=Users,DC=powerm,DC=ma
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
cn: Walid
sn: Largou
givenName: Walid
distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma
instanceType: 4
whenCreated: 20190518224649.0Z
whenChanged: 20190520001645.0Z
uSNCreated: 12751
memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma
uSNChanged: 16404
name: Walid
objectGUID:: Le4tH38qy0SfcxaroNGPEg==
userAccountControl: 512
badPwdCount: 0
codePage: 0
countryCode: 0
badPasswordTime: 132028055547447029
lastLogoff: 0
lastLogon: 132028055940741392
pwdLastSet: 132026934129698743
primaryGroupID: 513
objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA==
adminCount: 1
accountExpires: 9223372036854775807
logonCount: 0
sAMAccountName: walid
sAMAccountType: 805306368
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma
dSCorePropagationData: 20190518225159.0Z
dSCorePropagationData: 16010101000000.0Z
lastLogonTimestamp: 132027850050695698

# search reference
ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma

# search reference
ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma

# search reference
ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma

# search result
search: 2
result: 0 Success

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any authorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From will.schmied at stjude.org Mon May 20 02:45:57 2019 From: will.schmied at stjude.org (Schmied, Will) Date: Mon, 20 May 2019 01:45:57 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 In-Reply-To: References: Message-ID: Well, not seeing anything odd about the second try (just the username only), except that your NETBIOS domain name needs to be put in place of the placeholder (DOMAIN_NETBIOS_NAME). You can copy from a text file and then paste into the stdin when the command asks for your password. Just a way to be sure no typos are in the password entry.
Thanks, Will
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From par at nl.ibm.com Mon May 20 15:45:11 2019 From: par at nl.ibm.com (Par Hettinga-Ayakannu) Date: Mon, 20 May 2019 16:45:11 +0200 Subject: [gpfsug-discuss] Introduction In-Reply-To: References: Message-ID: Hi Largou, Welcome to the community, glad you joined. Best Regards, Par Hettinga, Global SDI Sales Enablement Leader Storage and Software Defined Infrastructure, IBM Systems Tel:+31(0)20-5132194 Mobile:+31(0)6-53359940 email:par at nl.ibm.com From: "L.walid (PowerM)" To: gpfsug-discuss at spectrumscale.org Date: 19/05/2019 06:14 Subject: [gpfsug-discuss] Introduction Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi, I'm Largou Walid, Technical Architect for Power Maroc, Platinum Business Partner, we specialize in IBM Products (Hardware & Software). I've been using Spectrum Scale for about two years now, we have an upcoming project for HPC for the local Weather Company with an amazing 120 Spectrum Scale Nodes (10,000 CPUs), I've worked on CES Services also, and AFM DR for one of our customers.
I'm from Casablanca, Morocco, glad to be part of the community. -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any authorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. [attachment "PastedGraphic-2.png" deleted by Par Hettinga-Ayakannu/Netherlands/IBM] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Tenzij hierboven anders aangegeven: / Unless stated otherwise above: IBM Nederland B.V. Gevestigd te Amsterdam Inschrijving Handelsregister Amsterdam Nr. 33054214 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From l.walid at powerm.ma Mon May 20 16:36:08 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Mon, 20 May 2019 15:36:08 +0000 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21 In-Reply-To: References: Message-ID: Hi, I managed to make the command work (basically by checking /etc/resolv.conf, /etc/hosts, /etc/nsswitch.conf):

[root at scale1 committed]# mmuserauth service create --data-access-method file --type ad --servers X.X.X.X --user-name MYUSER --idmap-role master --netbios-name CESSCALE --unixmap-domains "MYDOMAIN(10000-9999999)"
Enter Active Directory User 'spectrum_scale' password:
File authentication configuration completed successfully.

[root at scale1 committed]# mmuserauth service check
Userauth file check on node: scale1
Checking nsswitch file: OK
Checking Pre-requisite Packages: OK
Checking SRV Records lookup: OK
Service 'gpfs-winbind' status: OK
Object not configured

[root at scale1 committed]# mmuserauth service check --server-reachability
Userauth file check on node: scale1
Checking nsswitch file: OK
Checking Pre-requisite Packages: OK
Checking SRV Records lookup: OK
Domain Controller status
NETLOGON connection: OK, connection to DC: xxxx
Domain join status: OK
Machine password status: OK
Service 'gpfs-winbind' status: OK
Object not configured

But unfortunately, even though all the commands report OK, I cannot use a user from Active Directory as an owner or to set up ACLs on SMB shares (it doesn't recognise AD users), and the command 'id DOMAIN\USER' reports that it cannot find the user. Any ideas ?
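One possible explanation, offered as an assumption based on the documented UNIXMAP behaviour rather than a confirmed diagnosis: with --unixmap-domains, winbind only maps accounts whose RFC2307 uidNumber/gidNumber attributes fall inside the configured range, and the ldapsearch output earlier in the thread shows no uidNumber on the account at all. A sketch of that check:

```shell
# Check an ldapsearch excerpt for a uidNumber inside the unixmap range.
# The excerpt mirrors the earlier output for CN=Walid: it carries no
# uidNumber, which would leave 'id DOMAIN\USER' with nothing to resolve.
ldif='dn: CN=Walid,CN=Users,DC=powerm,DC=ma
sAMAccountName: walid
primaryGroupID: 513'
range='10000-9999999'
lo=${range%-*}
hi=${range#*-}
uid=$(printf '%s\n' "$ldif" | sed -n 's/^uidNumber: //p')
if [ -z "$uid" ]; then
  echo "no uidNumber attribute: nothing for winbind to map"
elif [ "$uid" -ge "$lo" ] && [ "$uid" -le "$hi" ]; then
  echo "uidNumber $uid is inside $range"
else
  echo "uidNumber $uid is outside $range"
fi
# -> no uidNumber attribute: nothing for winbind to map
```

If that is the cause, populating uidNumber/gidNumber for the AD users and groups (within 10000-9999999), or reconfiguring with automatic ID mapping instead of --unixmap-domains, would be the things to try.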
On Mon, 20 May 2019 at 01:46, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at spectrumscale.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at spectrumscale.org > > You can reach the person managing the list at > gpfsug-discuss-owner at spectrumscale.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: gpfsug-discuss Digest, Vol 88, Issue 19 (Schmied, Will) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 20 May 2019 01:45:57 +0000 > From: "Schmied, Will" > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 > Message-ID: > Content-Type: text/plain; charset="utf-8" > > ?Well not seeing anything odd about the second try (just the username > only) except that your NETBIOS domain name needs to be put in place of the > placeholder (DOMAIN_NETBIOS_NAME). > > You can copy from a text file and then paste into the stdin when the > command asks for your password. Just a way to be sure no typos are in the > password entry. 
> > > > Thanks, > Will > > > From: on behalf of "L.walid > (PowerM)" > Reply-To: gpfsug main discussion list > Date: Sunday, May 19, 2019 at 18:39 > To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 > > Caution: External Sender > > Hi, > > Thanks for the feedback, i have tried the suggested command : > > mmuserauth service create --data-access-method file --type ad --servers > powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master > --netbios-name scaleces --unixmap-domains > "DOMAIN_NETBIOS_NAME(10000-9999999)" > Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password: > Invalid credentials specified for the server powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0 > > > mmuserauth service create: Command failed. Examine previous error messages > to determine cause. 
> > > > [root at scale1 ~]# mmuserauth service create --data-access-method file > --type ad --servers powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0> > --user-name walid --idmap-role master --netbios-name scaleces > --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" > Enter Active Directory User 'walid' password: > Invalid credentials specified for the server powermdomain.powerm.ma< > https://nam03.safelinks.protection.outlook.com/?url=http%3A%2F%2Fpowermdomain.powerm.ma&data=01%7C01%7Cwill.schmied%40stjude.org%7Cd2f49b0330c843ce107208d6dcb347c0%7C22340fa892264871b677d3b3e377af72%7C0&sdata=e550J4Mi%2FuxvD%2Bn2KXAyFsN4NQdiSykTBy0DMMfrHqo%3D&reserved=0 > > > mmuserauth service create: Command failed. Examine previous error messages > to determine cause. 
> I tried both the domain-qualified name and the plain user name in the
> --user-name parameter, but I get Invalid Credentials (knowing that walid
> is an Administrator in Active Directory).
>
> [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma -x -W -D
> "walid at powerm.ma" -b "dc=powerm,dc=ma" "(sAMAccountName=walid)"
> Enter LDAP Password:
> # extended LDIF
> #
> # LDAPv3
> # base with scope subtree
> # filter: (sAMAccountName=walid)
> # requesting: ALL
> #
>
> # Walid, Users, powerm.ma
> dn: CN=Walid,CN=Users,DC=powerm,DC=ma
> objectClass: top
> objectClass: person
> objectClass: organizationalPerson
> objectClass: user
> cn: Walid
> sn: Largou
> givenName: Walid
> distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma
> instanceType: 4
> whenCreated: 20190518224649.0Z
> whenChanged: 20190520001645.0Z
> uSNCreated: 12751
> memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma
> uSNChanged: 16404
> name: Walid
> objectGUID:: Le4tH38qy0SfcxaroNGPEg==
> userAccountControl: 512
> badPwdCount: 0
> codePage: 0
> countryCode: 0
> badPasswordTime: 132028055547447029
> lastLogoff: 0
> lastLogon: 132028055940741392
> pwdLastSet: 132026934129698743
> primaryGroupID: 513
> objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA==
> adminCount: 1
> accountExpires: 9223372036854775807
> logonCount: 0
> sAMAccountName: walid
> sAMAccountType: 805306368
> objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma
> dSCorePropagationData: 20190518225159.0Z
> dSCorePropagationData: 16010101000000.0Z
> lastLogonTimestamp: 132027850050695698
>
> # search reference
> ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma
>
> # search reference
> ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma
>
> # search reference
> ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma
>
> # search result
> search: 2
> result: 0 Success
>
>
> On Sun, 19 May 2019 at 23:31, wrote:
> Send gpfsug-discuss mailing list submissions to
> gpfsug-discuss at spectrumscale.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> or, via email, send a message with
subject or body 'help' to
> gpfsug-discuss-request at spectrumscale.org
>
> You can reach the person managing the list at
> gpfsug-discuss-owner at spectrumscale.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gpfsug-discuss digest..."
>
>
> Today's Topics:
>
>    1. Re: Active Directory Authentification (Schmied, Will)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 19 May 2019 23:24:15 +0000
> From: "Schmied, Will" <will.schmied at stjude.org>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: Re: [gpfsug-discuss] Active Directory Authentification
> Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826 at stjude.org>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Walid,
>
> Without knowing any specifics of your environment, the command below is
> what I have used successfully across multiple clusters at 4.2.x. The
> binding account you specify needs to be able to add computers to the
> domain.
>
> mmuserauth service create --data-access-method file --type ad --servers
> some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master
> --netbios-name some_ad_computer_name --unixmap-domains
> "DOMAIN_NETBIOS_NAME(10000-9999999)"
>
> 10000-9999999 is the acceptable range of UID / GID for AD accounts.
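
[Editor's note: to make the range in the command above concrete, here is a rough sketch of the idmap_rid-style arithmetic that an option like --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" implies: each AD account's RID is offset into the configured UID/GID window. The low + rid formula and the function name are illustrative assumptions, not Scale's documented internal algorithm.]

```python
# Illustrative only: an idmap_rid-style mapping for a range configured as
# --unixmap-domains "DOMAIN(10000-9999999)".  The 'low + rid' formula is an
# assumption for the sake of the example, not Scale's documented algorithm.

def unixmap_uid(rid, low=10000, high=9999999):
    """Offset an AD RID into the configured UID/GID window."""
    uid = low + rid
    if not low <= uid <= high:
        raise ValueError(f"RID {rid} does not fit in the range {low}-{high}")
    return uid

# RID 1103 is a plausible RID for an early ordinary user in a fresh domain.
print(unixmap_uid(1103))  # 11103
```

Under that reading, a range of 10000-9999999 leaves room for roughly ten million RIDs per mapped domain.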
> Thanks,
> Will
>
>
> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "L.walid (PowerM)"
> Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: Sunday, May 19, 2019 at 14:30
> To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
> Subject: [gpfsug-discuss] Active Directory Authentification
>
> Caution: External Sender
>
> Hi,
>
> I'm planning to integrate Active Directory with our Spectrum Scale
> cluster, but it seems I'm missing something. Please note that I'm on two
> protocol nodes running only the SMB service, on Spectrum Scale 5.0.3.0
> (latest version). I've tried both ways from the GUI: connect to Active
> Directory, and connect to LDAP.
>
> Connect to LDAP:
> mmuserauth service create --data-access-method 'file' --type 'LDAP'
> --servers 'powermdomain.powerm.ma:389'
> --user-name 'cn=walid,cn=users,dc=powerm,dc=ma'
> --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn
> 'cn=users,dc=powerm,dc=ma'
> 7:26 PM
> Either failed to create a samba domain entry on LDAP server if not present
> or could not read the already existing samba domain entry from the LDAP
> server
> 7:26 PM
> Detailed message:smbldap_search_domain_info: Adding domain info for
> SCALECES failed with NT_STATUS_UNSUCCESSFUL
> 7:26 PM
> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the
> domain. We cannot work reliably without it.
> 7:26 PM
> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389"
> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
> 7:26 PM
> WARNING: Could not open passdb
> 7:26 PM
> File authentication configuration failed.
> 7:26 PM
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> 7:26 PM
> Operation Failed
> 7:26 PM
> Error: Either failed to create a samba domain entry on LDAP server if not
> present or could not read the already existing samba domain entry from the
> LDAP server
> Detailed message:smbldap_search_domain_info: Adding domain info for
> SCALECES failed with NT_STATUS_UNSUCCESSFUL
> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the
> domain. We cannot work reliably without it.
> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389"
> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO)
> WARNING: Could not open passdb
> File authentication configuration failed.
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
>
> Connect to Active Directory:
> mmuserauth service create --data-access-method 'file' --type 'AD'
> --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt'
> --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains
> 'powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword)'
> 7:29 PM
> mmuserauth service create: Invalid parameter passed for --ldapmap-domain
> 7:29 PM
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
> 7:29 PM
> Operation Failed
> 7:29 PM
> Error: mmuserauth service create: Invalid parameter passed for
> --ldapmap-domain
> mmuserauth service create: Command failed. Examine previous error messages
> to determine cause.
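
[Editor's note: the "Invalid parameter passed for --ldapmap-domain" failure above is easy to hit when the long domain string picks up stray whitespace or line breaks from a wrapped paste, as appears to have happened to "stand-alone" in the original command. A quick pre-flight check of the string's shape can catch that before calling mmuserauth. The grammar below is inferred from this one example, so consult the mmuserauth man page for the authoritative syntax.]

```python
import re

# Rough pre-flight validator for the --ldapmap-domains argument.  The
# accepted shape, DOMAIN(key=value:key=value:...), is inferred from the
# single example in this thread; check 'man mmuserauth' for the real
# grammar before relying on this.

def parse_ldapmap_domains(arg):
    arg = arg.strip()
    if any(ch in arg for ch in "\n\r\t"):
        raise ValueError("argument contains line breaks or tabs (wrapped paste?)")
    m = re.fullmatch(r"([^\s()]+)\(([^()]*)\)", arg)
    if m is None:
        raise ValueError("expected DOMAIN(key=value:key=value:...)")
    domain, body = m.groups()
    fields = {}
    for part in body.split(":"):
        # partition at the first '=' so DN values like cn=users,dc=... survive
        key, sep, value = part.partition("=")
        if not sep or not key:
            raise ValueError(f"malformed field: {part!r}")
        fields[key] = value
    return domain, fields

domain, fields = parse_ldapmap_domains(
    "powerm.ma(type=stand-alone:ldap_srv=192.168.56.5:"
    "usr_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma)")
print(domain, fields["ldap_srv"])  # powerm.ma 192.168.56.5
```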
> --
> Best regards,
>
> Walid Largou
> Senior IT Specialist
> Power Maroc
> Mobile : +212 621 31 98 71
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
> https://www.powerm.ma
>
> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA]
> This message is confidential. Its contents do not constitute a commitment
> by Power Maroc S.A.R.L except where provided for in a written agreement
> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
> dissemination, either whole or partial, is prohibited. If you are not the
> intended recipient of the message, please notify the sender immediately.
>
> ________________________________
>
> Email Disclaimer: www.stjude.org/emaildisclaimer
> Consultation Disclaimer: www.stjude.org/consultationdisclaimer
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190519/9b579ecf/attachment.html>
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 88, Issue 19
> **********************************************
>
>
> --
> Best regards,
>
> Walid Largou
> Senior IT Specialist
> Power Maroc
> Mobile : +212 621 31 98 71
> Email: l.walid at powerm.ma
> 320 Bd Zertouni 6th Floor, Casablanca, Morocco
> https://www.powerm.ma
>
> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA]
> This message is confidential. Its contents do not constitute a commitment
> by Power Maroc S.A.R.L except where provided for in a written agreement
> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or
> dissemination, either whole or partial, is prohibited. If you are not the
> intended recipient of the message, please notify the sender immediately.
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190520/92f25565/attachment.html>
>
> ------------------------------
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> End of gpfsug-discuss Digest, Vol 88, Issue 21
> **********************************************

--
Best regards,

Walid Largou
Senior IT Specialist
Power Maroc
Mobile : +212 621 31 98 71
Email: l.walid at powerm.ma
320 Bd Zertouni 6th Floor, Casablanca, Morocco
https://www.powerm.ma

This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-2.png
Type: image/png
Size: 10214 bytes
Desc: not available
URL: 
From christof.schmitt at us.ibm.com Mon May 20 19:51:46 2019
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Mon, 20 May 2019 18:51:46 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To: 
References: ,
Message-ID: 

An HTML attachment was scrubbed...
URL: From truston at mbari.org Mon May 20 21:05:53 2019 From: truston at mbari.org (Todd Ruston) Date: Mon, 20 May 2019 13:05:53 -0700 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question Message-ID: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Greetings all, First post here, so by way of introduction we are a fairly new Spectrum Scale and Archive customer (installed last year and live in production Q1 this year). We have a four node (plus EMS) ESS system with ~520TB of mixed spinning disk and SSD. Client access to the system is via CES (NFS and SMB, running on two protocol nodes), integrated with Active Directory, for a mixed population of Windows, Mac, and Linux clients. A separate pair of nodes run Spectrum Archive, with a TS4500 LTO-8 library behind them. We use the system for general institute data, with the largest data types being HD video, multibeam sonar, and hydrophone data. Video is the currently active data type in production; we will be migrating the rest over time. So far things are running pretty well. Our archive approach is to premigrate data, particularly the large, unchanging data like the above mentioned data types, almost immediately upon landing in the system. Then we migrate those that have not been accessed in a period of time (or manually if space demands require it). We do wish to allow users to recall archived data on demand as needed. Because we have a large contingent of Mac clients (accessing the system via SMB), one issue we want to get ahead of is inadvertent recalls triggered by Mac preview generation, Quick Look, Cover Flow/Gallery view, and the like. Going in we knew this was going to be something we'd need to address, and we anticipated being able to configure Finder to disable preview generation and train users to avoid Quick Look unless they intended to trigger a recall. 
In our testing however, even with those features disabled/avoided, we have seen Mac clients trigger inadvertent recalls just from CLI 'ls -lshrt' interactions with the system.

While brainstorming ways to prevent these inadvertent recalls while still allowing users to initiate recalls on their own when needed, one thought that came to us is we might be able to turn off recalls via SMB (set gpfs:recalls = no via mmsmb), and create a simple self-service web portal that would allow users to browse the Scale file system with a web browser, select files for recall, and initiate the recall from there. The web interface could run on one of the Archive nodes, and the back end of it would simply send a list of selected file paths to ltfsee recall.

Before possibly reinventing the wheel, I thought I'd check to see if something like this may already exist, either from IBM, the Scale user community, or a third-party/open source tool that could be leveraged for the purpose. I searched the list archive and didn't find anything, but please let me know if I missed something. And please let me know if you know of something that would fit this need, or other ideas as well.

Cheers,

--
Todd E. Ruston
Information Systems Manager
Monterey Bay Aquarium Research Institute (MBARI)
7700 Sandholdt Road, Moss Landing, CA, 95039
Phone 831-775-1997 Fax 831-775-1652 http://www.mbari.org

From christof.schmitt at us.ibm.com Mon May 20 21:33:57 2019
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Mon, 20 May 2019 20:33:57 +0000
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

An HTML attachment was scrubbed...
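
[Editor's note: the portal back end Todd describes, collecting selected paths in a web UI and handing them to Spectrum Archive, can be sketched in a few lines. The recall command line built below is an assumption: the message mentions 'ltfsee recall', newer Spectrum Archive EE releases use 'eeadm recall', and the exact list-file syntax should be verified against the installed version before use.]

```python
import subprocess
import tempfile

# Minimal sketch of the self-service recall back end: write the paths the
# user selected to a list file and invoke the Spectrum Archive recall CLI.
# The argv built here ('ltfsee recall <listfile>') is an assumption to
# verify against your Spectrum Archive EE version, not a confirmed syntax.

def build_recall_argv(paths, recall_cli="ltfsee"):
    """Write the selected paths to a list file; return the argv to run."""
    listfile = tempfile.NamedTemporaryFile(
        mode="w", prefix="recall-", suffix=".list", delete=False)
    with listfile as f:
        f.write("\n".join(paths) + "\n")
    return [recall_cli, "recall", listfile.name]

def recall(paths):
    # On an Archive node this blocks until the tape recalls complete.
    return subprocess.run(build_recall_argv(paths), check=True)

argv = build_recall_argv(["/gpfs/video/dive001.mov"])
print(argv[:2])  # ['ltfsee', 'recall']
```

A web front end would then only need to POST the selected paths to a small handler that calls recall() on an Archive node; with gpfs:recalls disabled on the SMB export, this becomes the sole recall path.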
URL: 
From stockf at us.ibm.com Mon May 20 21:41:16 2019
From: stockf at us.ibm.com (Frederick Stock)
Date: Mon, 20 May 2019 20:41:16 +0000
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

An HTML attachment was scrubbed...
URL: 
From richard.rupp at us.ibm.com Mon May 20 21:48:40 2019
From: richard.rupp at us.ibm.com (RICHARD RUPP)
Date: Mon, 20 May 2019 16:48:40 -0400
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: 
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

I've heard that this works, but I have not tried it myself -
https://support.apple.com/en-us/HT208209

Regards,

Richard Rupp, Sales Specialist, Phone: 1-347-510-6746


From: "Frederick Stock"
To: gpfsug-discuss at spectrumscale.org
Cc: gpfsug-discuss at spectrumscale.org
Date: 05/20/2019 04:41 PM
Subject: [EXTERNAL] Re: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Todd, I am not aware of any tool that provides the out-of-band recall that
you propose, though it would be quite useful. However, I wanted to note
that, as I understand it, the Mac client initiates the file recalls
because the Mac SMB client ignores the archive bit, the SMB protocol's
indication that a file does not reside in online storage. To date, efforts
to have Apple change their SMB client to respect the archive bit have not
been successful, but if you feel so inclined we would be grateful if you
would submit a request to Apple for them to change their SMB client to
honor the archive bit and thus avoid file recalls.
Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 
From truston at mbari.org Mon May 20 22:50:13 2019
From: truston at mbari.org (Todd Ruston)
Date: Mon, 20 May 2019 14:50:13 -0700
Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question
In-Reply-To: 
References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org>
Message-ID: 

Thanks very much for the replies so far.

I had already pinged Apple asking them to honor the offline bit in their SMB implementation. I don't think we carry a whole lot of weight with them, but at least we've put another "vote in the hopper" for the feature.

We had tried the settings in the article Richard referenced, but recalls still occurred.

Christof's suggestion of parallel SMB exports, one with and one without recall enabled, is one we hadn't thought of and has a lot of promise for our situation. Thanks for the idea!

Cheers,

- Todd

> On May 20, 2019, at 1:48 PM, RICHARD RUPP wrote:
>
> I've heard that this works, but I have not tried it myself - https://support.apple.com/en-us/HT208209
>
> Regards,
>
> Richard Rupp, Sales Specialist, Phone: 1-347-510-6746

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From l.walid at powerm.ma Tue May 21 03:24:58 2019
From: l.walid at powerm.ma (L.walid (PowerM))
Date: Tue, 21 May 2019 02:24:58 +0000
Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 21
In-Reply-To: 
References: 
Message-ID: 

Update: I have the environment working now with the command:

mmuserauth service create --data-access-method 'file' --type 'AD'
--servers IPADDRESS --user-name USERNAME --netbios-name 'scaleces'
--idmap-role 'MASTER' --idmap-range '10000000-11999999'
--idmap-range-size '100000'

Removing the unix-map solved the issue. Thanks for your help.

On Mon, 20 May 2019 at 15:36, L.walid (PowerM) wrote:

> Hi,
>
> I managed to make the command work (basically after checking
> /etc/resolv.conf, /etc/hosts, and /etc/nsswitch.conf):
>
> [root at scale1 committed]# mmuserauth service create --data-access-method
> file --type ad --servers X.X.X.X --user-name MYUSER --idmap-role master
> --netbios-name CESSCALE --unixmap-domains "MYDOMAIN(10000-9999999)"
> Enter Active Directory User 'spectrum_scale' password:
> File authentication configuration completed successfully.
> > > [root at scale1 committed]# mmuserauth service check > > Userauth file check on node: scale1 > Checking nsswitch file: OK > Checking Pre-requisite Packages: OK > Checking SRV Records lookup: OK > Service 'gpfs-winbind' status: OK > Object not configured > > > [root at scale1 committed]# mmuserauth service check --server-reachability > > Userauth file check on node: scale1 > Checking nsswitch file: OK > Checking Pre-requisite Packages: OK > Checking SRV Records lookup: OK > > Domain Controller status > NETLOGON connection: OK, connection to DC: xxxx > Domain join status: OK > Machine password status: OK > Service 'gpfs-winbind' status: OK > Object not configured > > > But unfortunately, even though all the commands seem good, I cannot use a user > from Active Directory as owner or to set up ACLs on SMB shares (it doesn't > recognise AD users), plus the command 'id DOMAIN\USER' gives an error that it cannot > find the user. > > Any ideas? > > > > > On Mon, 20 May 2019 at 01:46, wrote: >> Send gpfsug-discuss mailing list submissions to >> gpfsug-discuss at spectrumscale.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> or, via email, send a message with subject or body 'help' to >> gpfsug-discuss-request at spectrumscale.org >> >> You can reach the person managing the list at >> gpfsug-discuss-owner at spectrumscale.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of gpfsug-discuss digest..." >> >> >> Today's Topics: >> >> 1.
Re: gpfsug-discuss Digest, Vol 88, Issue 19 (Schmied, Will) >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Mon, 20 May 2019 01:45:57 +0000 >> From: "Schmied, Will" >> To: gpfsug main discussion list >> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 >> Message-ID: >> Content-Type: text/plain; charset="utf-8" >> >> Well, not seeing anything odd about the second try (just the username >> only), except that your NETBIOS domain name needs to be put in place of the >> placeholder (DOMAIN_NETBIOS_NAME). >> >> You can copy from a text file and then paste into stdin when the >> command asks for your password. Just a way to be sure no typos are in the >> password entry. >> >> Thanks, >> Will >> >> From: on behalf of "L.walid (PowerM)" >> Reply-To: gpfsug main discussion list >> Date: Sunday, May 19, 2019 at 18:39 >> To: "gpfsug-discuss at spectrumscale.org" >> Subject: Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 88, Issue 19 >> >> Caution: External Sender >> >> Hi, >> >> Thanks for the feedback, I have tried the suggested command: >> >> mmuserauth service create --data-access-method file --type ad --servers >> powermdomain.powerm.ma >> --user-name cn=walid,cn=users,dc=powerm,dc=ma --idmap-role master >> --netbios-name scaleces --unixmap-domains >> "DOMAIN_NETBIOS_NAME(10000-9999999)" >> Enter Active Directory User 'cn=walid,cn=users,dc=powerm,dc=ma' password: >> Invalid credentials specified for the server powermdomain.powerm.ma >> mmuserauth service create: Command failed. Examine previous error >> messages to determine cause. >> >> [root at scale1 ~]# mmuserauth service create --data-access-method file >> --type ad --servers powermdomain.powerm.ma >> --user-name walid --idmap-role master --netbios-name scaleces >> --unixmap-domains "DOMAIN_NETBIOS_NAME(10000-9999999)" >> Enter Active Directory User 'walid' password: >> Invalid credentials specified for the server powermdomain.powerm.ma >> mmuserauth service create: Command failed. Examine previous error >> messages to determine cause.
>> >> I tried both the domain qualifier and the plain user in the --user-name parameter, but >> I get Invalid Credentials (knowing that walid is an Administrator in Active >> Directory). >> >> [root at scale1 ~]# ldapsearch -H ldap://powermdomain.powerm.ma >> -x -W -D "walid at powerm.ma" -b "dc=powerm,dc=ma" >> "(sAMAccountName=walid)" >> Enter LDAP Password: >> # extended LDIF >> # >> # LDAPv3 >> # base with scope subtree >> # filter: (sAMAccountName=walid) >> # requesting: ALL >> # >> >> # Walid, Users, powerm.ma >> dn: CN=Walid,CN=Users,DC=powerm,DC=ma >> objectClass: top >> objectClass: person >> objectClass: organizationalPerson >> objectClass: user >> cn: Walid >> sn: Largou >> givenName: Walid >> distinguishedName: CN=Walid,CN=Users,DC=powerm,DC=ma >> instanceType: 4 >> whenCreated: 20190518224649.0Z >> whenChanged: 20190520001645.0Z >> uSNCreated: 12751 >> memberOf: CN=Domain Admins,CN=Users,DC=powerm,DC=ma >> uSNChanged: 16404 >> name: Walid >> objectGUID:: Le4tH38qy0SfcxaroNGPEg== >> userAccountControl: 512 >> badPwdCount: 0 >> codePage: 0 >> countryCode: 0 >> badPasswordTime: 132028055547447029 >> lastLogoff: 0 >> lastLogon: 132028055940741392 >> pwdLastSet: 132026934129698743 >> primaryGroupID: 513 >> objectSid:: AQUAAAAAAAUVAAAAG4qBuwTv6AKWAIpcTwQAAA== >> adminCount: 1 >> accountExpires: 9223372036854775807 >> logonCount: 0 >> sAMAccountName: walid >> sAMAccountType: 805306368 >> objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=powerm,DC=ma >> dSCorePropagationData: 20190518225159.0Z >> dSCorePropagationData: 16010101000000.0Z >> lastLogonTimestamp: 132027850050695698 >> >> # search reference >> ref: ldap://ForestDnsZones.powerm.ma/DC=ForestDnsZones,DC=powerm,DC=ma >> >> # search reference >> ref: ldap://DomainDnsZones.powerm.ma/DC=DomainDnsZones,DC=powerm,DC=ma >> >> # search reference >> ref: ldap://powerm.ma/CN=Configuration,DC=powerm,DC=ma >> >> # search result >> search: 2 >> result: 0 Success >> >> >> On Sun, 19 May 2019 at 23:31, wrote: >> Send gpfsug-discuss mailing list submissions to >> gpfsug-discuss at spectrumscale.org >> >> To subscribe or unsubscribe via the World Wide Web, visit >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> or, via email, send a message with subject or body 'help' to >> gpfsug-discuss-request at spectrumscale.org >> >> You can reach the person managing the list at >> gpfsug-discuss-owner at spectrumscale.org >> >> When replying, please edit your Subject line so it is more specific >> than "Re: Contents of gpfsug-discuss digest..." >> >> >> Today's Topics: >> >> 1. Re: Active Directory Authentification (Schmied, Will) >> >> >> ---------------------------------------------------------------------- >> >> Message: 1 >> Date: Sun, 19 May 2019 23:24:15 +0000 >> From: "Schmied, Will" >> To: gpfsug main discussion list >> Subject: Re: [gpfsug-discuss] Active Directory Authentification >> Message-ID: <4A5C9EC6-5E53-4CC7-925C-CCA954969826 at stjude.org> >> Content-Type: text/plain; charset="utf-8" >> >> Hi Walid, >> >> Without knowing any specifics of your environment, the below command is >> what I have used, successfully across multiple clusters at 4.2.x. The >> binding account you specify needs to be able to add computers to the domain. >> >> mmuserauth service create --data-access-method file --type ad --servers >> some_dc.foo.bar --user-name some_ad_bind_account --idmap-role master >> --netbios-name some_ad_computer_name --unixmap-domains >> "DOMAIN_NETBIOS_NAME(10000-9999999)" >> >> 10000-9999999 is the acceptable range of UID / GID for AD accounts.
>> >> Thanks, >> Will >> >> From: gpfsug-discuss-bounces at spectrumscale.org on behalf of "L.walid >> (PowerM)" >> Reply-To: gpfsug main discussion list >> Date: Sunday, May 19, 2019 at 14:30 >> To: "gpfsug-discuss at spectrumscale.org" >> Subject: [gpfsug-discuss] Active Directory Authentification >> >> Caution: External Sender >> >> Hi, >> >> I'm planning to integrate Active Directory with our Spectrum Scale, but >> it seems I'm missing something. Please note that I'm on 2 protocol >> nodes with only the SMB service running, Spectrum Scale 5.0.3.0 (latest >> version). I've tried from the GUI the two ways: connect to Active >> Directory, and connect to LDAP. >> >> Connect to LDAP: >> mmuserauth service create --data-access-method 'file' --type 'LDAP' >> --servers 'powermdomain.powerm.ma:389' >> --user-name 'cn=walid,cn=users,dc=powerm,dc=ma' >> --pwd-file 'auth_pass.txt' --netbios-name 'scaleces' --base-dn >> 'cn=users,dc=powerm,dc=ma' >> 7:26 PM >> Either failed to create a samba domain entry on LDAP server if not >> present or could not read the already existing samba domain entry from the >> LDAP server >> 7:26 PM >> Detailed message: smbldap_search_domain_info: Adding domain info for >> SCALECES failed with NT_STATUS_UNSUCCESSFUL >> 7:26 PM >> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the >> domain. We cannot work reliably without it. >> 7:26 PM >> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" >> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) >> 7:26 PM >> WARNING: Could not open passdb >> 7:26 PM >> File authentication configuration failed. >> 7:26 PM >> mmuserauth service create: Command failed. Examine previous error >> messages to determine cause. >> 7:26 PM >> Operation Failed >> 7:26 PM >> Error: Either failed to create a samba domain entry on LDAP server if not >> present or could not read the already existing samba domain entry from the >> LDAP server >> Detailed message: smbldap_search_domain_info: Adding domain info for >> SCALECES failed with NT_STATUS_UNSUCCESSFUL >> pdb_init_ldapsam: WARNING: Could not get domain info, nor add one to the >> domain. We cannot work reliably without it. >> pdb backend ldapsam:"ldap://powermdomain.powerm.ma:389" >> did not correctly init (error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO) >> WARNING: Could not open passdb >> File authentication configuration failed. >> mmuserauth service create: Command failed. Examine previous error >> messages to determine cause. >> >> Connect to Active Directory: >> mmuserauth service create --data-access-method 'file' --type 'AD' >> --servers '192.168.56.5' --user-name 'walid' --pwd-file 'auth_pass.txt' >> --netbios-name 'scaleces' --idmap-role 'MASTER' --ldapmap-domains ' >> powerm.ma(type=stand-alone:ldap_srv=192.168.56.5: >> range=-9000000000000000-4294967296:usr_dn=cn=users,dc=powerm,dc=ma:grp_dn=cn=users,dc=powerm,dc=ma:bind_dn=cn=walid,cn=users,dc=powerm,dc=ma:bind_dn_pwd=P at ssword >> )' >> 7:29 PM >> mmuserauth service create: Invalid parameter passed for --ldapmap-domain >> 7:29 PM >> mmuserauth service create: Command failed. Examine previous error >> messages to determine cause. >> 7:29 PM >> Operation Failed >> 7:29 PM >> Error: mmuserauth service create: Invalid parameter passed for >> --ldapmap-domain >> mmuserauth service create: Command failed. Examine previous error >> messages to determine cause.
>> -- >> Best regards, >> >> Walid Largou >> Senior IT Specialist >> Power Maroc >> Mobile : +212 621 31 98 71 >> Email: l.walid at powerm.ma >> 320 Bd Zertouni 6th Floor, Casablanca, Morocco >> https://www.powerm.ma >> >> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA] >> This message is confidential. Its contents do not constitute a commitment >> by Power Maroc S.A.R.L except where provided for in a written agreement >> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or >> dissemination, either whole or partial, is prohibited. If you are not the >> intended recipient of the message, please notify the sender immediately. >> >> ________________________________ >> >> Email Disclaimer: www.stjude.org/emaildisclaimer >> Consultation Disclaimer: www.stjude.org/consultationdisclaimer >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: < >> http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190519/9b579ecf/attachment.html >> > >> >> ------------------------------ >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> End of gpfsug-discuss Digest, Vol 88, Issue 19 >> ********************************************** >> >> >> -- >> Best regards, >> >> Walid Largou >> Senior IT Specialist >> Power Maroc >> Mobile : +212 621 31 98 71 >> Email: l.walid at powerm.ma >> 320 Bd Zertouni 6th Floor, Casablanca, Morocco >> https://www.powerm.ma >> >> [cid:A8AE246E-9B75-4FE9-AE84-3DC9C8753FEA] >> This message is confidential. Its contents do not constitute a commitment >> by Power Maroc S.A.R.L except where provided for in a written agreement >> between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or >> dissemination, either whole or partial, is prohibited. If you are not the >> intended recipient of the message, please notify the sender immediately. >> -------------- next part -------------- >> An HTML attachment was scrubbed... >> URL: < >> http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20190520/92f25565/attachment.html >> > >> >> ------------------------------ >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> >> End of gpfsug-discuss Digest, Vol 88, Issue 21 >> ********************************************** >> > > > -- > Best regards, > > Walid Largou > Senior IT Specialist > Power Maroc > Mobile : +212 621 31 98 71 > Email: l.walid at powerm.ma > 320 Bd Zertouni 6th Floor, Casablanca, Morocco > https://www.powerm.ma > > This message is confidential. Its contents do not constitute a commitment > by Power Maroc S.A.R.L except where provided for in a written agreement > between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or > dissemination, either whole or partial, is prohibited. If you are not the > intended recipient of the message, please notify the sender immediately. > -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited.
If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From INDULISB at uk.ibm.com Tue May 21 10:34:42 2019 From: INDULISB at uk.ibm.com (Indulis Bernsteins1) Date: Tue, 21 May 2019 10:34:42 +0100 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: Message-ID: Have you tried looking at Spectrum Archive setting instead of Spectrum Scale? You can set both the size of the "stub file" that remains behind when a file is migrated, and also the amount of data which would need to be read before a recall is triggered. This might catch enough of your recall storms... or at least help! IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and Configuration Guide http://www.redbooks.ibm.com/abstracts/sg248333.html?Open 7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file IBM Spectrum Archive EE can define a stub size for migrated files so that the stub size initial bytes of a migrated file are kept on disk while the entire file is migrated to tape. The migrated file bytes that are kept on the disk are called the stub. Reading from the stub does not trigger a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered. The recall might take a long time while the entire file is read from tape because a tape mount might be required, and it takes time to position the tape before data can be recalled from tape. When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a recall of the complete file in the background (asynchronous). Reads from the stubs are still possible while the rest of the file is being recalled. 
After the rest of the file is recalled to disks, reads from any file part are possible. With the Preview Size (PS) value, a preview size can be set to define the initial file part size for which any reads from the resident file part do not trigger a recall. Typically, the PS value is large enough to see whether a recall of the rest of the file is required without triggering a recall for reading from every stub. This process is important to prevent unintended massive recalls. The PS value can be set only smaller than or equal to the stub size. This feature is useful, for example, when playing migrated video files. While the initial stub size part of a video file is played, the rest of the video file can be recalled to prevent a pause when it plays beyond the stub size. You must set the stub size and preview size to be large enough to buffer the time that is required to recall the file from tape without triggering recall storms. Use the following dsmmigfs command options to set both the stub size and preview size of the file system being managed by IBM Spectrum Archive EE: dsmmigfs Update -STUBsize dsmmigfs Update -PREViewsize The value for the STUBsize is a multiple of the IBM Spectrum Scale file system's block size. This value can be obtained by running the mmlsfs command. The PREViewsize parameter must be equal to or less than the STUBsize value. Both parameters take a positive integer in bytes. Regards, Indulis Bernsteins Systems Architect IBM New Generation Storage Phone: +44 792 008 6548 E-mail: INDULISB at UK.IBM.COM Jackson House, Sibson Rd Sale, Cheshire M33 7RR United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10045 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10249 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10012 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10031 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 11771 bytes Desc: not available URL: From jonathan.buzzard at strath.ac.uk Tue May 21 11:30:09 2019 From: jonathan.buzzard at strath.ac.uk (Jonathan Buzzard) Date: Tue, 21 May 2019 11:30:09 +0100 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Message-ID: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk> On Mon, 2019-05-20 at 20:33 +0000, Christof Schmitt wrote: > SMB clients know the state of the files through an OFFLINE bit that is > part of the metadata that is available through the SMB protocol. The > Windows Explorer in particular honors this bit and avoids reading > file data for previews, but the MacOS Finder seems to ignore it and > read file data for previews anyway, triggering recalls. > > The best way would be fixing this on the Mac clients to simply not > read file data for previews for OFFLINE files. So far requests to > Apple support to implement this behavior were unsuccessful, but it > might still be worthwhile to keep pushing this request. > In the interim, would it be possible for the SMB server to detect the client OS and only allow recalls from, say, Windows? At least this would be in "our" control, unlike getting Apple to change the Finder.app behaviour.
Then tell MacOS users to use Windows if they want to recall files, and pin the blame squarely on Apple to your users. I note that Linux is no better at honouring the offline bit in the SMB protocol than MacOS. Oh the irony of Windows being the only mainstream OS handling HSM'ed files properly! JAB. -- Jonathan A. Buzzard Tel: +44141-5483420 HPC System Administrator, ARCHIE-WeSt. University of Strathclyde, John Anderson Building, Glasgow. G4 0NG From christophe.darras at atempo.com Tue May 21 14:07:02 2019 From: christophe.darras at atempo.com (Christophe Darras) Date: Tue, 21 May 2019 13:07:02 +0000 Subject: [gpfsug-discuss] Spectrum Scale GPFS User Group Message-ID: Hello all, I would like to thank you for welcoming me to this group! My name is Christophe Darras (Chris), based in London and in charge of Atempo for North Europe. We are developing solutions for DATA MANAGEMENT for Spectrum Scale*: automated data migration and high-performance backup, but also archiving/retrieving/moving large data sets. Kindest Regards, Chris *and other file systems and large NAS Christophe DARRAS Head of North Europe, Middle East & South Africa Cell. : +44 7555 993 529 -------------- next part -------------- An HTML attachment was scrubbed... URL: From truston at mbari.org Tue May 21 18:59:05 2019 From: truston at mbari.org (Todd Ruston) Date: Tue, 21 May 2019 10:59:05 -0700 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: Message-ID: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Hi Indulis, Yes, thanks for the reminder. I'd come across that, and our system is currently set to a stub size of zero (the default, I presume). I'd intended to ask in my original query whether anyone had experimented and found an optimal value that prevents most common inadvertent recalls by Macs.
I know that will likely vary by file type, but since we have a broad mix of file types I figure a value that covers the majority of cases without being excessively large is the best we could implement. Our system is using 16MiB blocks, with 1024 subblocks. Is stub size bounded by full blocks, or subblocks? In other words, would we need to set the stub value to increments of 16MiB, or 16KiB? Cheers, - Todd > On May 21, 2019, at 2:34 AM, Indulis Bernsteins1 wrote: > > Have you tried looking at Spectrum Archive setting instead of Spectrum Scale? > > You can set both the size of the "stub file" that remains behind when a file is migrated, and also the amount of data which would need to be read before a recall is triggered. This might catch enough of your recall storms... or at least help! > > IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and Configuration Guide > http://www.redbooks.ibm.com/abstracts/sg248333.html?Open > > 7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file > IBM Spectrum Archive EE can define a stub size for migrated files so that the stub size initial > bytes of a migrated file are kept on disk while the entire file is migrated to tape. The migrated > file bytes that are kept on the disk are called the stub. Reading from the stub does not trigger > a recall of the rest of the file. After the file is read beyond the stub, the recall is triggered. The > recall might take a long time while the entire file is read from tape because a tape mount > might be required, and it takes time to position the tape before data can be recalled from tape. > When Read Start Recalls (RSR) is enabled for a file, the first read from the stub file triggers a > recall of the complete file in the background (asynchronous). Reads from the stubs are still > possible while the rest of the file is being recalled. After the rest of the file is recalled to disks, > reads from any file part are possible. 
> With the Preview Size (PS) value, a preview size can be set to define the initial file part size > for which any reads from the resident file part does not trigger a recall. Typically, the PS value > is large enough to see whether a recall of the rest of the file is required without triggering a > recall for reading from every stub. This process is important to prevent unintended massive > recalls. The PS value can be set only smaller than or equal to the stub size. > This feature is useful, for example, when playing migrated video files. While the initial stub > size part of a video file is played, the rest of the video file can be recalled to prevent a pause > when it plays beyond the stub size. You must set the stub size and preview size to be large > enough to buffer the time that is required to recall the file from tape without triggering recall > storms. > Use the following dsmmigfs command options to set both the stub size and preview size of > the file system being managed by IBM Spectrum Archive EE: > dsmmigfs Update -STUBsize > dsmmigfs Update -PREViewsize > The value for the STUBsize is a multiple of the IBM Spectrum Scale file system's block size. > This value can be obtained by running the mmlsfs command. The PREViewsize parameter > must be equal to or less than the STUBsize value. Both parameters take a positive integer in > bytes. > > Regards, > > Indulis Bernsteins > Systems Architect > IBM New Generation Storage > Phone: +44 792 008 6548 > E-mail: INDULISB at UK.IBM.COM > > > Jackson House, Sibson Rd > Sale, Cheshire M33 7RR > United Kingdom > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From janfrode at tanso.net Tue May 21 19:34:12 2019 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 21 May 2019 20:34:12 +0200 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> References: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Message-ID: It's a multiple of full blocks. -jf On Tue, 21 May 2019 at 20:06, Todd Ruston wrote: > Hi Indulis, > > Yes, thanks for the reminder. I'd come across that, and our system is > currently set to a stub size of zero (the default, I presume). I'd intended > to ask in my original query whether anyone had experimented and found an > optimal value that prevents most common inadvertent recalls by Macs. I know > that will likely vary by file type, but since we have a broad mix of file > types I figure a value that covers the majority of cases without being > excessively large is the best we could implement. > > Our system is using 16MiB blocks, with 1024 subblocks. Is stub size > bounded by full blocks, or subblocks? In other words, would we need to set > the stub value to increments of 16MiB, or 16KiB? > > Cheers, > > - Todd > > > On May 21, 2019, at 2:34 AM, Indulis Bernsteins1 > wrote: > > Have you tried looking at Spectrum Archive setting instead of Spectrum > Scale? > > You can set both the size of the "stub file" that remains behind when a > file is migrated, and also the amount of data which would need to be read > before a recall is triggered. This might catch enough of your recall > storms... or at least help!
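[Editorial note: to make the arithmetic in this thread concrete — per Jan-Frode's answer, STUBsize is bounded by full file-system blocks, not sub-blocks, so on Todd's 16 MiB-block file system even a small stub rounds up to 16 MiB. A minimal sketch of that rounding follows; the 16 MiB block size comes from Todd's message, and the helper name is illustrative only, not part of any GPFS or Spectrum Archive API.]

```python
BLOCK_SIZE = 16 * 1024 * 1024  # 16 MiB, the block size mentioned in the thread

def round_up_to_block(nbytes: int, block_size: int = BLOCK_SIZE) -> int:
    """Round a desired stub size up to a whole multiple of the file-system
    block size, since STUBsize is bounded by full blocks, not sub-blocks."""
    if nbytes <= 0:
        return 0
    blocks = -(-nbytes // block_size)  # ceiling division
    return blocks * block_size

# Even a "small" 1 MiB stub costs one full 16 MiB block:
stub_size = round_up_to_block(1 * 1024 * 1024)
# PREViewsize must be less than or equal to STUBsize:
preview_size = min(stub_size, 8 * 1024 * 1024)
print(stub_size, preview_size)  # -> 16777216 8388608
```

In practice the block size would come from mmlsfs, and the resulting byte values would be passed to dsmmigfs Update -STUBsize / -PREViewsize as described in the redbook excerpt above.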
> > *IBM Spectrum Archive Enterprise Edition V1.3.0: Installation and > Configuration Guide* > http://www.redbooks.ibm.com/abstracts/sg248333.html?Open > > *7.14.3 Read Starts Recalls: Early trigger for recalling a migrated file* > IBM Spectrum Archive EE can define a stub size for migrated files so that > the stub size initial > bytes of a migrated file are kept on disk while the entire file is > migrated to tape. The migrated > file bytes that are kept on the disk are called the *stub*. Reading from > the stub does not trigger > a recall of the rest of the file. After the file is read beyond the stub, > the recall is triggered. The > recall might take a long time while the entire file is read from tape > because a tape mount > might be required, and it takes time to position the tape before data can > be recalled from tape. > When Read Start Recalls (RSR) is enabled for a file, the first read from > the stub file triggers a > recall of the complete file in the background (asynchronous). Reads from > the stubs are still > possible while the rest of the file is being recalled. After the rest of > the file is recalled to disks, > reads from any file part are possible. > With the Preview Size (PS) value, a preview size can be set to define the > initial file part size > for which any reads from the resident file part does not trigger a recall. > Typically, the PS value > is large enough to see whether a recall of the rest of the file is > required without triggering a > recall for reading from every stub. This process is important to prevent > unintended massive > recalls. The PS value can be set only smaller than or equal to the stub > size. > This feature is useful, for example, when playing migrated video files. > While the initial stub > size part of a video file is played, the rest of the video file can be > recalled to prevent a pause > when it plays beyond the stub size. 
You must set the stub size and preview > size to be large > enough to buffer the time that is required to recall the file from tape > without triggering recall > storms. > Use the following *dsmmigfs *command options to set both the stub size > and preview size of > the file system being managed by IBM Spectrum Archive EE: > *dsmmigfs Update -STUBsize* > *dsmmigfs Update -PREViewsize* > The value for the *STUBsize *is a multiple of the IBM Spectrum Scale file > system?s block size. > this value can be obtained by running the *mmlsfs *. The *PREViewsize > *parameter > must be equal to or less than the *STUBsize *value. Both parameters take > a positive integer in > bytes. > > Regards, > > *Indulis Bernsteins* > Systems Architect > IBM New Generation Storage > > ------------------------------ > *Phone:* +44 792 008 6548 > * E-mail:* *INDULISB at UK.IBM.COM * > [image: Description: Description: IBM] > > Jackson House, Sibson Rd > Sale, Cheshire M33 7RR > United Kingdom > Attachment.png> > > > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue May 21 19:40:56 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 21 May 2019 14:40:56 -0400 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: <9DD5C356-15E7-4E41-9AB0-8F94CA031141@mbari.org> Message-ID: https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.0/com.ibm.itsm.hsmul.doc/c_mig_stub_size.html Trust but verify. And try it before you buy it. (Personally, I would have guessed sub-block, doc says otherwise, but I'd try it nevertheless.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Tue May 21 19:59:14 2019 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Tue, 21 May 2019 18:59:14 +0000 Subject: [gpfsug-discuss] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk> References: <7d068877a726fa5bd0703fcdd12fdc881f62711b.camel@strath.ac.uk>, <703BD4A1-24B6-4B6E-A19A-2853C83189EF@mbari.org> Message-ID: An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Wed May 22 09:50:22 2019 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Wed, 22 May 2019 10:50:22 +0200 Subject: [gpfsug-discuss] Save the date - User Meeting along ISC Frankfurt Message-ID: Greetings: IBM will host a joint "IBM Spectrum Scale and IBM Spectrum LSF User Meeting" at ISC. As with other user group meetings, the agenda will include user stories, updates on IBM Spectrum Scale & IBM Spectrum LSF, and access to IBM experts and your peers. We are still looking for customers to talk about their experience with Spectrum Scale and/or Spectrum LSF. Please send me a personal mail, if you are interested to talk. The meeting is planned for: Monday June 17th, 2019 - 1pm-5.30pm ISC Frankfurt, Germany I will send more details later. 
Best, Ulf -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Matthias Hartmann Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From INDULISB at uk.ibm.com Wed May 22 11:19:55 2019 From: INDULISB at uk.ibm.com (Indulis Bernsteins1) Date: Wed, 22 May 2019 11:19:55 +0100 Subject: [gpfsug-discuss] [EXTERNAL] Intro, and Spectrum Archive self-service recall interface question In-Reply-To: References: Message-ID: There was some horrible way to do the same thing in previous versions of Spectrum Archive using the policy engine, which was more granular than the dsmmigfs command is now. I will ask one of the Scale developers whether they might consider allowing multiples of the sub-block size, as this would make sense; 16 MiB is a very big stub to leave behind! Regards, Indulis Bernsteins Systems Architect IBM New Generation Storage Phone: +44 792 008 6548 E-mail: INDULISB at UK.IBM.COM Jackson House, Sibson Rd Sale, Cheshire M33 7RR United Kingdom Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10045 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10249 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: image/png Size: 10012 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 10031 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 11771 bytes Desc: not available URL: From l.walid at powerm.ma Thu May 23 00:59:40 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Wed, 22 May 2019 23:59:40 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows Message-ID: Hi, We are contacting you regarding a behavior observed on our customer's GPFS SMB shares. When we view file/folder properties, the reported file/folder size and the size on disk differ significantly. We tried to reproduce it by creating a simple 1 KB text file, and when we checked its properties it showed 1 MB on disk! I tried changing the block size of the fs from 4M to 256k, but the results were the same. Thank you -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From l.walid at powerm.ma Thu May 23 02:00:17 2019 From: l.walid at powerm.ma (L.walid (PowerM)) Date: Thu, 23 May 2019 01:00:17 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows In-Reply-To: References: Message-ID: Hi Everyone, Through some research, I found this is normal behavior related to Samba's "allocation roundup size" setting; since CES SMB is based on Samba, that explains the behavior (Windows assumes that the default allocation unit is 1 MB). I also found elsewhere that changing this parameter can decrease performance, so please advise on this if possible. For the block size on the filesystem I would still go with 256k, since it is the recommendation for file-serving use cases. Thank you References : https://lists.samba.org/archive/samba-technical/2016-July/115166.html On Wed, May 22, 2019 at 11:59 PM L.walid (PowerM) wrote: > Hi, > > We are contacting you regarding a behavior observed on our customer's GPFS > SMB shares. When we view file/folder properties, the > reported file/folder size and the size on disk differ significantly. > > We tried to reproduce it by creating a simple 1 KB text file, and when we > checked its properties it showed 1 MB on disk! > > I tried changing the block size of the fs from 4M to 256k, but the > results were the same > > Thank you > -- > Best regards, > > Walid Largou > Senior IT Specialist > Power Maroc > Mobile : +212 621 31 98 71 > Email: l.walid at powerm.ma > 320 Bd Zertouni 6th Floor, Casablanca, Morocco > https://www.powerm.ma > > > This message is confidential. Its contents do not constitute a commitment > by Power Maroc S.A.R.L except where provided for in a written agreement > between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or > dissemination, either whole or partial, is prohibited.
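[Editorial note: Walid's explanation can be illustrated numerically. The sketch below models how a client that honours the server's allocation roundup reports size-on-disk; the 1 MiB default comes from the thread, the function name is illustrative, and this is arithmetic only, not Samba code.]

```python
ROUNDUP = 1024 * 1024  # Samba's default "allocation roundup size" (1 MiB), per the thread

def reported_size_on_disk(file_size_bytes: int, roundup: int = ROUNDUP) -> int:
    """Size-on-disk as an SMB client like Windows Explorer would display it:
    the allocated size is rounded up to a whole multiple of the roundup value."""
    if file_size_bytes <= 0:
        return 0
    units = -(-file_size_bytes // roundup)  # ceiling division
    return units * roundup

# A 1 KB text file is reported as occupying a full 1 MiB on disk:
print(reported_size_on_disk(1024))  # -> 1048576
```

This is why shrinking the GPFS block size from 4M to 256k changed nothing: the number Windows shows is governed by the SMB-level roundup, not by the file system's block size.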
If you are not the > intended recipient of the message, please notify the sender immediately. > -- Best regards, Walid Largou Senior IT Specialist Power Maroc Mobile : +212 621 31 98 71 Email: l.walid at powerm.ma 320 Bd Zertouni 6th Floor, Casablanca, Morocco https://www.powerm.ma This message is confidential. Its contents do not constitute a commitment by Power Maroc S.A.R.L except where provided for in a written agreement between you and Power Maroc S.A.R.L. Any unauthorized disclosure, use or dissemination, either whole or partial, is prohibited. If you are not the intended recipient of the message, please notify the sender immediately. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PastedGraphic-2.png Type: image/png Size: 10214 bytes Desc: not available URL: From christof.schmitt at us.ibm.com Thu May 23 05:00:46 2019 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 23 May 2019 04:00:46 +0000 Subject: [gpfsug-discuss] SMB share size on disk Windows In-Reply-To: References: , Message-ID: An HTML attachment was scrubbed... URL: From oluwasijibomi.saula at ndsu.edu Thu May 23 18:40:03 2019 From: oluwasijibomi.saula at ndsu.edu (Saula, Oluwasijibomi) Date: Thu, 23 May 2019 17:40:03 +0000 Subject: [gpfsug-discuss] Reason for shutdown: Reset old shared segment In-Reply-To: References: Message-ID: Hey Folks, I got a strange message on one of my HPC cluster nodes that I'm hoping to understand better: "Reason for shutdown: Reset old shared segment" 2019-05-23_11:47:07.328-0500: [I] This node has a valid standard license 2019-05-23_11:47:07.327-0500: [I] Initializing the fast condition variables at 0x555557115300 ... 2019-05-23_11:47:07.328-0500: [I] mmfsd initializing. {Version: 5.0.0.0 Built: Dec 10 2017 16:59:21} ... 2019-05-23_11:47:07.328-0500: [I] Cleaning old shared memory ...
2019-05-23_11:47:07.328-0500: [N] mmfsd is shutting down. 2019-05-23_11:47:07.328-0500: [N] Reason for shutdown: Reset old shared segment Shortly after, GPFS is back up without any intervention: 2019-05-23_11:47:52.685-0500: [N] Remounted gpfs1 2019-05-23_11:47:52.691-0500: [N] mmfsd ready I'm supposing this has to do with memory usage??... Thanks, Siji Saula HPC System Administrator Center for Computationally Assisted Science & Technology NORTH DAKOTA STATE UNIVERSITY Research 2 Building - Room 220B Dept 4100, PO Box 6050 / Fargo, ND 58108-6050 p:701.231.7749 www.ccast.ndsu.edu | www.ndsu.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Thu May 23 19:16:33 2019 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Thu, 23 May 2019 14:16:33 -0400 Subject: [gpfsug-discuss] Reason for shutdown: Reset old shared segment In-Reply-To: References: Message-ID: (Somewhat educated guess.) Somehow a previous incarnation of the mmfsd daemon was killed, but left its shared segment laying about. When GPFS is restarted, it discovers the old segment and deallocates it, etc, etc... Then the safest, easiest thing to do after going down that error recovery path is to quit and (re)start GPFS as if none of that ever happened. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpergamin at ddn.com Wed May 29 12:54:46 2019 From: rpergamin at ddn.com (Ran Pergamin) Date: Wed, 29 May 2019 11:54:46 +0000 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Message-ID: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Hi All, My customer has some nodes in the cluster which currently have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbsPorts, yet the sysmon component monitors and reports an error on port 2 despite it not being used.
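[Editorial note: Marc's "Reset old shared segment" explanation above — a killed daemon leaves its shared segment behind, and the restart detects and deallocates the stale one before starting cleanly — can be sketched with POSIX shared memory. This is a conceptual analogy only; GPFS's actual shared segment handling is internal, and the segment name below is purely illustrative.]

```python
from multiprocessing import shared_memory

NAME = "mmfsd_demo_seg"  # hypothetical segment name, for illustration only

# First "incarnation" creates a segment, then dies without unlinking it.
old = shared_memory.SharedMemory(name=NAME, create=True, size=4096)
old.buf[:4] = b"data"
old.close()  # our mapping is gone, but the named segment lingers in the OS

# On "restart", creating the segment fails because the stale one still exists...
try:
    fresh = shared_memory.SharedMemory(name=NAME, create=True, size=4096)
except FileExistsError:
    # ...so the restart attaches to the old segment, resets (unlinks) it,
    # and only then starts cleanly -- mirroring the log message in the thread.
    stale = shared_memory.SharedMemory(name=NAME)
    stale.close()
    stale.unlink()
    fresh = shared_memory.SharedMemory(name=NAME, create=True, size=4096)

fresh.close()
fresh.unlink()
print("clean start after resetting old segment")
```

As Marc notes, after taking that error-recovery path the safest behaviour is simply to exit and restart as if nothing happened, which matches the log excerpt above.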
I found an old mailing list post claiming it would be solved in 4.2.3-update5, yet there is nothing about it in the 4.2.3-update7 release notes. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html The sensor file says filters are not supported and apply to ALL nodes, so that is no help where I need to ignore this on only some of the nodes. Any idea how I can disable the sensor check on mlx4_0/2 on some of the nodes ? Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From spectrumscale at kiranghag.com Wed May 29 13:14:17 2019 From: spectrumscale at kiranghag.com (KG) Date: Wed, 29 May 2019 17:44:17 +0530 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. In-Reply-To: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: This is a per-node setting, so you should be able to set the correct port for each node (mmchconfig -N) On Wed, May 29, 2019 at 5:24 PM Ran Pergamin wrote: > Hi All, > > My customer has some nodes in the cluster which currently have their second > IB port disabled. > Spectrum scale 4.2.3 update 13.
> > Port 1 is defined in verbs port, yet sysmoncon monitor and reports error > on port 2 despite not being used. > > I found an old listing claiming it will be solved in in 4.2.3-update5, yet > nothing in 4.2.3-update7 release notes, about it. > > > https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html > > Filters in sensor file say filters are not support + apply to ALL nodes, > so no relevant where I need to ignore it. > > Any idea how can I disable the check of sensor on mlx4_0/2 on some of the > nodes ? > > > > Node name: cff003-ib0.chemfarm > > Node status: DEGRADED > > Status Change: 2019-05-29 12:29:49 > > > > Component Status Status Change Reasons > > > ------------------------------------------------------------------------------------------------------------------------------------------------- > > GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small > > NETWORK DEGRADED 2019-05-29 12:29:49 > ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), > ib_rdma_nic_unrecognized(mlx4_0/2) > > ib0 HEALTHY 2019-05-29 12:29:49 - > > mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - > > * mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, > ib_rdma_nic_down, ib_rdma_nic_unrecognized* > > FILESYSTEM HEALTHY 2019-05-29 12:29:48 - > > apps HEALTHY 2019-05-29 12:29:48 - > > data HEALTHY 2019-05-29 12:29:48 - > > PERFMON HEALTHY 2019-05-29 12:29:33 - > > THRESHOLD HEALTHY 2019-05-29 12:29:18 - > > > > > Thanks ! > > Regards, > Ran > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From MDIETZ at de.ibm.com Wed May 29 13:19:51 2019 From: MDIETZ at de.ibm.com (Mathias Dietz) Date: Wed, 29 May 2019 14:19:51 +0200 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. 
In-Reply-To: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: Hi Ran, please double check that port 2 config is not yet active for the running mmfsd daemon. When changing the verbsPorts, the daemon keeps using the old value until a restart is done. mmdiag --config | grep verbsPorts Mit freundlichen Grüßen / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Ran Pergamin To: gpfsug main discussion list Date: 29/05/2019 13:54 Subject: [EXTERNAL] [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, My customer has some nodes in the cluster which current have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbs port, yet sysmoncon monitor and reports error on port 2 despite not being used. I found an old listing claiming it will be solved in 4.2.3-update5, yet nothing in 4.2.3-update7 release notes, about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html Filters in sensor file say filters are not supported + apply to ALL nodes, so not relevant where I need to ignore it. Any idea how can I disable the check of sensor on mlx4_0/2 on some of the nodes ?
Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpergamin at ddn.com Wed May 29 13:26:40 2019 From: rpergamin at ddn.com (Ran Pergamin) Date: Wed, 29 May 2019 12:26:40 +0000 Subject: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. In-Reply-To: References: <5DCB5F6F-F07B-4243-871D-F6E16AEA756F@ddn.com> Message-ID: Thanks All. Solved it. The other port's Link Layer was set to autosense rather than IB. Once the Link Layer was changed to IB, the false report cleared. I assume that's the fix that was applied.
Regards, Ran From: on behalf of Mathias Dietz Reply-To: gpfsug main discussion list Date: Wednesday, 29 May 2019 at 15:20 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Hi Ran, please double check that port 2 config is not yet active for the running mmfsd daemon. When changing the verbsPorts, the daemon keeps using the old value until a restart is done. mmdiag --config | grep verbsPorts Mit freundlichen Grüßen / Kind regards Mathias Dietz Spectrum Scale Development - Release Lead Architect (4.2.x) Spectrum Scale RAS Architect --------------------------------------------------------------------------- IBM Deutschland Am Weiher 24 65451 Kelsterbach Phone: +49 70342744105 Mobile: +49-15152801035 E-Mail: mdietz at de.ibm.com ----------------------------------------------------------------------------- IBM Deutschland Research & Development GmbH Vorsitzender des Aufsichtsrats: Martina Koederitz, Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Ran Pergamin To: gpfsug main discussion list Date: 29/05/2019 13:54 Subject: [EXTERNAL] [gpfsug-discuss] How to ignore ib_rdma_nic_unrecognized event on nodes where an IB link is not used. Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, My customer has some nodes in the cluster which current have their second IB port disabled. Spectrum scale 4.2.3 update 13. Port 1 is defined in verbs port, yet sysmoncon monitor and reports error on port 2 despite not being used. I found an old listing claiming it will be solved in 4.2.3-update5, yet nothing in 4.2.3-update7 release notes, about it. https://www.spectrumscale.org/pipermail/gpfsug-discuss/2018-January/004395.html Filters in sensor file say filters are not supported + apply to ALL nodes, so not relevant where I need to ignore it.
Any idea how can I disable the check of sensor on mlx4_0/2 on some of the nodes ? Node name: cff003-ib0.chemfarm Node status: DEGRADED Status Change: 2019-05-29 12:29:49 Component Status Status Change Reasons ------------------------------------------------------------------------------------------------------------------------------------------------- GPFS TIPS 2019-05-29 12:29:48 gpfs_pagepool_small NETWORK DEGRADED 2019-05-29 12:29:49 ib_rdma_link_down(mlx4_0/2), ib_rdma_nic_down(mlx4_0/2), ib_rdma_nic_unrecognized(mlx4_0/2) ib0 HEALTHY 2019-05-29 12:29:49 - mlx4_0/1 HEALTHY 2019-05-29 12:29:49 - mlx4_0/2 FAILED 2019-05-29 12:29:49 ib_rdma_link_down, ib_rdma_nic_down, ib_rdma_nic_unrecognized FILESYSTEM HEALTHY 2019-05-29 12:29:48 - apps HEALTHY 2019-05-29 12:29:48 - data HEALTHY 2019-05-29 12:29:48 - PERFMON HEALTHY 2019-05-29 12:29:33 - THRESHOLD HEALTHY 2019-05-29 12:29:18 - Thanks ! Regards, Ran _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mweil at wustl.edu Fri May 31 19:56:38 2019 From: mweil at wustl.edu (Weil, Matthew) Date: Fri, 31 May 2019 18:56:38 +0000 Subject: [gpfsug-discuss] Gateway role on a NSD server Message-ID: Hello all, How important is it to separate these two roles? Planning on using AFM, and I am wondering if we should have the gateways on different nodes than the NSDs. Any opinions? What about failovers and maintenance? Could one role affect the other? Thanks Matt From cblack at nygenome.org Fri May 31 20:09:46 2019 From: cblack at nygenome.org (Christopher Black) Date: Fri, 31 May 2019 19:09:46 +0000 Subject: [gpfsug-discuss] Gateway role on a NSD server Message-ID: <59BC2553-2F56-4863-A353-C2E2062DA92D@nygenome.org> We've done it both ways.
You will get better performance and fewer challenges ensuring processes and memory don't step on each other if the AFM gateway node is not also doing NSD server work. However, using an NSD server that mounts two filesystems (one via mmremotefs from another cluster) did work. Best, Chris On 5/31/19, 2:56 PM, "gpfsug-discuss-bounces at spectrumscale.org on behalf of Weil, Matthew" wrote: Hello all, How important is it to separate these two roles? Planning on using AFM, and I am wondering if we should have the gateways on different nodes than the NSDs. Any opinions? What about failovers and maintenance? Could one role affect the other? Thanks Matt _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ This message is for the recipient's use only, and may contain confidential, privileged or protected information. Any unauthorized use or dissemination of this communication is prohibited. If you received this message in error, please immediately notify the sender and destroy all copies of this message. The recipient should check this email and any attachments for the presence of viruses, as we accept no liability for any damage caused by any virus transmitted by this email.