From martin.gasthuber at desy.de  Mon Nov 2 13:53:49 2015
From: martin.gasthuber at desy.de (Martin Gasthuber)
Date: Mon, 2 Nov 2015 14:53:49 +0100
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
Message-ID: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>

Hi,

we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon has its Ethernet ports inside a DMZ. On the other hand, all NSD access is through IB (and should stay that way). The biggest concern is a possible intrusion from that ftp host (running as a GPFS client) through the IB infrastructure to the other cluster nodes, possibly causing big trouble for the scientific data. Does anybody here have similar constraints, and possible solutions to mitigate that risk?

best regards,
  Martin

From jonathan at buzzard.me.uk  Mon Nov 2 14:20:06 2015
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Mon, 02 Nov 2015 14:20:06 +0000
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
Message-ID: <1446474006.17909.120.camel@buzzard.phy.strath.ac.uk>

On Mon, 2015-11-02 at 14:53 +0100, Martin Gasthuber wrote:
> we are currently in discussion with our local network security people
> about the plan to make certain data accessible to outside scientists
> via ftp [...] Does anybody here have similar constraints, and possible
> solutions to mitigate that risk?

Would it not make sense to export it via NFS over Ethernet from the GPFS cluster to the FTP node, firewall it up the wazoo, and avoid the server licenses anyway?

Note that if you already offer local users remote access to your "cluster", the additional attack surface from an FTP server is minimal to begin with.

All said and done, one suspects that 99.999% of hackers have precisely zero experience with InfiniBand and would thus struggle to exploit the IB fabric beyond using IPoIB.

JAB.

--
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.

From frederik.ferner at diamond.ac.uk  Mon Nov 2 14:46:49 2015
From: frederik.ferner at diamond.ac.uk (Frederik Ferner)
Date: Mon, 2 Nov 2015 14:46:49 +0000
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
Message-ID: <56377759.4060904@diamond.ac.uk>

On 02/11/15 13:53, Martin Gasthuber wrote:
> we are currently in discussion with our local network security people
> about the plan to make certain data accessible to outside scientists
> via ftp [...]

Martin,

we have a very similar situation here at Diamond with our GridFTP/Globus endpoint.

We have a machine with full access to our high performance file systems in our internal network, which then exports those over NFS over a private point-to-point fibre to a machine in the DMZ. This is also firewalled with iptables on the link on the internal machine to only allow NFS traffic.
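In case it is useful, the shape of it is roughly the following - a minimal sketch only, with paths, addresses and interface names illustrative rather than copied from our real configuration:

  # /etc/exports on the internal machine: export the file system
  # read-only to the single DMZ host only
  /gpfs/data  192.168.10.2(ro,root_squash,no_subtree_check)

  # iptables on the internal machine: allow only NFSv4 (TCP 2049)
  # from the DMZ host on the point-to-point link, drop everything else
  iptables -A INPUT -i eth2 -p tcp -s 192.168.10.2 --dport 2049 -j ACCEPT
  iptables -A INPUT -i eth2 -j DROP

NFSv4 keeps the firewall rules simple because everything runs over port 2049; with NFSv3 you would also have to pin down and open the portmapper and mountd ports.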
This has so far provided sufficient performance to our users.

Kind regards,
Frederik

--
Frederik Ferner
Senior Computer Systems Administrator (storage)   phone: +44 1235 77 8624
Diamond Light Source Ltd.                         mob:   +44 7917 08 5110
Duty Sys Admin can be reached on x8596

From service at metamodul.com  Mon Nov 2 15:00:07 2015
From: service at metamodul.com (MetaService)
Date: Mon, 02 Nov 2015 16:00:07 +0100
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
Message-ID: <1446476407.7183.108.camel@pluto>

I would think about using a dedicated GPFS remote cluster. Advantages:

- If required, the remote cluster can be shut down without impacting the home cluster.
- You can add additional types of access on the remote cluster.
- You could implement an HA solution to make those access types highly available.

But you must be aware that you need a GPFS server license.

Cheers
Hajo

From ewahl at osc.edu  Mon Nov 2 15:22:19 2015
From: ewahl at osc.edu (Wahl, Edward)
Date: Mon, 2 Nov 2015 15:22:19 +0000
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de>
Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu>

First off, let me recommend vsftpd. We've used that in a few single point-to-point cases with excellent results.

Next, I'm going to agree with Jonathan here: any hacker who gains advantage on an FTP server will probably not have the knowledge to take advantage of the IB. However, there are some steps you could take to mitigate the risk on a node such as you are thinking of:

-Perhaps an NFS share from an NSD across IB instead of being a native GPFS client? This would remove any possibility of escalation exploits gaining access to other servers via SSH keys on the IB fabric, but will reduce this node's speed of access. On the other hand, almost any IB faster than SDR is probably going to wait on the external network anyway, unless it's 40Gb or 100Gb attached.

-Firewalled access and/or a narrow corridor for ftp access. This is pretty much a must.

-A fail2ban-like product checking the ftp logs. Takes some work, but if the firewall isn't narrow enough this is worth it.
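For the fail2ban piece, something along these lines is what I have in mind - a sketch only, with the log path and limits illustrative and dependent on your distro and ftp daemon:

  # /etc/fail2ban/jail.local
  [vsftpd]
  enabled  = true
  port     = ftp,ftp-data
  filter   = vsftpd
  logpath  = /var/log/vsftpd.log
  maxretry = 5
  bantime  = 3600

fail2ban ships a vsftpd filter, so this is mostly just switching the jail on and pointing it at the right log. For the narrow corridor, think port 21 plus a pinned passive range (pasv_min_port/pasv_max_port in vsftpd.conf) and nothing else inbound.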
Ed Wahl
OSC

________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [martin.gasthuber at desy.de]
Sent: Monday, November 02, 2015 8:53 AM
To: gpfsug main discussion list
Subject: [gpfsug-discuss] GPFS (partly) inside dmz

we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp [...]

From martin.gasthuber at desy.de  Mon Nov 2 20:49:02 2015
From: martin.gasthuber at desy.de (Martin Gasthuber)
Date: Mon, 2 Nov 2015 21:49:02 +0100
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To: <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu>
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu>
Message-ID:

the NFS path has already been checked - the problem here is not the bandwidth (the WAN ports allow for 2 x 10GE), it's the file rate we need to optimize. With NFS in between GPFS and FTP, we saw roughly half the file download rate. My concern is also not really about raw IB access and misuse - it's about IPoIB: in order to minimize the risk, we would have to reconfigure all other cluster nodes to refuse IP connections through the IB ports from that node - more work, less fun! Probably we will have to go the slower NFS way ;-)

best regards,
  Martin

> On 2 Nov, 2015, at 16:22, Wahl, Edward wrote:
>
> First off, let me recommend vsftpd. We've used that in a few single
> point-to-point cases with excellent results.
> [...]

From peserocka at gmail.com  Tue Nov 3 02:32:56 2015
From: peserocka at gmail.com (Pete Sero)
Date: Tue, 3 Nov 2015 10:32:56 +0800
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To:
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu>
Message-ID:

Have you tested prefetching reads on the NFS server node? That should help for streaming reads, as ultimately initiated by the ftp user.
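A quick way to experiment, assuming for the moment a plain local block device behind the export (device name illustrative):

  # show the current readahead, in 512-byte sectors
  blockdev --getra /dev/sdb
  # bump it for streaming reads, e.g. to 8 MiB
  blockdev --setra 16384 /dev/sdb

For a GPFS-backed export the equivalent lever would be GPFS's own sequential prefetch (pagepool size and prefetch tuning) rather than the block layer - worth measuring either way with a typical file mix.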
-- Peter

On 2015 Nov 3 Tue, at 04:49, Martin Gasthuber wrote:

> the NFS path has already been checked - the problem here is not the
> bandwidth (the WAN ports allow for 2 x 10GE), it's the file rate we
> need to optimize. [...]

From janfrode at tanso.net  Tue Nov 3 09:16:09 2015
From: janfrode at tanso.net (Jan-Frode Myklebust)
Date: Tue, 3 Nov 2015 10:16:09 +0100
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To:
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu>
Message-ID:

I would be very wary about stretching a cluster between DMZs. IMHO the nodes are too tightly connected for that.

I just saw the DESY/GPFS talk at IBM Technical University in Cannes, and it was mentioned that you had moved from 60 MB/s to 600 MB/s going from un-tuned to tuned NFS over 10GbE. Sounded quite impressive. Are you saying putting FTP on top of those 600 MB/s kills the performance / download rate?

Maybe AFM, with a read-only cache, would allow you to get better performance by caching the content on the FTP servers? Then the only opening you should need between the DMZs would be the NFS port for a read-only export.
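Roughly like this on the DMZ-side cluster - a sketch only, with file system, fileset and export names made up for illustration (details vary with the code level):

  # create a read-only AFM cache fileset backed by the internal NFS export
  mmcrfileset dmzfs ftpcache --inode-space new \
      -p afmMode=ro \
      -p afmTarget=nfs://internal-nsd/gpfs/data
  mmlinkfileset dmzfs ftpcache -J /gpfs/dmzfs/ftpcache

First access of a file populates the cache; after that the ftp daemon reads at local GPFS speed, and only the AFM gateway node needs to reach the internal export.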
-jf

On Mon, Nov 2, 2015 at 9:49 PM, Martin Gasthuber wrote:

> the NFS path has already been checked - the problem here is not the
> bandwidth (the WAN ports allow for 2 x 10GE), it's the file rate we
> need to optimize. [...]

From orlando.richards at ed.ac.uk  Wed Nov 4 18:18:21 2015
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Wed, 4 Nov 2015 18:18:21 +0000
Subject: [gpfsug-discuss] AFM performance under load
Message-ID: <563A4BED.1040801@ed.ac.uk>

Hi folks,

We're trying to get our AFM stack to remain responsive when under a heavy write load from the cache -> home. It looks like read operations won't get scheduled when there's a large write queue, and operations like "ls" in a directory which isn't currently valid in the cache can take several minutes to return.

Does anyone have any ideas on how to stop AFM lookups running slowly when the AFM queues are big?

-----------
Orlando

--
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
From S.J.Thompson at bham.ac.uk  Thu Nov 5 16:51:00 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Thu, 5 Nov 2015 16:51:00 +0000
Subject: [gpfsug-discuss] Running the gui
Message-ID:

Quick question: the GUI and performance monitor have to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license?

Thanks

Simon

From Robert.Oesterlin at nuance.com  Thu Nov 5 16:55:42 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Thu, 5 Nov 2015 16:55:42 +0000
Subject: [gpfsug-discuss] Running the gui
Message-ID: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com>

Well, in my beta testing it runs just fine on a client-licensed node. Can't imagine it requiring a server license.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

From: "Simon Thompson (Research Computing - IT Services)"
Date: Thursday, November 5, 2015 at 11:51 AM
> Quick question: the GUI and performance monitor have to run on a node
> in the cluster. Does anyone know if that can be any node? Or does it
> have to have a server license?

From S.J.Thompson at bham.ac.uk  Thu Nov 5 17:10:46 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Thu, 5 Nov 2015 17:10:46 +0000
Subject: [gpfsug-discuss] Running the gui
In-Reply-To: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com>
References: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com>
Message-ID:

Yeah. "Works" and "requires" is what I'm trying to figure out.

Simon

> Well, in my beta testing it runs just fine on a client-licensed node.
> Can't imagine it requiring a server license.

From duersch at us.ibm.com  Mon Nov 9 16:27:54 2015
From: duersch at us.ibm.com (Steve Duersch)
Date: Mon, 9 Nov 2015 11:27:54 -0500
Subject: [gpfsug-discuss] Running the GUI
Message-ID:

I have confirmed that the GUI will run on a client license and is fully supported there. It can be any node.

Steve Duersch
Spectrum Scale (GPFS) FVTest
IBM Poughkeepsie, New York

> Quick question: the GUI and performance monitor have to run on a node
> in the cluster. Does anyone know if that can be any node? Or does it
> have to have a server license?
>
> Thanks
>
> Simon
From st.graf at fz-juelich.de  Tue Nov 10 07:53:19 2015
From: st.graf at fz-juelich.de (Stephan Graf)
Date: Tue, 10 Nov 2015 08:53:19 +0100
Subject: [gpfsug-discuss] ILM and Backup Question
In-Reply-To: <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com>
References: <81E9FF09-D666-4BD1-A727-39AF4ED1F54B@iu.edu> <562DE7B5.7080303@fz-juelich.de> <201510262114.t9QLENpG024083@d01av01.pok.ibm.com> <562F21B7.8040007@fz-juelich.de> <201510271526.t9RFQ2Bw027971@d03av02.boulder.ibm.com> <563081E9.2090605@fz-juelich.de> <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com>
Message-ID: <5641A26F.4070405@fz-juelich.de>

Hi Wayne.

Just to come back to the mmbackup performance: here is the way we call it, and the performance results.

MTHREADS=1
QOPT=""   # we check the last run and set this to '-q' if required
/usr/lpp/mmfs/bin/mmbackup /$FS -S $SNAPFILE -g /work/root/mmbackup -a 4 $QOPT -m $MTHREADS -B 1000 -N justt sms04c1 --noquote --tsm-servers home -v

--------------------------------------------------------
mmbackup: Backup of /homeb begins at Mon Nov 9 00:03:30 MEZ 2015.
--------------------------------------------------------
...
Mon Nov 9 00:03:35 2015 mmbackup:Scanning file system homeb
Mon Nov 9 03:07:17 2015 mmbackup:File system scan of homeb is complete.
Mon Nov 9 03:07:17 2015 mmbackup:Calculating backup and expire lists for server home
Mon Nov 9 03:07:17 2015 mmbackup:Determining file system changes for homeb [home].
Mon Nov 9 03:44:33 2015 mmbackup:changed=126305, expired=10086, unsupported=0 for server [home]
Mon Nov 9 03:44:33 2015 mmbackup:Finished calculating lists [126305 changed, 10086 expired] for server home.
Mon Nov 9 03:44:33 2015 mmbackup:Sending files to the TSM server [126305 changed, 10086 expired].
Mon Nov 9 03:44:33 2015 mmbackup:Performing expire operations
Mon Nov 9 03:45:32 2015 mmbackup:Completed policy expiry run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 03:45:32 2015 mmbackup:Policy for expiry returned 0 Highest TSM error 0
Mon Nov 9 03:45:32 2015 mmbackup:Performing backup operations
Mon Nov 9 04:54:29 2015 mmbackup:Completed policy backup run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 04:54:29 2015 mmbackup:Policy for backup returned 0 Highest TSM error 0
  Total number of objects inspected:    137562
  Total number of objects backed up:    127476
  Total number of objects updated:          0
  Total number of objects rebound:           0
  Total number of objects deleted:           0
  Total number of objects expired:       10086
  Total number of objects failed:            0
  Total number of bytes transferred:       427 GB
  Total number of objects encrypted:         0
  Total number of bytes inspected: 459986708656
  Total number of bytes transferred: 459989351070
Mon Nov 9 04:54:29 2015 mmbackup:analyzing: results from home.
Mon Nov 9 04:54:29 2015 mmbackup:Analyzing audit log file /homeb/mmbackup.audit.homeb.home
Mon Nov 9 05:02:46 2015 mmbackup:updating /homeb/.mmbackupShadow.1.home with /homeb/.mmbackupCfg/tmpfile2.mmbackup.homeb
Mon Nov 9 05:02:46 2015 mmbackup:Copying updated shadow file to the TSM server
Mon Nov 9 05:03:51 2015 mmbackup:Done working with files for TSM Server: home.
Mon Nov 9 05:03:51 2015 mmbackup:Completed backup and expire jobs.
Mon Nov 9 05:03:51 2015 mmbackup:TSM server home had 0 failures or excluded paths and returned 0. Its shadow database has been updated. Shadow DB state:updated
Mon Nov 9 05:03:51 2015 mmbackup:Completed successfully. exit 0
----------------------------------------------------------
mmbackup: Backup of /homeb completed successfully at Mon Nov 9 05:03:51 MEZ 2015.
----------------------------------------------------------

Stephan

On 10/28/15 14:36, Wayne Sawdon wrote:
> You have to use both options even if -N is only the local node. Sorry,
>
> -Wayne
>
> > From: Stephan Graf
> > We are running the mmbackup on an AIX system (oslevel -s:
> > 6100-07-10-1415, GPFS build 4.1.0.8), so we only use one node for the
> > policy run. We are using -g, and we only want to run it on one node,
> > so we don't use the -N option.
>
> Even on one node you should see a speedup using -g and -N.
From makaplan at us.ibm.com  Tue Nov 10 16:20:18 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Tue, 10 Nov 2015 11:20:18 -0500
Subject: [gpfsug-discuss] ILM and Backup Question
In-Reply-To: <5641A26F.4070405@fz-juelich.de>
References: <81E9FF09-D666-4BD1-A727-39AF4ED1F54B@iu.edu> <562DE7B5.7080303@fz-juelich.de> <201510262114.t9QLENpG024083@d01av01.pok.ibm.com> <562F21B7.8040007@fz-juelich.de> <201510271526.t9RFQ2Bw027971@d03av02.boulder.ibm.com> <563081E9.2090605@fz-juelich.de> <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com> <5641A26F.4070405@fz-juelich.de>
Message-ID: <201511101620.tAAGKRg0010175@d03av03.boulder.ibm.com>

OOPS... mmbackup uses mmapplypolicy. Unfortunately the script "mmapplypolicy" is a little "too smart". When you use the "-N mynode" parameter, it sees that you are referring to just the node upon which you are executing, and does not pass the -N argument to the underlying tsapolicy command. (Not my idea, just telling you what's there.)

So right now, to force the parallelized inode scan on a single node, please just use the tsapolicy command with -N and -g.
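Illustratively - file system name, policy file and node name made up, and do double-check the exact tsapolicy arguments at your code level, since it is the unwrapped engine:

  # wrapper: a -N naming only the local node gets swallowed
  mmapplypolicy fs1 -P rules.pol -N mynode -g /fs1/tmp
  # engine: passes -N and -g through as given
  /usr/lpp/mmfs/bin/tsapolicy fs1 -P rules.pol -N mynode -g /fs1/tmp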
tsapolicy doesn't do such smart argument checking; it is also missing the nodefile, nodeclass, defaultHelperNodes stuff... those are some of the "value add" of the mmapplypolicy script.

If you're running the parallel version with message level -L 1, you will see this message:

  [I] 2015-11-10@15:57:47.871 Parallel-piped sort and policy evaluation. 5 files scanned.

Otherwise you will see this message:

  [I] 2015-11-10@15:49:44.816 Policy evaluation. 5 files scanned.

But... if you're running mmapplypolicy under mmbackup... a little more hacking is required.

From Robert.Oesterlin at nuance.com  Wed Nov 11 13:01:30 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 11 Nov 2015 13:01:30 +0000
Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
Message-ID: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>

The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!

The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.

Here is the agenda:

1:00 - 1:10  GPFS-UG US chapter overview - Bob Oesterlin / Kristy Kallback-Rose
1:10 - 1:20  Kick-off - Doris Conti / Akhtar Ali
1:20 - 2:10  Roadmap & technical deep dive - Scott Fadden
2:10 - 2:30  GUI demo - Ben Randall
2:30 - 3:00  Product quality improvement updates - Hye-Young
3:00 - 3:15  Break
3:15 - 3:35  The Hartree Centre: past, present and future - Colin Morey of UK HPC
3:35 - 4:00  Low latency performance with flash - Mark Weghorst of Travelport
4:00 - 4:25  "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
4:25 - 4:50  "Large Data Ingest Architecture" - Martin Gasthuber of DESY
4:50 - 5:45  Panel discussion: "My favorite tool for managing Spectrum Scale is..."
             Panelists: Bob Oesterlin, Nuance (Arxscan); Wolfgang Bring, Juelich (homegrown); Mark Weghorst, Travelport (open source based on Grafana & InfluxDB)
5:45         Welcome reception by DSS (sponsoring reception)

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

From service at metamodul.com  Wed Nov 11 16:57:49 2015
From: service at metamodul.com (service at metamodul.com)
Date: Wed, 11 Nov 2015 17:57:49 +0100 (CET)
Subject: [gpfsug-discuss] GPFS and High Availability, GPFS and the System i (AS/400)
Message-ID: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>

@IBM

GPFS and HA: GPFS now has the so-called protocol nodes, which provide an HA environment for NFS and Samba. I assume it is based on CTDB, since CTDB already supports a few protocols.*

What I would like to see is a generic HA interface using GPFS. It could be based on CTDB, native GPFS callbacks, or any service providing HA functionality on top of a clustered FS. Such a service would allow - with only minor extensions - making almost any service (Oracle, DB2, FTP, SSH, NFS, cron, TSM and so on) HA. So IMHO the current approach is a little bit shortsighted.

GPFS and System i: I am looking forward to the day we have a SQL interface/API to GPFS, storing DB objects natively on GPFS rather than in any kind of additional DB files. Now, if you had such an interface, what about a general modern language which supports SQL and can run across multiple nodes? Who knows... maybe the AS/400 gets reinvented.

cheers
Hajo

Reference:
* https://ctdb.samba.org/documentation.html

From sfadden at us.ibm.com  Wed Nov 11 19:12:05 2015
From: sfadden at us.ibm.com (Scott Fadden)
Date: Wed, 11 Nov 2015 11:12:05 -0800
Subject: [gpfsug-discuss] GPFS and High Availability, GPFS and the System i (AS/400)
In-Reply-To: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>
References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>
Message-ID: <201511111921.tABJLbrG011143@d01av04.pok.ibm.com>

It is probably not what you are looking for, but I did implement a two-node HA solution for SNMP using callbacks. You could do something like that in the near term.
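The core of it is just a script registered against cluster events. A minimal sketch - the event names are real callback events, while the script path and service are illustrative:

  # fail the SNMP service over when a node drops out of the cluster
  mmaddcallback snmpFailover --command /usr/local/bin/snmp-failover.sh \
      --event nodeLeave --parms "%eventNode"
  # and reclaim it when the node rejoins
  mmaddcallback snmpFailback --command /usr/local/bin/snmp-failover.sh \
      --event nodeJoin --parms "%eventNode"

The full working configuration is in the write-up: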
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Implementing%20a%20GPFS%20HA%20SNMP%20configuration%20using%20Callbacks

Scott Fadden
Spectrum Scale - Technical Marketing
Phone: (503) 880-5833
sfadden at us.ibm.com
http://www.ibm.com/systems/storage/spectrum/scale

From: "service at metamodul.com"
> What I would like to see is a generic HA interface using GPFS. It could
> be based on CTDB, native GPFS callbacks, or any service providing HA
> functionality on top of a clustered FS. [...]

From RWelp at uk.ibm.com  Thu Nov 12 20:11:27 2015
From: RWelp at uk.ibm.com (Richard Welp)
Date: Thu, 12 Nov 2015 20:11:27 +0000
Subject: [gpfsug-discuss] Meet the Devs - Edinburgh
Message-ID:

Hello All,

I recently posted a blog entry to the User Group website outlining the Meet the Devs meeting we had in Edinburgh. If you are interested, here is a link to the recap -> http://www.spectrumscale.org/meet-the-devs-edinburgh/

Thanks,
Rick

===================
Rick Welp
Software Engineer
Master Inventor
Email: rwelp at uk.ibm.com
phone: +44 0161 214 0461

IBM Systems - Manchester Lab
IBM UK Limited
--------------------------
Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From volobuev at us.ibm.com  Fri Nov 13 00:08:22 2015
From: volobuev at us.ibm.com (Yuri L Volobuev)
Date: Thu, 12 Nov 2015 16:08:22 -0800
Subject: [gpfsug-discuss] NSD Server Design and Tuning
Message-ID: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com>

Hi,

The subject of GPFS NSD server tuning, and the underlying design that dictates tuning choices, has been coming up repeatedly in various forums, including this mailing list. Clearly, this topic hasn't been documented in sufficient detail. It is my sincere hope that the new document on the subject is going to provide some relief: https://ibm.biz/BdHq5v

As always, feedback is welcome.

yuri

From carlz at us.ibm.com  Fri Nov 13 13:33:01 2015
From: carlz at us.ibm.com (Carl Zetie)
Date: Fri, 13 Nov 2015 08:33:01 -0500
Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale
Message-ID: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com>

In response to requests from the community, we've added a new way to submit public enhancement requests (RFEs) for Scale.

In the past, RFEs were private, which was great for business-sensitive requests, but meant that other people couldn't effectively vote on them; and requests would often be duplicated because people couldn't see the detail of existing requests. So now we have TWO ways to submit a request. When you go to the RFE page on developerWorks (https://www.ibm.com/developerworks/rfe/), you'll find two entries for Scale in the "products": one for Private RFEs (same as previously), and one for Public RFEs.
Simply choose the visibility you want. Internally, they all go into the same evaluation process.

A couple of notes:
- Even with a public request, certain fields are still private, including Company Name and Business Justification.
- All existing requests remain private. If you have one that you want flipped, please contact me off-list with the request number.

regards,
Carl

Carl Zetie
Product Manager for Spectrum Scale, IBM
(540) 882 9353 ][ 15750 Brookhill Ct, Waterford VA 20197
carlz at us.ibm.com

From Robert.Oesterlin at nuance.com  Fri Nov 13 20:33:55 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 13 Nov 2015 20:33:55 +0000
Subject: [gpfsug-discuss] NSD Server Design and Tuning
In-Reply-To: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com>
References: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com>
Message-ID:

Yuri - this is a fantastic document! Thanks for taking the time to put it together. I'll probably have a lot more questions after I really look at my NSD configuration.

Encourage the Spectrum Scale team to do more of these.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

> The subject of GPFS NSD server tuning, and the underlying design that
> dictates tuning choices, has been coming up repeatedly in various
> forums, including this mailing list. [...] https://ibm.biz/BdHq5v

From bsallen at alcf.anl.gov  Fri Nov 13 21:21:36 2015
From: bsallen at alcf.anl.gov (Allen, Benjamin S.)
Date: Fri, 13 Nov 2015 21:21:36 +0000
Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
In-Reply-To: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>
References: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>
Message-ID: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>

Hi Bob,

For those of us that can't make SC this year, could you possibly collect slides and share them with the group afterwards?

Thanks,

Ben

> On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote:
>
> The GPFS UG meeting at SC15 is just a few days away. We have close to
> 200 signed up, so it should be a great time!
> [...]

From S.J.Thompson at bham.ac.uk  Fri Nov 13 21:34:58 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Fri, 13 Nov 2015 21:34:58 +0000
Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
In-Reply-To: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>
References: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>, <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>
Message-ID:

Hi Ben,

We always try and ask whether people are happy for their slides to be posted online afterwards. Obviously if there are NDA slides in the deck then we can't share.

Simon
[bsallen at alcf.anl.gov] Sent: 13 November 2015 21:21 To: gpfsug main discussion list Cc: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda Hi Bob, For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards? Thanks, Ben > On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: > > The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time! > > The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It?s pretty packed ? I?m sure there will be time after and during the week for extended discussions. > > Here is the agenda: > > 1:00 - 1:10 - GPFS-UG US chapter Overview ? Bob Oesterlin /Kristy Kallback-Rose > 1:10 - 1:20 Kick-off ? Doris Conti/Akhtar Ali > 1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden > 2:10 - 2:30 GUI Demo- Ben Randall > 2:30 - 3:00 Product quality improvement updates - Hye-Young > > 3:00 - 3:15 Break > > 3:10 to 3:35 The Hartree Centre, Past, present and future - Colin Morey of UK HPC > 3:35 to 4:00 Low Latency performance with Flash - Mark Weghorst of Travelport > 4:00 to 4:25 "Performance Tuning & results with Latest ESS configurations? - Matt Forney & Bernard of WSU/Ennovar > 4:25 to 4:50 "Large Data Ingest Architecture? - Martin Gasthuber of DESY > 4:50 ? 5:45 Panel Discussion: "My favorite tool for managing Spectrum Scale is...? > Panelists: > Bob Oesterlin, Nuance (Arxscan) > Wolfgang Bring, Julich (homegrown) > Mark Weghorst, Travelport (open source based on Graphana & FluxDB) > > 5:45 ?Welcome Reception by DSS (sponsoring reception) > > > Bob Oesterlin > Sr Storage Engineer, Nuance Communications > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From kallbac at iu.edu Fri Nov 13 21:44:22 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Fri, 13 Nov 2015 16:44:22 -0500 Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda In-Reply-To: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov> Message-ID: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com> We will collect as many as we can and put up with a blog post. Kristy On Nov 13, 2015 4:21 PM, "Allen, Benjamin S." wrote: > > Hi Bob, > > For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards? > > Thanks, > > Ben > > > On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: > > > > The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time! > > > > The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It?s pretty packed ? I?m sure there will be time after and during the week for extended discussions. > > > > Here is the agenda: > > > > 1:00 - 1:10 - GPFS-UG US chapter Overview ? Bob Oesterlin /Kristy Kallback-Rose > > 1:10 - 1:20 Kick-off ? Doris Conti/Akhtar Ali > > 1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden > > 2:10 - 2:30 GUI Demo- Ben Randall > > 2:30 - 3:00 Product quality improvement updates - Hye-Young > > > > 3:00 - 3:15 Break > > > > 3:10 to 3:35 The? 
> > Bob Oesterlin > > Sr Storage Engineer, Nuance Communications > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From bsallen at alcf.anl.gov Fri Nov 13 22:22:29 2015 From: bsallen at alcf.anl.gov (Allen, Benjamin S.) Date: Fri, 13 Nov 2015 22:22:29 +0000 Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda In-Reply-To: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com> References: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com> Message-ID: <2602E279-E811-4AB4-8E77-746D96B28B34@alcf.anl.gov> Thanks Kristy and Simon. Ben > On Nov 13, 2015, at 3:44 PM, Kristy Kallback-Rose wrote: > > We will collect as many as we can and put them up with a blog post. > > Kristy > > On Nov 13, 2015 4:21 PM, "Allen, Benjamin S." wrote: >> >> Hi Bob, >> >> For those of us that can't make SC this year, could you possibly collect slides and share them with the group afterwards? >> >> Thanks, >> >> Ben >> >>> On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: >>> >>> The GPFS UG meeting at SC15 is just a few days away. [...]
>>> Bob Oesterlin >>> Sr Storage Engineer, Nuance Communications >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From Robert.Oesterlin at nuance.com Sun Nov 15 00:55:56 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 15 Nov 2015 00:55:56 +0000 Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale In-Reply-To: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com> References: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com> Message-ID: Great news Carl - thanks for your help in getting this in place. Bob Oesterlin Sr Storage Engineer, Nuance Communications From: > on behalf of Carl Zetie > Reply-To: gpfsug main discussion list > Date: Friday, November 13, 2015 at 7:33 AM To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale In response to requests from the community, we've added a new way to submit Public enhancement requests (RFEs) for Scale. In the past, RFEs were private, which was great for business-sensitive requests, but meant that other people couldn't effectively vote on them; and requests would often be duplicated because people couldn't see the detail of existing requests. So now we have TWO ways to submit a request. When you go to the RFE page on developerworks (https://www.ibm.com/developerworks/rfe/), you'll find two entries for Scale in the "products": one for Private RFEs (same as previously), and one for Public RFEs. Simply choose the visibility you want. Internally, they all go into the same evaluation process. A couple of notes: - Even with a public request, certain fields are still private, including Company Name and Business Justification - All existing requests remain Private. If you have one that you want flipped, please contact me off-list with the request number regards, Carl Carl Zetie Product Manager for Spectrum Scale, IBM (540) 882 9353 | 15750 Brookhill Ct, Waterford VA 20197 carlz at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL:

From chair at spectrumscale.org Mon Nov 16 12:26:52 2015 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 16 Nov 2015 06:26:52 -0600 Subject: [gpfsug-discuss] SC15 UG Survey Message-ID: Hi, For those at yesterday's meeting at SC15, just a reminder that there is an online survey for feedback at: http://www.surveymonkey.com/r/SSUGSC15 Thanks to all the speakers yesterday and to Kristy, Bob and the IBM people (Doug, Pallavi) for making it happen. Simon

From service at metamodul.com Mon Nov 16 18:13:05 2015 From: service at metamodul.com (service at metamodul.com) Date: Mon, 16 Nov 2015 19:13:05 +0100 (CET) Subject: [gpfsug-discuss] GPFS and High Availability, GPFS and the System i (AS/400) In-Reply-To: <201511111920.tABJK3Ga016406@d01av05.pok.ibm.com> References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de> <201511111920.tABJK3Ga016406@d01av05.pok.ibm.com> Message-ID: <772407947.175151.1447697585599.JavaMail.open-xchange@oxbaltgw02.schlund.de> Hi Scott, > > It is probably not what you are looking for, but I did implement a two node > HA solution using callbacks for SNMP. ... I knew about the very old GPFS callbacks (preUnmount ...) and even wrote my own generic HA API for GPFS based on them. I am trying to make IBM aware that they have a very nice product (GPFS) which just needs a little HA API on top to be able to provide generic HA application support out of the box. I must admit that I could rewrite my own HA API (a script and a config file ...) for GPFS, but I have no time or money for it. I must also admit that I am not the best shell script writer ... Cheers Hajo -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:
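[For reference, callbacks of the kind described above are registered with the mmaddcallback command. A minimal sketch - the script path, its contents and the exact event list are illustrative assumptions, not something from this thread:

# register a hook script that fires on mount/unmount events
mmaddcallback HA-hook \
  --command /usr/local/sbin/ha-hook.sh \
  --event preMount,preUnmount,unmount \
  --parms "%eventName %fsName"

The script receives the event name and file system name and can start or stop dependent services accordingly - roughly the building block for the "little HA API on top" being asked for.]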
From chair at spectrumscale.org Mon Nov 16 23:47:51 2015 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 16 Nov 2015 17:47:51 -0600 Subject: [gpfsug-discuss] SC15 User Group Slides Message-ID: Hi All, Slides from the SC15 user group meeting in Austin have been posted to the UG website at: http://www.spectrumscale.org/presentations/ Simon

From cphoffma at lanl.gov Fri Nov 20 16:52:23 2015 From: cphoffma at lanl.gov (Hoffman, Christopher P) Date: Fri, 20 Nov 2015 16:52:23 +0000 Subject: [gpfsug-discuss] GPFS API Question Message-ID: Greetings, I hope this is the correct place to post this; if not, I apologize. I'm attempting to work with extended attributes on GPFS using the C API interface. I want to be able to read attributes and then, based off that value, change the attribute. What I've done so far is a policy scan that collects certain inodes based off an xattr value. From there I collect inode numbers. Just to clarify, I'm trying not to work with a path name of any sort, just the inode. There are these functions: int gpfs_igetattrsx(gpfs_ifile_t *ifile, int flags, void *buffer, int bufferSize, int *attrSize); and int gpfs_iputattrsx(gpfs_ifile_t *ifile, int flags, void *buffer, const char *pathName); I'm looking at how to use iputattrsx, but the void *buffer part confuses me as to what struct to use. I've been playing with igetattrsx to try to figure out what struct to use based off the data I am seeing. I've come across gpfsGetSetXAttr_t but haven't had any luck using it. My question is: is it even possible to manipulate custom xattrs via the GPFS API? If so, any ideas on what I am doing wrong? Thanks, Christopher -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From makaplan at us.ibm.com Fri Nov 20 17:39:04 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 20 Nov 2015 12:39:04 -0500 Subject: [gpfsug-discuss] GPFS API Question - extended attributes In-Reply-To: References: Message-ID: <201511201739.tAKHdBBG006478@d01av03.pok.ibm.com> If you're using policy rules and the xattr() SQL function, then you should consider using the setXattr() SQL function to set or change the value of any particular extended attribute. Notice that the doc says: gpfs_igetattrs() subroutine: Retrieves extended file attributes in opaque format. What it does is pick up all the extended attributes of a given file and return them in a "blob". The structure of the blob is undocumented, so you should not use it to set individual extended attributes. The intended use is for backup and restore of a file's extended attributes, and you get an ACL also as a bonus. The doc says: "This subroutine is intended for use by a backup program to save all extended file attributes (ACLs, attributes, and so forth)." If you are determined to use a C API to manipulate extended attributes, I personally recommend that you first see and try whether the standard OS methods will work for you. That means your code will work for any file system that can be mounted on your OS that supports extended attributes. BUT, unfortunately I have found that some extended attribute names with special prefix values cannot be accessed with the standard Linux or AIX or Posix commands or APIs. In that case you need to use the GPFS API, GPFS_FCNTL_SET_XATTR (see gpfs_fcntl.h), which is indeed what setXattr() is using and what the mmchattr command ultimately uses. Notice that setXattr() requires you to pass the new value as an SQL string. So what if you need to store a numeric value as a "binary" value? Well, first figure out how to represent the value as a hexadecimal constant and then use this notation: setXattr('user.whatever', X'0123456789ABCDEF') In some common situations you can use the m4 processor to build or tear down binary and/or hexadecimal values and strings. For some examples of how to do that, add this to a test policy rules file: debugfile(/tmp/m4xdeb) dumpdef And peek into the resulting m4xdeb file! -------------- next part -------------- An HTML attachment was scrubbed... URL:
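[To make the policy-rule route concrete, a small sketch of a rules file - the attribute name and values are invented, and the empty-EXEC external list is one common way to drive such rules from mmapplypolicy; since setXattr() evaluates as part of the WHERE clause, every file matching the first condition gets its attribute rewritten:

RULE EXTERNAL LIST 'flagged' EXEC ''
RULE 'reflag' LIST 'flagged'
     WHERE XATTR('user.mystate') = 'scanned'
       AND setXattr('user.mystate', 'processed')

Run with something like: mmapplypolicy /gpfs/fs0 -P reflag.pol -I yes]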
From S.J.Thompson at bham.ac.uk Tue Nov 24 12:48:29 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 24 Nov 2015 12:48:29 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome Message-ID: Does anyone know what the call home rpm packages in the 4.2.0 release do? The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it. Searching for "call home" and "callhome" in the online docs doesn't seem to find anything. Anyone any insight on what this is all about? Thanks Simon

From Robert.Oesterlin at nuance.com Tue Nov 24 13:30:11 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 24 Nov 2015 13:30:11 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome Message-ID: <4D197A26-6843-4903-AB89-08F121136F03@nuance.com> It's listed as an "optional" package for Linux nodes, according to the documentation - but I can't find it documented either. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Tuesday, November 24, 2015 at 6:48 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] 4.2.0 and callhome Does anyone know what the call home rpm packages in the 4.2.0 release do? -------------- next part -------------- An HTML attachment was scrubbed... URL:

From PAULROBE at uk.ibm.com Tue Nov 24 13:45:54 2015 From: PAULROBE at uk.ibm.com (Paul Roberts) Date: Tue, 24 Nov 2015 13:45:54 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome In-Reply-To: References: Message-ID: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> Hi Simon, there is a section on call home in the Spectrum Scale 4.2 knowledge centre: http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section which is available as a pdf here: http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Hope that helps give you some idea; I'm sure someone with more knowledge about Call Home can answer any specific queries. Best wishes, Paul ====================================================== Dr Paul Roberts, IBM Spectrum Scale - Development Engineer IBM Systems UK IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424 ====================================================== From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 24/11/2015 12:48 Subject: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org [...] Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL:

From S.J.Thompson at bham.ac.uk Tue Nov 24 13:51:53 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 24 Nov 2015 13:51:53 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome In-Reply-To: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> References: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> Message-ID: Thanks for the pointer Paul. It appears that searching for anything in the docs doesn't work ...
Simon From: > on behalf of Paul Roberts > Reply-To: gpfsug main discussion list > Date: Tuesday, 24 November 2015 at 13:45 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] 4.2.0 and callhome [...] -------------- next part -------------- An HTML attachment was scrubbed... URL:

From knop at us.ibm.com Tue Nov 24 16:35:56 2015 From: knop at us.ibm.com (Felipe Knop) Date: Tue, 24 Nov 2015 11:35:56 -0500 Subject: [gpfsug-discuss] 4.2.0 and callhome In-Reply-To: References: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> Message-ID: <201511241636.tAOGa62F002867@d01av03.pok.ibm.com> Simon, all, The Call Home facility is described in the Advanced Administration Guide http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Chapter 24. Understanding the call home function A problem has been identified with the indexing facility for the Spectrum Scale 4.2 publications. The team is working to rectify that. Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 11/24/2015 08:52 AM Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org [...] -------------- next part -------------- An HTML attachment was scrubbed... URL:

From s.m.killen at leeds.ac.uk Wed Nov 25 17:52:30 2015 From: s.m.killen at leeds.ac.uk (Sean Killen) Date: Wed, 25 Nov 2015 17:52:30 +0000 Subject: [gpfsug-discuss] Introduction Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Hello everyone, Just joined the list to be part of the community, so here is a bit about me. I'm Sean Killen and I work in the Faculty of Biological Sciences at the University of Leeds. I am responsible for Research Computing, UNIX/Linux, Storage and Virtualisation. I am new to GPFS / Spectrum Scale and am currently evaluating a setup with a view to acquiring it, primarily to manage a multi-PetaByte storage system for Research Data coming from our new Electron Microscopes, but also with a view to rolling it out to manage and curate all the research data within the Faculty and beyond. Yours - -- Sean - ------------------------------------------------------------------- Dr Sean M Killen Research Computing Manager, IT Faculty of Biological Sciences University of Leeds LEEDS LS2 9JT United Kingdom Tel: +44 (0)113 3433148 Mob: +44 (0)776 8670907 Fax: +44 (0)113 3438465
GnuPG Key ID: ee0d36f0 - ------------------------------------------------------------------- -----BEGIN PGP SIGNATURE----- iGcEAREKACcgHFMgTSBLaWxsZW4gPHNlYW5Aa2lsbGVucy5jby51az4FAlZV9VUA CgkQEm087+4NNvA+xACg61vxW34Li7tMV8dwNPXy+muO834Anj6ZM2y0j6MWHbRr WFZqTG99oeD+ =GSNu -----END PGP SIGNATURE-----

From tpathare at sidra.org Thu Nov 26 15:47:17 2015 From: tpathare at sidra.org (Tushar Pathare) Date: Thu, 26 Nov 2015 15:47:17 +0000 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. Message-ID: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> Hello Team, Is it possible to share the data on GPFS while disabling data copy? Is it possible through ACLs? Tushar B Pathare High Performance Computing (HPC) Administrator General Parallel File System Scientific Computing Bioinformatics Division Research Sidra Medical and Research Centre PO Box 26999 | Doha, Qatar Burj Doha Tower, Floor 8 D +974 44042250 | M +974 74793547 tpathare at sidra.org | www.sidra.org Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 142717 bytes Desc: image001.png URL:

From jonathan at buzzard.me.uk Thu Nov 26 23:21:22 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 26 Nov 2015 23:21:22 +0000 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> References: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> Message-ID: <565793F2.5070407@buzzard.me.uk> On 26/11/15 15:47, Tushar Pathare wrote: > Hello Team, > > Is it possible to share the data on GPFS while disabling data copy? > > Is it possible through ACLs? > I don't believe that what you are asking is technically possible in any mainstream operating system/file system combination. It certainly cannot be achieved with ACLs, whether Posix, NFSv4 or NTFS. The only way to achieve this sort of thing is using digital rights management, which is way beyond the scope of a file system in itself. These are all application specific. In addition, these are invariably all a busted flush anyway. Torrents of movies etc. are all the proof one needs of this. The short and curlies are: if the end user can view the data in any meaningful way to them, then they can make a copy of that data. From a file system perspective you can't defeat the following command line. $ cat readonly_file > my_evil_copy JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom.
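[As a footnote: granting the read-only half is easy enough. A sketch using GPFS ACLs - the user, group and path names are invented, and the entry format is from memory of mmgetacl output:

#owner:datauser
#group:science
user::rwxc
group::r---
other::----
mask::r---
user:extuser:r---

Saved to a file and applied with mmputacl -i ro.acl /gpfs/fs0/shared/dataset, this lets 'extuser' read but not modify the file. The catch is exactly the point made above: the read permission that makes the data viewable is also all that is needed to copy it.]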
From chair at spectrumscale.org Fri Nov 27 16:01:42 2015 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Fri, 27 Nov 2015 16:01:42 +0000 Subject: [gpfsug-discuss] User group etiquette Message-ID: Hi All, I'd just like to remind all users of the user group that this group is intended to be a technically focussed group and is not intended as a sales lead opportunity. In the past we've had good relationships with many vendors who have engaged in technical discussion on the list and I'd like to see this continue. Just recently, however, we've had complaints that *several* vendors have used the group as a way of trying to generate sales leads. Please can I gently remind all members of the group that the user group is a technical forum. If we continue to receive complaints that posts to the mailing list are being used as sales leads then we'll start to ban offenders from participating in the group. I'm really sorry that we're having to do this, but I strongly believe that as a user community we should be focussed on the technical aspects of the products in use. Simon (Chair)

From bhill at physics.ucsd.edu Fri Nov 27 22:03:00 2015 From: bhill at physics.ucsd.edu (Bryan Hill) Date: Fri, 27 Nov 2015 14:03:00 -0800 Subject: [gpfsug-discuss] Switching from Standard to Advanced Message-ID: Hello group: Is there any special procedure or are there caveats involved in going from Standard Edition to Advanced Edition (besides purchasing the license, of course)? Can the Advanced Edition RPMs (I'm on RedHat EL 6.7) simply be installed in place over the Standard Edition? I would like to implement the new AFM-based DR feature in version 4.1.1, but this requires the Advanced Edition. Thanks, Bryan --- Bryan Hill Lead System Administrator UCSD Physics Computing Facility 9500 Gilman Dr. # 0319 La Jolla, CA 92093 +1-858-534-5538 bhill at ucsd.edu

From daniel.kidger at uk.ibm.com Sat Nov 28 12:56:40 2015 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sat, 28 Nov 2015 12:56:40 +0000 Subject: [gpfsug-discuss] Switching from Standard to Advanced In-Reply-To: References: Message-ID: <201511281257.tASCvaAW027707@d06av12.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL:

From makaplan at us.ibm.com Sat Nov 28 17:49:42 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Sat, 28 Nov 2015 12:49:42 -0500 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: <565793F2.5070407@buzzard.me.uk> References: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> <565793F2.5070407@buzzard.me.uk> Message-ID: <201511281749.tASHnmaU009090@d01av03.pok.ibm.com> In some ways, Jon Buzzard's answer is correct. However, outside of GPFS consider: 1) It is certainly possible to provide a user-id that has at most read access to any files and devices - a user who cannot write any files on any device, but perhaps can view them with some applications on display-only devices. 2) Regardless of (1), I always say, much as Jon, "If you can read it, you can copy it!" Consider even in a secured facility on a secure, armored terminal with no means of electrical interfacing, subject to strip search, a spy can commit important secrets to memory. Or short of strip search, one can always transcribe (copy!) to paper, canvas, parchment, film, or photograph or otherwise "screen scrape" and copy an image and/or audio to any storage device.
It has also been reported that spy agencies have devices that can screen scrape at a distance, by processing electro-magnetic signals (radio, microwave, ...) emanating from ordinary PCs, CRTs, and the like. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL:

From kraemerf at de.ibm.com Sun Nov 29 18:32:39 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Sun, 29 Nov 2015 19:32:39 +0100 Subject: [gpfsug-discuss] FYI - IBM Redbooks Message-ID: <201511291832.tATIWpIX023706@d06av11.portsmouth.uk.ibm.com> IBM Spectrum Scale (formerly GPFS) Revised: November 17, 2015 ISBN: 0738440736 550 pages Explore the book online at http://www.redbooks.ibm.com/redbooks/pdfs/sg248254.pdf Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL:

From kraemerf at de.ibm.com Sun Nov 29 18:34:38 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Sun, 29 Nov 2015 19:34:38 +0100 Subject: [gpfsug-discuss] FYI - IBM Redpaper Message-ID: <201511291845.tATIjVeo017922@d06av08.portsmouth.uk.ibm.com> Implementing IBM Spectrum Scale Revised: November 20, 2015 More details are available at http://www.redbooks.ibm.com/redpapers/pdfs/redp5254.pdf Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL:

From service at metamodul.com Sun Nov 29 21:22:49 2015 From: service at metamodul.com (service at metamodul.com) Date: Sun, 29 Nov 2015 22:22:49 +0100 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. Message-ID: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> I think you are talking about something like the Novell 'ci' (copy inhibit) attribute: https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html With the current GPFS it is IMHO not possible. It might become possible if lightweight callbacks get introduced; together with self-defined user attributes it might be doable. Hajo

Sent from Samsung Mobile
-------- Original message --------
From: Tushar Pathare
Date: 2015.11.26 16:47 (GMT+01:00)
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy.
Hello Team, Is it possible to share the data on GPFS while disabling data copy? Is it possible through ACLs? Tushar B Pathare High Performance Computing (HPC) Administrator Sidra Medical and Research Centre [...] -------------- next part -------------- An HTML attachment was scrubbed... URL:

From bdeluca at gmail.com Sun Nov 29 21:45:52 2015 From: bdeluca at gmail.com (Ben De Luca) Date: Sun, 29 Nov 2015 23:45:52 +0200 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> References: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> Message-ID: How can someone have thought of implementing this? If the data can be read into memory, it can be written back out from it... On 29 November 2015 at 23:22, service at metamodul.com wrote: > I think you are talking about something like the Novell 'ci' (copy inhibit) attribute: > https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html > With the current GPFS it is IMHO not possible. It might become possible if lightweight > callbacks get introduced; together with self-defined user attributes it might be doable. > Hajo > > Sent from Samsung Mobile > > -------- Original message -------- > From: Tushar Pathare > Date: 2015.11.26 16:47 (GMT+01:00) > To: gpfsug-discuss at spectrumscale.org > Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. > > [...]
> > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jonathan at buzzard.me.uk Sun Nov 29 21:54:35 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Sun, 29 Nov 2015 21:54:35 +0000 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: References: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> Message-ID: <565B741B.1010003@buzzard.me.uk> On 29/11/15 21:45, Ben De Luca wrote: > How can someone have thought of implementing this? If the data can be > read into memory, it can be written back out from it... > That's my point. Also, unless it is encrypted on the wire, I can just dump it with tcpdump. I guess the issue is how high you want to make the hurdles. You and I on this list might see DRM as a waste of time; the rest of the population won't find it anywhere near as simple. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom.
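[The on-the-wire point is a one-liner to demonstrate - a minimal example, with the interface name and the NFS port as assumptions:

# capture full packets for offline reassembly of whatever crossed the wire
tcpdump -i eth0 -s 0 -w capture.pcap port 2049

Anything served unencrypted (NFS, FTP, HTTP, ...) can be reconstructed from such a capture, independent of any file system ACLs.]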
From Robert.Oesterlin at nuance.com Sun Nov 29 23:08:06 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 29 Nov 2015 23:08:06 +0000 Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? Message-ID: I noticed that IBM only shipped the Zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment - most of my NSD servers (and the clients) are still on RH 6.6. gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm pmswift-4.2.0-0.noarch.rpm Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensors RH7-only means many users won't be able to take advantage of this function. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL:

From knop at us.ibm.com Mon Nov 30 03:27:42 2015 From: knop at us.ibm.com (Felipe Knop) Date: Sun, 29 Nov 2015 22:27:42 -0500 Subject: [gpfsug-discuss] Spectrum Scale 4.2 publications: indexing fixed Message-ID: <201511300327.tAU3ReiE005929@d01av01.pok.ibm.com> All, The indexing problem reported below has now been fixed. Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 ----- Forwarded by Felipe Knop/Poughkeepsie/IBM on 11/29/2015 10:21 PM ----- From: Felipe Knop/Poughkeepsie/IBM To: gpfsug main discussion list Date: 11/24/2015 11:36 AM Subject: Re: [gpfsug-discuss] 4.2.0 and callhome [...] -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Tomasz.Wolski at ts.fujitsu.com Mon Nov 30 10:45:36 2015 From: Tomasz.Wolski at ts.fujitsu.com (Tomasz.Wolski at ts.fujitsu.com) Date: Mon, 30 Nov 2015 10:45:36 +0000 Subject: [gpfsug-discuss] IO performance of replicated GPFS filesystem Message-ID: <8b3278e23a5b42a3be80629ee18f307b@R01UKEXCASM223.r01.fujitsu.local> Hi All, I could use some help from the experts here :) Please correct me if I'm wrong: I suspect that GPFS filesystem READ performance is better when the filesystem is replicated to, e.g., two failure groups, where these failure groups are placed on separate RAID controllers. In this case WRITE performance should be worse, since the same data must go to two locations. What about the situation where a GPFS filesystem has two metadataOnly NSDs which are also replicated? Does metadata READ performance increase in this way as well (and WRITE performance decrease)? Best regards, Tomasz Wolski -------------- next part -------------- An HTML attachment was scrubbed... URL:
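[For anyone wanting to experiment with this, the replication settings can be inspected and changed per file system and per file. A short sketch - the file system and file names are examples:

mmlsfs gpfs1 -m -M -r -R                  # default and maximum metadata/data replica counts
mmlsattr /gpfs/gpfs1/bigfile              # replication factors of an individual file
mmchattr -m 2 -r 2 /gpfs/gpfs1/bigfile    # set metadata and data replicas for that file

That makes it straightforward to benchmark reads and writes of the same file at replication 1 versus 2 on your own controllers.]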
From Robert.Oesterlin at nuance.com Mon Nov 30 11:11:44 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 30 Nov 2015 11:11:44 +0000 Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? In-Reply-To: References: Message-ID: Thanks Alexander! I'm assuming these can be requested directly from IBM until then via the PMR process. (No need to respond if this is the case.) Bob Oesterlin Sr Storage Engineer, Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL:

From A.Wolf-Reber at de.ibm.com Mon Nov 30 12:52:10 2015 From: A.Wolf-Reber at de.ibm.com (Alexander Wolf) Date: Mon, 30 Nov 2015 13:52:10 +0100 Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? In-Reply-To: References: Message-ID: This was a mistake. The RHEL6 sensor packages should have been included but were somehow not picked up in the final image. We will fix this with the next PTF. Mit freundlichen Grüßen / Kind regards IBM Spectrum Scale Dr. Alexander Wolf-Reber Spectrum Scale GUI development lead Department M069 / Spectrum Scale Software Development +49-6131-84-6521 a.wolf-reber at de.ibm.com IBM Deutschland Research & Development GmbH / Chair of the Supervisory Board: Martina Koederitz / Managing Director: Dirk Wittkopp / Registered office: Böblingen / Registration court: Amtsgericht Stuttgart, HRB 243294 ----- Original message ----- From: "Oesterlin, Robert" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? Date: Mon, Nov 30, 2015 12:08 AM [...] _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From bbanister at jumptrading.com Mon Nov 30 16:01:58 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 30 Nov 2015 16:01:58 +0000 Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05DAB217@CHI-EXCHANGEW1.w2k.jumptrading.com> Please let us know if there is an APAR number we can track for this, thanks! -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Alexander Wolf Sent: Monday, November 30, 2015 6:52 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? [...]
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.

From S.J.Thompson at bham.ac.uk Mon Nov 30 16:27:34 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 30 Nov 2015 16:27:34 +0000 Subject: [gpfsug-discuss] Placement policies and copies Message-ID: Hi, I have a file system which has the default number of data copies set to 2. I now have some data I'd like to store with only 1 copy made. I know that files and directories don't inherit 1 copy based on their parent. Can I do this with a placement rule to change the number of copies to 1? I don't really want to have to find the files afterwards and fix them up, as that requires an mmrestripefs to clear the second copy. Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will GPFS forever warn me that due to fs changes I have data at risk?
Thanks Simon

From makaplan at us.ibm.com Mon Nov 30 17:58:23 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 30 Nov 2015 12:58:23 -0500 Subject: [gpfsug-discuss] Placement policies and copies In-Reply-To: References: Message-ID: <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com> From the Advanced Admin book: File placement rules: RULE ['RuleName'] SET POOL 'PoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [WHERE SqlExpression] So, use REPLICATE(1). That's for new files as they are being created. You can use mmapplypolicy and the MIGRATE rule to change the replication factor of files that already exist. --marc of GPFS. From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 11/30/2015 11:27 AM Subject: [gpfsug-discuss] Placement policies and copies Sent by: gpfsug-discuss-bounces at spectrumscale.org [...] -------------- next part -------------- An HTML attachment was scrubbed... URL:
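[Filled in with concrete - invented - pool, fileset and path names, the two kinds of rule might look like:

/* placement: new files in fileset 'scratch' get a single data copy */
RULE 'oneCopy' SET POOL 'data' REPLICATE(1) FOR FILESET('scratch')

/* fix-up via mmapplypolicy: drop existing files in that tree to one copy */
RULE 'fixup' MIGRATE FROM POOL 'data' TO POOL 'data' REPLICATE(1)
     WHERE PATH_NAME LIKE '/gpfs/fs0/scratch/%'

The second rule, run with something like mmapplypolicy /gpfs/fs0 -P rules.pol, rewrites the replication of the existing files without restriping the rest of the file system.]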
1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory) 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory) 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 # It appears that the major min numbers have been changed [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239 [root at gennsd4 system]# ls -l /dev/aggr3 brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 [root at gennsd4 system]# ls /sys/dev/block/239:235 ls: cannot access /sys/dev/block/239:235: No such file or directory [root at gennsd4 system]# rpm -qa | grep gpfs gpfs.gpl-4.1.0-7.noarch gpfs.gskit-8.0.50-32.x86_64 gpfs.msg.en_US-4.1.0-7.noarch gpfs.docs-4.1.0-7.noarch gpfs.base-4.1.0-7.x86_64 gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 gpfs.ext-4.1.0-7.x86_64 [root at gennsd4 system]# rpm -qa | grep systemd systemd-sysv-219-19.el7.x86_64 systemd-libs-219-19.el7.x86_64 systemd-219-19.el7.x86_64 systemd-python-219-19.el7.x86_64 any help would be appreciated. Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From puneetc at us.ibm.com Mon Nov 30 18:53:04 2015 From: puneetc at us.ibm.com (Puneet Chaudhary) Date: Mon, 30 Nov 2015 13:53:04 -0500 Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 In-Reply-To: <565C988D.5060604@genome.wustl.edu> References: <565C988D.5060604@genome.wustl.edu> Message-ID: <201511301853.tAUIrARZ004937@d03av05.boulder.ibm.com> Matt, GPFS version 4.1.0-8 and prior had an issue with RHEL 7.1 systemd. Red Hat introduced new changes is systemd that led to this issue. Subsequently Red Hat issued an errata and reverted the changes to systemd ( https://rhn.redhat.com/errata/RHBA-2015-0738.html). Please update the level of systemd on your nodes which will address the issue. Regards, Puneet Chaudhary Scalable I/O Development General Parallel File System (GPFS) and Technical Computing (TC) Solutions Enablement Phone: 1-720-342-1546 | Mobile: 1-845-475-8806 IBM E-mail: puneetc at us.ibm.com 2455 South Rd Poughkeepsie, NY 12601-5400 United States From: Matt Weil To: gpfsug main discussion list Date: 11/30/2015 01:42 PM Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello all, Not sure if this is the a good place but we are experiencing a strange issue. It appears that systemd is un-mounting the file system immediately after it is mounted. #strace of systemd shows that the device is not there. Systemd sees that the path is failed and umounts the device. Our only work around currently is to link /usr/bin/umount to true. Then the device stays mounted. 
1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory) 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory) 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 # It appears that the major min numbers have been changed [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239 [root at gennsd4 system]# ls -l /dev/aggr3 brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 [root at gennsd4 system]# ls /sys/dev/block/239:235 ls: cannot access /sys/dev/block/239:235: No such file or directory [root at gennsd4 system]# rpm -qa | grep gpfs gpfs.gpl-4.1.0-7.noarch gpfs.gskit-8.0.50-32.x86_64 gpfs.msg.en_US-4.1.0-7.noarch gpfs.docs-4.1.0-7.noarch gpfs.base-4.1.0-7.x86_64 gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 gpfs.ext-4.1.0-7.x86_64 [root at gennsd4 system]# rpm -qa | grep systemd systemd-sysv-219-19.el7.x86_64 systemd-libs-219-19.el7.x86_64 systemd-219-19.el7.x86_64 systemd-python-219-19.el7.x86_64 any help would be appreciated. Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 09076871.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Mon Nov 30 18:55:42 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 30 Nov 2015 18:55:42 +0000 Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 In-Reply-To: <565C988D.5060604@genome.wustl.edu> References: <565C988D.5060604@genome.wustl.edu> Message-ID: I'm sure I read about this, possibly the release notes or faq. Cant find it right now, but I did find a post on devworks: https://www.ibm.com/developerworks/community/forums/html/threadTopic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7 So sounds like you need a higher gpfs version, or possibly a rhel patch. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Matt Weil [mweil at genome.wustl.edu] Sent: 30 November 2015 18:42 To: gpfsug main discussion list Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 Hello all, Not sure if this is the a good place but we are experiencing a strange issue. 
It appears that systemd is un-mounting the file system immediately after it is mounted.

#strace of systemd shows that the device is not there. Systemd sees that the path is failed and umounts the device. Our only work around currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major min numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From kywang at us.ibm.com Mon Nov 30 19:00:13 2015
From: kywang at us.ibm.com (Kuei-Yu Wang-Knop)
Date: Mon, 30 Nov 2015 14:00:13 -0500
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <565C988D.5060604@genome.wustl.edu>
References: <565C988D.5060604@genome.wustl.edu>
Message-ID: <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>

It appears to be a known problem that is fixed in GPFS 4.1.1.0, which has been tested with RHEL 7.1.

This is the detail on the issue:

Problem: systemd commit ff502445 is in the RHEL 7.1/SLES 12 systemd; the new systemd will try to check the status of the BindsTo device. If the BindsTo device is inactive, systemd will fail the mount job and unmount the file system. Unfortunately, a device created with mknod will always be marked as inactive by systemd, and GPFS invokes mknod to create its block devices under /dev, so it hits the unmount issue.

Fix: Udev/systemd reads device info from kernel sysfs, while a device created by mknod does not register in the kernel; that is why systemd fails to read the device info and the device status stays inactive. Under the new distros, a new tsctl setPseudoDisk command, implemented to take over the role of mknod, registers the pseudo device for each GPFS file system in kernel sysfs before mounting, to make systemd happy.
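The diagnosis above can be reproduced from the shell. A minimal sketch (the device name /dev/aggr3 is taken from the trace earlier in this thread; everything else is standard RHEL 7 tooling):

    DEV=/dev/aggr3
    MAJ=$((16#$(stat -c %t "$DEV")))   # major number of the special file (stat prints hex)
    MIN=$((16#$(stat -c %T "$DEV")))   # minor number
    if [ -e "/sys/dev/block/$MAJ:$MIN" ]; then
        echo "$MAJ:$MIN is registered in sysfs"
    else
        echo "$MAJ:$MIN is missing from sysfs - systemd will see the device unit as inactive"
    fi
    systemctl is-active dev-aggr3.device   # the unit name systemd derives from /dev/aggr3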
------------------------------------
Kuei-Yu Wang-Knop
IBM Scalable I/O development
(845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com

From: Matt Weil
To: gpfsug main discussion list
Date: 11/30/2015 01:42 PM
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hello all,

Not sure if this is a good place but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

#strace of systemd shows that the device is not there. Systemd sees that the path is failed and umounts the device. Our only work around currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major min numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From stijn.deweirdt at ugent.be Mon Nov 30 19:31:49 2015
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Mon, 30 Nov 2015 20:31:49 +0100
Subject: [gpfsug-discuss] HDFS protocol in 4.2
Message-ID: <565CA425.9070109@ugent.be>

hi all,

the gpfs 4.2.0 advanced administration guide has a section on HDFS protocol.
while reading it, i'm a bit puzzled if this has any advantage for a non-FPO site. we are still experimenting with the "regular" gpfs hadoop connector, so it would be nice to hear any advantages (besides protocol transparency) over the hadoop connector. in particular performance comes to mind ;)

the admin guide advises to enable local read, which seems understandable for FPO, but what does this mean for a non-FPO site? sending data over RPC is probably worse performance-wise compared to the gpfs hadoop binding.

also, are there any other advantages possible with proper name and data node services from the hdfs protocol? (like zero copy shuffle on gpfs, something that didn't seem to exist with the connector during some tests we ran, and which was a bit disappointing, being a shared filesystem and all that)

many thanks,

stijn

From S.J.Thompson at bham.ac.uk Mon Nov 30 20:19:39 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Mon, 30 Nov 2015 20:19:39 +0000
Subject: [gpfsug-discuss] Placement policies and copies
In-Reply-To: <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>
References: , <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>
Message-ID: 

Hi Marc,

Thanks. With the migrate option, does it remove the second copy if already present? Or do you still need to do an mmrestripefs to reclaim the space?

Related: if the storage pool has multiple failure groups, will GPFS place the data into a single pool, or will it spray the data over all NSD disks in all failure groups? I think I'll stick to using a pool with NSD disks in a single failure group, so I know where the files are, but would be useful to know.

I assume that if the pool then goes offline, I won't lose my whole FS, just not have access to the non replicated fileset?

Thanks

Simon
________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com]
Sent: 30 November 2015 17:58
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Placement policies and copies

From the Advanced Admin book:

File placement rules:
RULE ['RuleName'] SET POOL 'PoolName' [LIMIT (OccupancyPercentage)] [REPLICATE (DataReplication)] [FOR FILESET ('FilesetName'[,'FilesetName']...)] [WHERE SqlExpression]

So, use REPLICATE(1)

That's for new files as they are being created.

You can use mmapplypolicy and the MIGRATE rule to change the replication factor of files that already exist.

--marc of GPFS.
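To make the rule concrete, a minimal sketch of a one-copy placement policy; the file system, pool, and fileset names (gpfs0, data, scratch) are hypothetical:

    cat > /tmp/one-copy.pol <<'EOF'
    /* new files in fileset 'scratch' get a single data copy */
    RULE 'oneCopy' SET POOL 'data' REPLICATE(1) FOR FILESET ('scratch')
    /* a placement policy needs a default rule for everything else */
    RULE 'default' SET POOL 'data'
    EOF
    mmchpolicy gpfs0 /tmp/one-copy.pol

For files that already exist, a rule along the lines of RULE 'oneCopyNow' MIGRATE FROM POOL 'data' TO POOL 'data' REPLICATE(1) FOR FILESET ('scratch'), run through mmapplypolicy, changes the replication factor in place (whether the freed replica space comes back immediately or needs an mmrestripefs is exactly the question raised above).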
From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/30/2015 11:27 AM
Subject: [gpfsug-discuss] Placement policies and copies
Sent by: gpfsug-discuss-bounces at spectrumscale.org
________________________________

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data I'd like to have which only has 1 copy made. I know that files and directories don't inherit 1 copy based on their parent.

Can I do this with a placement rule to change the number of copies to 1? I don't really want to have to find the file afterwards and fix up as that requires an mmrestripefs to clear the second copy.

Or if I have a pool which only has nsd disks in a single failure group and use a placement policy for that, would that work? Or will gpfs forever warn me that due to fs changes I have data at risk?

Thanks

Simon
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From mweil at genome.wustl.edu Mon Nov 30 22:13:16 2015
From: mweil at genome.wustl.edu (Matt Weil)
Date: Mon, 30 Nov 2015 16:13:16 -0600
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>
References: <565C988D.5060604@genome.wustl.edu> <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>
Message-ID: <565CC9FC.8080506@genome.wustl.edu>

Thanks. That was the problem.

On 11/30/15 1:00 PM, Kuei-Yu Wang-Knop wrote:
>
> It appears to be a known problem that is fixed in GPFS 4.1.1.0,
> which has been tested with RHEL 7.1.
>
> This is the detail on the issue:
>
> Problem: systemd commit ff502445 is in the RHEL 7.1/SLES 12 systemd;
> the new systemd will try to check the status of the BindsTo device.
> If the BindsTo device is inactive, systemd will fail the mount job
> and unmount the file system. Unfortunately, a device created with
> mknod will always be marked as inactive by systemd, and GPFS invokes
> mknod to create its block devices under /dev, so it hits the unmount issue.
>
> Fix: Udev/systemd reads device info from kernel sysfs, while a device
> created by mknod does not register in the kernel; that is why systemd
> fails to read the device info and the device status stays inactive.
> Under the new distros, a new tsctl setPseudoDisk command, implemented
> to take over the role of mknod, registers the pseudo device for each
> GPFS file system in kernel sysfs before mounting, to make systemd
> happy.
>
>
> ------------------------------------
> Kuei-Yu Wang-Knop
> IBM Scalable I/O development
> (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com
>
>
> From: Matt Weil
> To: gpfsug main discussion list
> Date: 11/30/2015 01:42 PM
> Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file
> systems PMR 70339, 122, 000
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------------------------------------------------
>
>
>
> Hello all,
>
> Not sure if this is a good place but we are experiencing a strange
> issue.
>
> It appears that systemd is un-mounting the file system immediately after
> it is mounted.
>
> #strace of systemd shows that the device is not there. Systemd sees
> that the path is failed and umounts the device. Our only work around
> currently is to link /usr/bin/umount to true. Then the device stays
> mounted.
> > 1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, > 235), ...}) = 0 > 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 > ENOENT (No such file or directory) > 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No > such file or directory) > 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 > > # It appears that the major min numbers have been changed > [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 > lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> > ../../devices/virtual/block/dm-239 > [root at gennsd4 system]# ls -l /dev/aggr3 > brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 > [root at gennsd4 system]# ls /sys/dev/block/239:235 > ls: cannot access /sys/dev/block/239:235: No such file or directory > > [root at gennsd4 system]# rpm -qa | grep gpfs > gpfs.gpl-4.1.0-7.noarch > gpfs.gskit-8.0.50-32.x86_64 > gpfs.msg.en_US-4.1.0-7.noarch > gpfs.docs-4.1.0-7.noarch > gpfs.base-4.1.0-7.x86_64 > gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 > gpfs.ext-4.1.0-7.x86_64 > [root at gennsd4 system]# rpm -qa | grep systemd > systemd-sysv-219-19.el7.x86_64 > systemd-libs-219-19.el7.x86_64 > systemd-219-19.el7.x86_64 > systemd-python-219-19.el7.x86_64 > > any help would be appreciated. > > Thanks > > Matt > > ____ > This email message is a private communication. The information > transmitted, including attachments, is intended only for the person or > entity to which it is addressed and may contain confidential, > privileged, and/or proprietary material. Any review, duplication, > retransmission, distribution, or other use of, or taking of any action > in reliance upon, this information by persons or entities other than > the intended recipient is unauthorized by the sender and is > prohibited. If you have received this message in error, please contact > the sender immediately by return email and delete the original message > from all computer systems. Thank you. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: From martin.gasthuber at desy.de Mon Nov 2 13:53:49 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Mon, 2 Nov 2015 14:53:49 +0100 Subject: [gpfsug-discuss] GPFS (partly) inside dmz Message-ID: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> Hi, we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon runs with their ethernet ports inside a dmz. On the other hand, all NSD access is through IB (and should stay that way). The biggest concerns are around the possible intrude from that ftp host (running as GPFS client) through the IB infrastructure to other cluster nodes and possible causing big troubles on the scientific data. Did anybody here has similar constrains and possible solutions to mitigate that risk ? best regards, Martin From jonathan at buzzard.me.uk Mon Nov 2 14:20:06 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Mon, 02 Nov 2015 14:20:06 +0000 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> Message-ID: <1446474006.17909.120.camel@buzzard.phy.strath.ac.uk> On Mon, 2015-11-02 at 14:53 +0100, Martin Gasthuber wrote: > Hi, > > we are currently in discussion with our local network security people > about the plan to make certain data accessible to outside scientists > via ftp - this implies that the host running the ftp daemon runs with > their ethernet ports inside a dmz. On the other hand, all NSD access is > through IB (and should stay that way). The biggest concerns are around > the possible intrude from that ftp host (running as GPFS client) > through the IB infrastructure to other cluster nodes and possible > causing big troubles on the scientific data. Did anybody here has > similar constrains and possible solutions to mitigate that risk ? > Would it not make sense to export it via NFS over Ethernet from the GPFS cluster to the FTP node, firewall it up the wazoo and avoid the server licenses anyway? Note if you offer remote access to your "cluster" to local users already the additional attack surface from an FTP server is minimal to begin with. All said and done, one however suspects that 99.999% of hackers have precisely zero experience with Infiniband and thus would struggle to be able to exploit the IB fabric beyond using IPoIB. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From frederik.ferner at diamond.ac.uk Mon Nov 2 14:46:49 2015 From: frederik.ferner at diamond.ac.uk (Frederik Ferner) Date: Mon, 2 Nov 2015 14:46:49 +0000 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> Message-ID: <56377759.4060904@diamond.ac.uk> On 02/11/15 13:53, Martin Gasthuber wrote: > we are currently in discussion with our local network security people > about the plan to make certain data accessible to outside scientists > via ftp - this implies that the host running the ftp daemon runs with > their ethernet ports inside a dmz. On the other hand, all NSD access > is through IB (and should stay that way). 
The biggest concerns are > around the possible intrude from that ftp host (running as GPFS > client) through the IB infrastructure to other cluster nodes and > possible causing big troubles on the scientific data. Did anybody > here has similar constrains and possible solutions to mitigate that > risk ? Martin, we have a very similar situation here at Diamond with our GridFTP/Globus endpoint. We have a machine with full access to our high performance file systems in our internal network, which then exports those over NFS over a private point to point fibre to a machine in the DMZ. This is also firewalled with IPTables on the link on the internal machine to only allow NFS traffic. This has so far provided sufficient performance to our users. Kind regards, Frederik -- Frederik Ferner Senior Computer Systems Administrator (storage) phone: +44 1235 77 8624 Diamond Light Source Ltd. mob: +44 7917 08 5110 Duty Sys Admin can be reached on x8596 (Apologies in advance for the lines below. Some bits are a legal requirement and I have no control over them.) -- This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail. Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd. Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message. Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom From service at metamodul.com Mon Nov 2 15:00:07 2015 From: service at metamodul.com (MetaService) Date: Mon, 02 Nov 2015 16:00:07 +0100 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> Message-ID: <1446476407.7183.108.camel@pluto> I would think about to use a dedicated GPFS remote cluster. Advantage: - If required the remote cluster could be shutdown without to impact the home cluster. - You can add additional types of access onto the remote cluster - You could implement a HA solution to make the access types HA. but you must be aware that you need a GPFS server license. Cheers Hajo From ewahl at osc.edu Mon Nov 2 15:22:19 2015 From: ewahl at osc.edu (Wahl, Edward) Date: Mon, 2 Nov 2015 15:22:19 +0000 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu> First off let me recommend vsftpd. We've used that in a few single point to point cases to excellent results. 
Next, I'm going to agree with Johnathan here, any hacker that someone gains advantage on an FTP server will probably not have the knowledge to take advantage of the IB, however there are some steps you could take to mitigate this on a node such as you are thinking of: -Perhaps an NFS share from an NSD across IB instead of being a native GPFS client? This would remove any possibility of escalation exploits gaining access to other servers via SSH keys on the IB fabric but will reduce this nodes speed of access. On the other hand almost any IB faster than SDR probably is going to wait on the external network unless it's 40Gb or 100Gb attached. -firewalled access and/or narrow corridor for ftp access. This is pretty much a must. -fail2ban like product checking the ftp logs. Takes some work, but if the firewall isn't narrow enough this is worth it. Ed Wahl OSC ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [martin.gasthuber at desy.de] Sent: Monday, November 02, 2015 8:53 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS (partly) inside dmz Hi, we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon runs with their ethernet ports inside a dmz. On the other hand, all NSD access is through IB (and should stay that way). The biggest concerns are around the possible intrude from that ftp host (running as GPFS client) through the IB infrastructure to other cluster nodes and possible causing big troubles on the scientific data. Did anybody here has similar constrains and possible solutions to mitigate that risk ? best regards, Martin _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From martin.gasthuber at desy.de Mon Nov 2 20:49:02 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Mon, 2 Nov 2015 21:49:02 +0100 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu> References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: the path via NFS is already checked - problem here is not the bandwidth, although the WAN ports allows for 2 x 10GE, its the file rate we need to optimize. With NFS, in between GPFS and FTP, we saw ~2 times less file download rate. My concern are also not really about raw IB access and misuse - its because IPoIB, in order to minimize the risk, we had to reconfigure all other cluster nodes to refuse IP connects through the IB ports from that node - more work, less fun ! Probably we had to go the slower NFS way ;-) best regards, Martin > On 2 Nov, 2015, at 16:22, Wahl, Edward wrote: > > First off let me recommend vsftpd. We've used that in a few single point to point cases to excellent results. > > Next, I'm going to agree with Johnathan here, any hacker that someone gains advantage on an FTP server will probably not have the knowledge to take advantage of the IB, however there are some steps you could take to mitigate this on a node such as you are thinking of: > > -Perhaps an NFS share from an NSD across IB instead of being a native GPFS client? 
This would remove any possibility of escalation exploits gaining access to other servers via SSH keys on the IB fabric but will reduce this nodes speed of access. On the other hand almost any IB faster than SDR probably is going to wait on the external network unless it's 40Gb or 100Gb attached. > > -firewalled access and/or narrow corridor for ftp access. This is pretty much a must. > > -fail2ban like product checking the ftp logs. Takes some work, but if the firewall isn't narrow enough this is worth it. > > Ed Wahl > OSC > > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [martin.gasthuber at desy.de] > Sent: Monday, November 02, 2015 8:53 AM > To: gpfsug main discussion list > Subject: [gpfsug-discuss] GPFS (partly) inside dmz > > Hi, > > we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon runs with their ethernet ports inside a dmz. On the other hand, all NSD access is through IB (and should stay that way). The biggest concerns are around the possible intrude from that ftp host (running as GPFS client) through the IB infrastructure to other cluster nodes and possible causing big troubles on the scientific data. Did anybody here has similar constrains and possible solutions to mitigate that risk ? > > best regards, > Martin > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From peserocka at gmail.com Tue Nov 3 02:32:56 2015 From: peserocka at gmail.com (Pete Sero) Date: Tue, 3 Nov 2015 10:32:56 +0800 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Have you tested prefetching reads on the NFS server node? That should help for streaming reads as ultimatively initial by the ftp user. ? Peter On 2015 Nov 3 Tue, at 04:49, Martin Gasthuber wrote: > the path via NFS is already checked - problem here is not the bandwidth, although the WAN ports allows for 2 x 10GE, its the file rate we need to optimize. With NFS, in between GPFS and FTP, we saw ~2 times less file download rate. My concern are also not really about raw IB access and misuse - its because IPoIB, in order to minimize the risk, we had to reconfigure all other cluster nodes to refuse IP connects through the IB ports from that node - more work, less fun ! Probably we had to go the slower NFS way ;-) > > best regards, > Martin >> On 2 Nov, 2015, at 16:22, Wahl, Edward wrote: >> >> First off let me recommend vsftpd. We've used that in a few single point to point cases to excellent results. >> >> Next, I'm going to agree with Johnathan here, any hacker that someone gains advantage on an FTP server will probably not have the knowledge to take advantage of the IB, however there are some steps you could take to mitigate this on a node such as you are thinking of: >> >> -Perhaps an NFS share from an NSD across IB instead of being a native GPFS client? 
This would remove any possibility of escalation exploits gaining access to other servers via SSH keys on the IB fabric but will reduce this nodes speed of access. On the other hand almost any IB faster than SDR probably is going to wait on the external network unless it's 40Gb or 100Gb attached. >> >> -firewalled access and/or narrow corridor for ftp access. This is pretty much a must. >> >> -fail2ban like product checking the ftp logs. Takes some work, but if the firewall isn't narrow enough this is worth it. >> >> Ed Wahl >> OSC >> >> >> ________________________________________ >> From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [martin.gasthuber at desy.de] >> Sent: Monday, November 02, 2015 8:53 AM >> To: gpfsug main discussion list >> Subject: [gpfsug-discuss] GPFS (partly) inside dmz >> >> Hi, >> >> we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon runs with their ethernet ports inside a dmz. On the other hand, all NSD access is through IB (and should stay that way). The biggest concerns are around the possible intrude from that ftp host (running as GPFS client) through the IB infrastructure to other cluster nodes and possible causing big troubles on the scientific data. Did anybody here has similar constrains and possible solutions to mitigate that risk ? >> >> best regards, >> Martin >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From janfrode at tanso.net Tue Nov 3 09:16:09 2015 From: janfrode at tanso.net (Jan-Frode Myklebust) Date: Tue, 3 Nov 2015 10:16:09 +0100 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: I would be very weary about stretching a cluster between DMZ's. IMHO the nodes are too tighly connected for that. I just saw the Desy/GPFS talk at IBM technical university in Cannes, and it was mentioned that you had moved from 60 MB/s to 600 MB/s from un-tuned to tuned NFS over 10GbE. Sounded quite impressive. Are you saying putting FTP on top of those 600 MB/s kills the performance / download rate? Maybe AFM, with readonly Cache, would allow you to get better performance by caching the content on the FTP-servers ? Then all you should need of openings between the DMZ's would be the NFS-port for a readonly export.. -jf On Mon, Nov 2, 2015 at 9:49 PM, Martin Gasthuber wrote: > the path via NFS is already checked - problem here is not the bandwidth, > although the WAN ports allows for 2 x 10GE, its the file rate we need to > optimize. With NFS, in between GPFS and FTP, we saw ~2 times less file > download rate. 
My concern are also not really about raw IB access and > misuse - its because IPoIB, in order to minimize the risk, we had to > reconfigure all other cluster nodes to refuse IP connects through the IB > ports from that node - more work, less fun ! Probably we had to go the > slower NFS way ;-) > > best regards, > Martin > > On 2 Nov, 2015, at 16:22, Wahl, Edward wrote: > > > > First off let me recommend vsftpd. We've used that in a few single > point to point cases to excellent results. > > > > Next, I'm going to agree with Johnathan here, any hacker that someone > gains advantage on an FTP server will probably not have the knowledge to > take advantage of the IB, however there are some steps you could take to > mitigate this on a node such as you are thinking of: > > > > -Perhaps an NFS share from an NSD across IB instead of being a native > GPFS client? This would remove any possibility of escalation exploits > gaining access to other servers via SSH keys on the IB fabric but will > reduce this nodes speed of access. On the other hand almost any IB faster > than SDR probably is going to wait on the external network unless it's 40Gb > or 100Gb attached. > > > > -firewalled access and/or narrow corridor for ftp access. This is pretty > much a must. > > > > -fail2ban like product checking the ftp logs. Takes some work, but if > the firewall isn't narrow enough this is worth it. > > > > Ed Wahl > > OSC > > > > > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [ > martin.gasthuber at desy.de] > > Sent: Monday, November 02, 2015 8:53 AM > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] GPFS (partly) inside dmz > > > > Hi, > > > > we are currently in discussion with our local network security people > about the plan to make certain data accessible to outside scientists via > ftp - this implies that the host running the ftp daemon runs with their > ethernet ports inside a dmz. On the other hand, all NSD access is through > IB (and should stay that way). The biggest concerns are around the possible > intrude from that ftp host (running as GPFS client) through the IB > infrastructure to other cluster nodes and possible causing big troubles on > the scientific data. Did anybody here has similar constrains and possible > solutions to mitigate that risk ? > > > > best regards, > > Martin > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Wed Nov 4 18:18:21 2015 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Wed, 4 Nov 2015 18:18:21 +0000 Subject: [gpfsug-discuss] AFM performance under load Message-ID: <563A4BED.1040801@ed.ac.uk> Hi folks, We're trying to get our AFM stack to remain responsive when under a heavy write load from the cache -> home. 
It looks like read operations won't get scheduled when there's a large write queue, and operations like "ls" in a directory which isn't currently valid in the cache can take several minutes to return. Does anyone have any ideas on how to stop AFM lookups running slowly when the AFM queues are big? ----------- Orlando -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From S.J.Thompson at bham.ac.uk Thu Nov 5 16:51:00 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 5 Nov 2015 16:51:00 +0000 Subject: [gpfsug-discuss] Running the gui Message-ID: Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? Thanks Simon From Robert.Oesterlin at nuance.com Thu Nov 5 16:55:42 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 5 Nov 2015 16:55:42 +0000 Subject: [gpfsug-discuss] Running the gui Message-ID: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com> Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Thu Nov 5 17:10:46 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 5 Nov 2015 17:10:46 +0000 Subject: [gpfsug-discuss] Running the gui In-Reply-To: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com> References: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com> Message-ID: Yeah. Works and requires. What I'm trying to figure out. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com] Sent: 05 November 2015 16:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? From duersch at us.ibm.com Mon Nov 9 16:27:54 2015 From: duersch at us.ibm.com (Steve Duersch) Date: Mon, 9 Nov 2015 11:27:54 -0500 Subject: [gpfsug-discuss] Running the GUI In-Reply-To: References: Message-ID: I have confirmed that the GUI will run on a client license and is fully supported there. It can be any node. 
Steve Duersch Spectrum Scale (GPFS) FVTest IBM Poughkeepsie, New York Date: Thu, 5 Nov 2015 16:51:00 +0000 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Subject: [gpfsug-discuss] Running the gui Message-ID: Content-Type: text/plain; charset="us-ascii" Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? Thanks Simon From: gpfsug-discuss-request at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Date: 11/06/2015 07:00 AM Subject: gpfsug-discuss Digest, Vol 46, Issue 4 Sent by: gpfsug-discuss-bounces at spectrumscale.org Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Running the gui (Simon Thompson (Research Computing - IT Services)) 2. Re: Running the gui (Oesterlin, Robert) 3. Re: Running the gui (Simon Thompson (Research Computing - IT Services)) ---------------------------------------------------------------------- Message: 1 Date: Thu, 5 Nov 2015 16:51:00 +0000 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Subject: [gpfsug-discuss] Running the gui Message-ID: Content-Type: text/plain; charset="us-ascii" Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? Thanks Simon ------------------------------ Message: 2 Date: Thu, 5 Nov 2015 16:55:42 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Message-ID: <2DD690DB-6510-4C5F-848A-91FC15DA6C84 at nuance.com> Content-Type: text/plain; charset="utf-8" Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? -------------- next part -------------- An HTML attachment was scrubbed... URL: < http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20151105/e39af88a/attachment-0001.html > ------------------------------ Message: 3 Date: Thu, 5 Nov 2015 17:10:46 +0000 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Message-ID: Content-Type: text/plain; charset="Windows-1252" Yeah. Works and requires. What I'm trying to figure out. 
Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com]
Sent: 05 November 2015 16:55
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Running the gui

Well, in my beta testing, it runs just fine with a client licensed node. Can't imagine it requiring a server license.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

From: > on behalf of "Simon Thompson (Research Computing - IT Services)" >
Reply-To: gpfsug main discussion list >
Date: Thursday, November 5, 2015 at 11:51 AM
To: gpfsug main discussion list >
Subject: [gpfsug-discuss] Running the gui

Quick question, the gui and performance monitor has to run on a node in the cluster.

Does anyone know if that can be any node? Or does it have to have a server license?

------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

End of gpfsug-discuss Digest, Vol 46, Issue 4
*********************************************

From st.graf at fz-juelich.de Tue Nov 10 07:53:19 2015
From: st.graf at fz-juelich.de (Stephan Graf)
Date: Tue, 10 Nov 2015 08:53:19 +0100
Subject: [gpfsug-discuss] ILM and Backup Question
In-Reply-To: <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com>
References: <81E9FF09-D666-4BD1-A727-39AF4ED1F54B@iu.edu> <562DE7B5.7080303@fz-juelich.de> <201510262114.t9QLENpG024083@d01av01.pok.ibm.com> <562F21B7.8040007@fz-juelich.de> <201510271526.t9RFQ2Bw027971@d03av02.boulder.ibm.com> <563081E9.2090605@fz-juelich.de> <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com>
Message-ID: <5641A26F.4070405@fz-juelich.de>

Hi Wayne.

Just to come back to the mmbackup performance, here is the way we call it and the performance results:

MTHREADS=1
QOPT=""
# we check the last run and set this to '-q' if required
/usr/lpp/mmfs/bin/mmbackup /$FS -S $SNAPFILE -g /work/root/mmbackup -a 4 $QOPT -m $MTHREADS -B 1000 -N sms04c1 --noquote --tsm-servers home -v

--------------------------------------------------------
mmbackup: Backup of /homeb begins at Mon Nov 9 00:03:30 MEZ 2015.
--------------------------------------------------------
...
Mon Nov 9 00:03:35 2015 mmbackup:Scanning file system homeb
Mon Nov 9 03:07:17 2015 mmbackup:File system scan of homeb is complete.
Mon Nov 9 03:07:17 2015 mmbackup:Calculating backup and expire lists for server home
Mon Nov 9 03:07:17 2015 mmbackup:Determining file system changes for homeb [home].
Mon Nov 9 03:44:33 2015 mmbackup:changed=126305, expired=10086, unsupported=0 for server [home]
Mon Nov 9 03:44:33 2015 mmbackup:Finished calculating lists [126305 changed, 10086 expired] for server home.
Mon Nov 9 03:44:33 2015 mmbackup:Sending files to the TSM server [126305 changed, 10086 expired].
Mon Nov 9 03:44:33 2015 mmbackup:Performing expire operations
Mon Nov 9 03:45:32 2015 mmbackup:Completed policy expiry run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 03:45:32 2015 mmbackup:Policy for expiry returned 0 Highest TSM error 0
Mon Nov 9 03:45:32 2015 mmbackup:Performing backup operations
Mon Nov 9 04:54:29 2015 mmbackup:Completed policy backup run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 04:54:29 2015 mmbackup:Policy for backup returned 0 Highest TSM error 0

Total number of objects inspected: 137562
Total number of objects backed up: 127476
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 10086
Total number of objects failed: 0
Total number of bytes transferred: 427 GB
Total number of objects encrypted: 0
Total number of bytes inspected: 459986708656
Total number of bytes transferred: 459989351070

Mon Nov 9 04:54:29 2015 mmbackup:analyzing: results from home.
Mon Nov 9 04:54:29 2015 mmbackup:Analyzing audit log file /homeb/mmbackup.audit.homeb.home
Mon Nov 9 05:02:46 2015 mmbackup:updating /homeb/.mmbackupShadow.1.home with /homeb/.mmbackupCfg/tmpfile2.mmbackup.homeb
Mon Nov 9 05:02:46 2015 mmbackup:Copying updated shadow file to the TSM server
Mon Nov 9 05:03:51 2015 mmbackup:Done working with files for TSM Server: home.
Mon Nov 9 05:03:51 2015 mmbackup:Completed backup and expire jobs.
Mon Nov 9 05:03:51 2015 mmbackup:TSM server home had 0 failures or excluded paths and returned 0. Its shadow database has been updated. Shadow DB state:updated
Mon Nov 9 05:03:51 2015 mmbackup:Completed successfully. exit 0
----------------------------------------------------------
mmbackup: Backup of /homeb completed successfully at Mon Nov 9 05:03:51 MEZ 2015.
----------------------------------------------------------

Stephan

On 10/28/15 14:36, Wayne Sawdon wrote:
>
> You have to use both options even if -N is only the local node. Sorry,
>
> -Wayne
>
>
> From: Stephan Graf
> To:
> Date: 10/28/2015 01:06 AM
> Subject: Re: [gpfsug-discuss] ILM and Backup Question
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------------------------------------------------
>
>
>
> Hi Wayne!
>
> We are using -g, and we only want to run it on one node, so we don't
> use the -N option.
>
> Stephan
>
> On 10/27/15 16:25, Wayne Sawdon wrote:
> >
> > From: Stephan Graf __
> >
> > We are running the mmbackup on an AIX system
> > oslevel -s
> > 6100-07-10-1415
> > Current GPFS build: "4.1.0.8 ".
> >
> > So we only use one node for the policy run.
> >
> > Even on one node you should see a speedup using -g and -N.
> >
> > -Wayne
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> _http://gpfsug.org/mailman/listinfo/gpfsug-discuss_
>
>
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof.
Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From makaplan at us.ibm.com Tue Nov 10 16:20:18 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Tue, 10 Nov 2015 11:20:18 -0500
Subject: [gpfsug-discuss] ILM and Backup Question
In-Reply-To: <5641A26F.4070405@fz-juelich.de>
References: <81E9FF09-D666-4BD1-A727-39AF4ED1F54B@iu.edu> <562DE7B5.7080303@fz-juelich.de> <201510262114.t9QLENpG024083@d01av01.pok.ibm.com> <562F21B7.8040007@fz-juelich.de> <201510271526.t9RFQ2Bw027971@d03av02.boulder.ibm.com> <563081E9.2090605@fz-juelich.de> <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com> <5641A26F.4070405@fz-juelich.de>
Message-ID: <201511101620.tAAGKRg0010175@d03av03.boulder.ibm.com>

OOPS... mmbackup uses mmapplypolicy. Unfortunately the script "mmapplypolicy" is a little "too smart". When you use the "-N mynode" parameter it sees that you are referring to just the node upon which you are executing and does not pass the -N argument to the underlying tsapolicy command. (Not my idea, just telling you what's there.)

So right now, to force the parallelized inode scan on a single node, please just use the tsapolicy command with -N and -g. tsapolicy doesn't do such smart argument checking; it is also missing the nodefile, nodeclass, and defaultHelperNodes stuff ... those are some of the "value add" of the mmapplypolicy script.

If you're running the parallel version with message level -L 1, you will see this message:

[I] 2015-11-10 at 15:57:47.871 Parallel-piped sort and policy evaluation. 5 files scanned.

Otherwise you will see this message:

[I] 2015-11-10 at 15:49:44.816 Policy evaluation. 5 files scanned.

But ... if you're running mmapplypolicy under mmbackup... a little more hacking is required.
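For illustration, the two invocations side by side. The file system, node, policy file, and work directory names (gpfs0, node1, rules.pol, /gpfs0/tmp) are hypothetical, and it is an assumption that tsapolicy takes the same device/-P/-N/-g argument style as the wrapper script:

    # via the administration script; a -N naming only the local node may be dropped
    mmapplypolicy gpfs0 -P rules.pol -N node1 -g /gpfs0/tmp -L 1
    # calling the underlying command directly, as described above
    /usr/lpp/mmfs/bin/tsapolicy gpfs0 -P rules.pol -N node1 -g /gpfs0/tmp -L 1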
From Robert.Oesterlin at nuance.com Wed Nov 11 13:01:30 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 11 Nov 2015 13:01:30 +0000
Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
Message-ID: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>

The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!

The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.

Here is the agenda:

1:00 - 1:10 - GPFS-UG US chapter Overview - Bob Oesterlin / Kristy Kallback-Rose
1:10 - 1:20 Kick-off - Doris Conti / Akhtar Ali
1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden
2:10 - 2:30 GUI Demo - Ben Randall
2:30 - 3:00 Product quality improvement updates - Hye-Young

3:00 - 3:15 Break

3:10 to 3:35 The Hartree Centre, Past, present and future - Colin Morey of UK HPC
3:35 to 4:00 Low Latency performance with Flash - Mark Weghorst of Travelport
4:00 to 4:25 "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
4:25 to 4:50 "Large Data Ingest Architecture" - Martin Gasthuber of DESY
4:50 - 5:45 Panel Discussion: "My favorite tool for managing Spectrum Scale is..."
Panelists:
Bob Oesterlin, Nuance (Arxscan)
Wolfgang Bring, Julich (homegrown)
Mark Weghorst, Travelport (open source based on Graphana & FluxDB)

5:45 - Welcome Reception by DSS (sponsoring reception)

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

From service at metamodul.com Wed Nov 11 16:57:49 2015
From: service at metamodul.com (service at metamodul.com)
Date: Wed, 11 Nov 2015 17:57:49 +0100 (CET)
Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400 )
Message-ID: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>

@IBM

GPFS and HA

GPFS now has the so-called protocol nodes, which provide an HA environment for NFS and Samba. I assume it's based on CTDB, since CTDB already supports a few protocols.*

What I would like to see is a generic HA interface using GPFS. It could be based on CTDB, native GPFS callbacks, or any service providing HA functionality based on a clustered FS. Such a service would allow - with only minor extensions - making almost any service (Oracle, DB2, FTP, SSH, NFS, CRON, TSM and so on) HA. So IMHO the current approach is a little bit shortsighted.

GPFS and the System i

I'm looking forward to the day we have a SQL interface/API to GPFS, storing DB objects natively on GPFS and thus not using any kind of additional DB files. Now if you had such an interface, what about a general modern language which supports SQL and is multi-node runnable? Who knows ... maybe the AS/400 gets reinvented.

cheers
Hajo

Reference:
* https://ctdb.samba.org/documentation.html

From sfadden at us.ibm.com Wed Nov 11 19:12:05 2015
From: sfadden at us.ibm.com (Scott Fadden)
Date: Wed, 11 Nov 2015 11:12:05 -0800
Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400 )
In-Reply-To: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>
References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>
Message-ID: <201511111921.tABJLbrG011143@d01av04.pok.ibm.com>

It is probably not what you are looking for, but I did implement a two node HA solution using callbacks for SNMP. You could do something like that in the near term.

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Implementing%20a%20GPFS%20HA%20SNMP%20configuration%20using%20Callbacks

Scott Fadden
Spectrum Scale - Technical Marketing
Phone: (503) 880-5833
sfadden at us.ibm.com
http://www.ibm.com/systems/storage/spectrum/scale
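A single callback registration is enough to sketch the idea; the identifier, script path, and parameter choices below are illustrative assumptions, not the configuration from that wiki page:

    mmaddcallback snmpFailover --command /usr/local/sbin/snmp-takeover.sh \
        --event nodeLeave --parms "%eventNode %myNode"

The named script would then restart or relocate the protected service on a surviving node.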
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Implementing%20a%20GPFS%20HA%20SNMP%20configuration%20using%20Callbacks Scott Fadden Spectrum Scale - Technical Marketing Phone: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/storage/spectrum/scale From: "service at metamodul.com" To: gpfsug main discussion list Date: 11/11/2015 08:58 AM Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400 ) Sent by: gpfsug-discuss-bounces at spectrumscale.org @IBM GPFS and HA GPFS has now the so called protocol nodes which do provide a HA environment for NFS and SAMBA. I assume its based on the CTDB since the CTDB is currently supporting a few protocols already.* What i would like to see is a generic HA interface using GPFS. It could be based on the CTDB , native GPFS callbacks or any service providing HA functionality based on a clustered FS. Such a service would allow - only with minor extentions - to make almost any service (Oracle,DB2,FTP,SSH,NFS,CRON,TSM a.s.o ) HA. So IMHO the current approach is a little bit shortsighted. GPFS and System i I looking forward the day we have a SQL interface/API to GPFS. Thus storing DB objects natively on a GPFS thus not using any kind of addional DB files. Now if you would have such an interface what about a general modern language which supportr SQL and is multi node runable ? Who knows ... Maybe the AS/400 gets reinvented cheers Hajo Reference: * https://ctdb.samba.org/documentation.html _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From sfadden at us.ibm.com Wed Nov 11 19:12:05 2015 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 11 Nov 2015 11:12:05 -0800 Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400 ) In-Reply-To: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de> References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de> Message-ID: <201511111920.tABJK1Fq016276@d01av05.pok.ibm.com> It is probably not what you are looking for, but I did implement a two node HA solution using callbacks for SNMP. You could do something like that in the near term. https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Implementing%20a%20GPFS%20HA%20SNMP%20configuration%20using%20Callbacks Scott Fadden Spectrum Scale - Technical Marketing Phone: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/storage/spectrum/scale From: "service at metamodul.com" To: gpfsug main discussion list Date: 11/11/2015 08:58 AM Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400 ) Sent by: gpfsug-discuss-bounces at spectrumscale.org @IBM GPFS and HA GPFS has now the so called protocol nodes which do provide a HA environment for NFS and SAMBA. I assume its based on the CTDB since the CTDB is currently supporting a few protocols already.* What i would like to see is a generic HA interface using GPFS. It could be based on the CTDB , native GPFS callbacks or any service providing HA functionality based on a clustered FS. 
From RWelp at uk.ibm.com Thu Nov 12 20:11:27 2015 From: RWelp at uk.ibm.com (Richard Welp) Date: Thu, 12 Nov 2015 20:11:27 +0000 Subject: [gpfsug-discuss] Meet the Devs - Edinburgh Message-ID: Hello All, I recently posted a blog entry to the User Group website outlining the Meet the Devs meeting we had in Edinburgh. If you are interested - here is a link to the recap-> http://www.spectrumscale.org/meet-the-devs-edinburgh/ Thanks, Rick =================== Rick Welp Software Engineer Master Inventor Email: rwelp at uk.ibm.com phone: +44 0161 214 0461 IBM Systems - Manchester Lab IBM UK Limited -------------------------- Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Fri Nov 13 00:08:22 2015 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Thu, 12 Nov 2015 16:08:22 -0800 Subject: [gpfsug-discuss] NSD Server Design and Tuning Message-ID: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com> Hi The subject of GPFS NSD server tuning, and the underlying design that dictates tuning choices, has been coming up repeatedly in various forums, including this mailing list. Clearly, this topic hasn't been documented in sufficient detail. It is my sincere hope that the new document on the subject is going to provide some relief: https://ibm.biz/BdHq5v As always, feedback is welcome. yuri -------------- next part -------------- An HTML attachment was scrubbed... URL: From carlz at us.ibm.com Fri Nov 13 13:33:01 2015 From: carlz at us.ibm.com (Carl Zetie) Date: Fri, 13 Nov 2015 08:33:01 -0500 Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale Message-ID: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com> In response to requests from the community, we've added a new way to submit Public enhancement requests (RFEs) for Scale. In the past, RFEs were private, which was great for business-sensitive requests, but meant that other people couldn't effectively vote on them; and requests would often be duplicated because people couldn't see the detail of existing requests. So now we have TWO ways to submit a request. When you go to the RFE page on developerworks (https://www.ibm.com/developerworks/rfe/), you'll find two entries for Scale in the "products": one for Private RFEs (same as previously), and one for Public RFEs.
Simply choose the visibility you want. Internally, they all go into the same evaluation process. A couple of notes: - Even with a public request, certain fields are still private, including Company Name and Business Justification - All existing requests remain Private. If you have one that you want flipped, please contact me off-list with the request number regards, Carl Carl Zetie Product Manager for Spectrum Scale, IBM (540) 882 9353 ][ 15750 Brookhill Ct, Waterford VA 20197 carlz at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Fri Nov 13 20:33:55 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 13 Nov 2015 20:33:55 +0000 Subject: [gpfsug-discuss] NSD Server Design and Tuning In-Reply-To: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com> References: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com> Message-ID: Yuri - this is a fantastic document! Thanks for taking the time to put it together. I'll probably have a lot more questions after I really look at my NSD configuration. Encourage the Spectrum Scale team to do more of these. Bob Oesterlin Sr Storage Engineer, Nuance Communications _____________________________ From: Yuri L Volobuev > Sent: Thursday, November 12, 2015 6:08 PM Subject: [gpfsug-discuss] NSD Server Design and Tuning To: > Hi The subject of GPFS NSD server tuning, and the underlying design that dictates tuning choices, has been coming up repeatedly in various forums, including this mailing list. Clearly, this topic hasn't been documented in sufficient detail. It is my sincere hope that the new document on the subject is going to provide some relief: https://ibm.biz/BdHq5v As always, feedback is welcome. yuri -------------- next part -------------- An HTML attachment was scrubbed... URL: From bsallen at alcf.anl.gov Fri Nov 13 21:21:36 2015 From: bsallen at alcf.anl.gov (Allen, Benjamin S.) Date: Fri, 13 Nov 2015 21:21:36 +0000 Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda In-Reply-To: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com> References: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com> Message-ID: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov> Hi Bob, For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards? Thanks, Ben > On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: > > The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time! > > The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It?s pretty packed ? I?m sure there will be time after and during the week for extended discussions. > > Here is the agenda: > > 1:00 - 1:10 - GPFS-UG US chapter Overview ? Bob Oesterlin /Kristy Kallback-Rose > 1:10 - 1:20 Kick-off ? Doris Conti/Akhtar Ali > 1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden > 2:10 - 2:30 GUI Demo- Ben Randall > 2:30 - 3:00 Product quality improvement updates - Hye-Young > > 3:00 - 3:15 Break > > 3:10 to 3:35 The Hartree Centre, Past, present and future - Colin Morey of UK HPC > 3:35 to 4:00 Low Latency performance with Flash - Mark Weghorst of Travelport > 4:00 to 4:25 "Performance Tuning & results with Latest ESS configurations? - Matt Forney & Bernard of WSU/Ennovar > 4:25 to 4:50 "Large Data Ingest Architecture? - Martin Gasthuber of DESY > 4:50 ? 5:45 Panel Discussion: "My favorite tool for managing Spectrum Scale is...?
> Panelists: > Bob Oesterlin, Nuance (Arxscan) > Wolfgang Bring, Julich (homegrown) > Mark Weghorst, Travelport (open source based on Graphana & FluxDB) > > 5:45 ?Welcome Reception by DSS (sponsoring reception) > > > Bob Oesterlin > Sr Storage Engineer, Nuance Communications > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Fri Nov 13 21:34:58 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 13 Nov 2015 21:34:58 +0000 Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda In-Reply-To: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov> References: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>, <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov> Message-ID: Hi Ben, We always try to ask whether people are happy to have their slides posted online afterwards. Obviously if there are NDA slides in the deck then we can't share them. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Allen, Benjamin S.
[bsallen at alcf.anl.gov] Sent: 13 November 2015 21:21 To: gpfsug main discussion list Cc: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda Hi Bob, For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards? Thanks, Ben > On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: > > The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time! > > The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It?s pretty packed ? I?m sure there will be time after and during the week for extended discussions. > > Here is the agenda: > > 1:00 - 1:10 - GPFS-UG US chapter Overview ? Bob Oesterlin /Kristy Kallback-Rose > 1:10 - 1:20 Kick-off ? Doris Conti/Akhtar Ali > 1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden > 2:10 - 2:30 GUI Demo- Ben Randall > 2:30 - 3:00 Product quality improvement updates - Hye-Young > > 3:00 - 3:15 Break > > 3:10 to 3:35 The Hartree Centre, Past, present and future - Colin Morey of UK HPC > 3:35 to 4:00 Low Latency performance with Flash - Mark Weghorst of Travelport > 4:00 to 4:25 "Performance Tuning & results with Latest ESS configurations? - Matt Forney & Bernard of WSU/Ennovar > 4:25 to 4:50 "Large Data Ingest Architecture? - Martin Gasthuber of DESY > 4:50 ? 5:45 Panel Discussion: "My favorite tool for managing Spectrum Scale is...? > Panelists: > Bob Oesterlin, Nuance (Arxscan) > Wolfgang Bring, Julich (homegrown) > Mark Weghorst, Travelport (open source based on Graphana & FluxDB) > > 5:45 ?Welcome Reception by DSS (sponsoring reception) > > > Bob Oesterlin > Sr Storage Engineer, Nuance Communications > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From kallbac at iu.edu Fri Nov 13 21:44:22 2015 From: kallbac at iu.edu (Kristy Kallback-Rose) Date: Fri, 13 Nov 2015 16:44:22 -0500 Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda In-Reply-To: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov> Message-ID: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com> We will collect as many as we can and put up with a blog post. Kristy On Nov 13, 2015 4:21 PM, "Allen, Benjamin S." wrote: > > Hi Bob, > > For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards? > > Thanks, > > Ben > > > On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: > > > > The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time! > > > > The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It?s pretty packed ? I?m sure there will be time after and during the week for extended discussions. > > > > Here is the agenda: > > > > 1:00 - 1:10 - GPFS-UG US chapter Overview ? Bob Oesterlin /Kristy Kallback-Rose > > 1:10 - 1:20 Kick-off ? Doris Conti/Akhtar Ali > > 1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden > > 2:10 - 2:30 GUI Demo- Ben Randall > > 2:30 - 3:00 Product quality improvement updates - Hye-Young > > > > 3:00 - 3:15 Break > > > > 3:10 to 3:35 The? 
Hartree Centre, Past, present and future - Colin Morey of UK HPC > > 3:35 to 4:00 Low Latency performance with Flash - Mark Weghorst of Travelport > > 4:00 to 4:25 "Performance Tuning & results with Latest ESS configurations? - Matt Forney & Bernard of WSU/Ennovar? > > 4:25 to 4:50? "Large Data Ingest Architecture? - Martin Gasthuber of DESY > > 4:50 ? 5:45 Panel Discussion: "My favorite tool for managing Spectrum Scale is...? > > Panelists: > > Bob Oesterlin, Nuance (Arxscan) > > Wolfgang Bring, Julich? (homegrown) > > Mark Weghorst, Travelport (open source based on Graphana & FluxDB) > > > > 5:45 ?Welcome Reception by DSS (sponsoring reception) > > > > > > Bob Oesterlin > > Sr Storage Engineer, Nuance Communications > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From bsallen at alcf.anl.gov Fri Nov 13 22:22:29 2015 From: bsallen at alcf.anl.gov (Allen, Benjamin S.) Date: Fri, 13 Nov 2015 22:22:29 +0000 Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda In-Reply-To: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com> References: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com> Message-ID: <2602E279-E811-4AB4-8E77-746D96B28B34@alcf.anl.gov> Thanks Kristy and Simon. Ben > On Nov 13, 2015, at 3:44 PM, Kristy Kallback-Rose wrote: > > We will collect as many as we can and put up with a blog post. > > Kristy > > On Nov 13, 2015 4:21 PM, "Allen, Benjamin S." wrote: >> >> Hi Bob, >> >> For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards? >> >> Thanks, >> >> Ben >> >>> On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote: >>> >>> The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time! >>> >>> The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It?s pretty packed ? I?m sure there will be time after and during the week for extended discussions. >>> >>> Here is the agenda: >>> >>> 1:00 - 1:10 - GPFS-UG US chapter Overview ? Bob Oesterlin /Kristy Kallback-Rose >>> 1:10 - 1:20 Kick-off ? Doris Conti/Akhtar Ali >>> 1:20 - 2:10 Roadmap & technical deep dive - Scott Fadden >>> 2:10 - 2:30 GUI Demo- Ben Randall >>> 2:30 - 3:00 Product quality improvement updates - Hye-Young >>> >>> 3:00 - 3:15 Break >>> >>> 3:10 to 3:35 The Hartree Centre, Past, present and future - Colin Morey of UK HPC >>> 3:35 to 4:00 Low Latency performance with Flash - Mark Weghorst of Travelport >>> 4:00 to 4:25 "Performance Tuning & results with Latest ESS configurations? - Matt Forney & Bernard of WSU/Ennovar >>> 4:25 to 4:50 "Large Data Ingest Architecture? - Martin Gasthuber of DESY >>> 4:50 ? 5:45 Panel Discussion: "My favorite tool for managing Spectrum Scale is...? 
>>> Panelists: >>> Bob Oesterlin, Nuance (Arxscan) >>> Wolfgang Bring, Julich (homegrown) >>> Mark Weghorst, Travelport (open source based on Graphana & FluxDB) >>> >>> 5:45 ?Welcome Reception by DSS (sponsoring reception) >>> >>> >>> Bob Oesterlin >>> Sr Storage Engineer, Nuance Communications >>> _______________________________________________ >>> gpfsug-discuss mailing list >>> gpfsug-discuss at spectrumscale.org >>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Robert.Oesterlin at nuance.com Sun Nov 15 00:55:56 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 15 Nov 2015 00:55:56 +0000 Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale In-Reply-To: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com> References: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com> Message-ID: Great news Carl ? thanks for you help in getting this in place. Bob Oesterlin Sr Storage Engineer, Nuance Communications From: > on behalf of Carl Zetie > Reply-To: gpfsug main discussion list > Date: Friday, November 13, 2015 at 7:33 AM To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale In response to requests from the community, we've added a new way to submit Public enhancement requests (RFEs) for Scale. In the past, RFEs were private, which was great for business-sensitive requests, but meant that other people couldn't effectively vote on them; and requests would often be duplicated because people couldn't see the detail of existing requests. So now we have TWO ways to submit a request. When you go to the RFE page on developerworks (https://www.ibm.com/developerworks/rfe/), you'll find two entries for Scale in the "products": one for Private RFEs (same as previously), and one for Public RFEs. Simply choose the visibility you want. Internally, they all go into the same evaluation process. A couple of notes: - Even with a public request, certain fields are still private, including Company Name and Business Justification - All existing requests remain Private. If you have one that you want flipped, please contact me off-list with the request number regards, Carl Carl Zetie Product Manager for Spectrum Scale, IBM (540) 882 9353 ][ 15750 Brookhill Ct, Waterford VA 20197 carlz at us.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at spectrumscale.org Mon Nov 16 12:26:52 2015 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 16 Nov 2015 06:26:52 -0600 Subject: [gpfsug-discuss] SC15 UG Survery Message-ID: Hi, For those at yesterday's meeting at SC15, just a reminder that there is an online survey for feedback at: http://www.surveymonkey.com/r/SSUGSC15 Thanks to all the speakers yesterday and to Kristy, Bob and the IBM people (Doug, Pallavi) for making it happen. 
Simon From service at metamodul.com Mon Nov 16 18:13:05 2015 From: service at metamodul.com (service at metamodul.com) Date: Mon, 16 Nov 2015 19:13:05 +0100 (CET) Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400 ) In-Reply-To: <201511111920.tABJK3Ga016406@d01av05.pok.ibm.com> References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de> <201511111920.tABJK3Ga016406@d01av05.pok.ibm.com> Message-ID: <772407947.175151.1447697585599.JavaMail.open-xchange@oxbaltgw02.schlund.de> Hi Scott, > > It is probably not what you are looking for, but I did implement a two node > HA solution using callbacks for SNMP. ... I know about that, and even wrote my own generic HA API for GPFS based on the very old GPFS callbacks ( preumount .... ). I am trying to make IBM aware that they have a very nice product ( GPFS ) which just needs a little HA API on top to be able to provide generic HA application support out of the box. I must admit that I could rewrite my own HA API ( a script and a config file .... ) for GPFS, but I have no time or money for it. I must also admit that I am not the best shell script writer .... Cheers Hajo -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From chair at spectrumscale.org Mon Nov 16 23:47:51 2015 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Mon, 16 Nov 2015 17:47:51 -0600 Subject: [gpfsug-discuss] SC15 User Groups Slides Message-ID: Hi All, Slides from the SC15 user group meeting in Austin have been posted to the UG website at: http://www.spectrumscale.org/presentations/ Simon From cphoffma at lanl.gov Fri Nov 20 16:52:23 2015 From: cphoffma at lanl.gov (Hoffman, Christopher P) Date: Fri, 20 Nov 2015 16:52:23 +0000 Subject: [gpfsug-discuss] GPFS API Question Message-ID: Greetings, I hope this is the correct place to post this; if not, I apologize. I'm attempting to work with extended attributes on GPFS using the C API interface. I want to be able to read attributes and then, based on that value, change the attribute. What I've done so far is a policy scan that collects certain inodes based on an xattr value. From there I collect inode numbers. Just to clarify, I'm trying not to work with a path name of any sort, just the inode. There are these functions: int gpfs_igetattrsx(gpfs_ifile_t *ifile, int flags, void *buffer, int bufferSize, int *attrSize); and int gpfs_iputattrsx(gpfs_ifile_t *ifile, int flags, void *buffer, const char *pathName); I'm looking at how to use iputattrsx, but the void *buffer part confuses me as to what struct to use. I've been playing with igetattrsx to try to figure out what struct to use based on the data I am seeing. I've come across gpfsGetSetXAttr_t but haven't had any luck using it. My question is: is it even possible to manipulate custom xattrs via the GPFS API? If so, any ideas on what I am doing wrong? Thanks, Christopher
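[Before wrestling with the opaque gpfs_igetattrsx()/gpfs_iputattrsx() buffer, it can be worth sanity-checking the attributes with the standard Linux xattr tools, which generally work on a GPFS path as long as the attribute name is not in a GPFS-reserved namespace. A minimal sketch - the attribute name, value and path are illustrative only:

  getfattr -n user.mystate /gpfs/fs0/somefile               # read one attribute
  getfattr -d /gpfs/fs0/somefile                            # dump all user.* attributes
  setfattr -n user.mystate -v processed /gpfs/fs0/somefile  # change the value

The reply below covers the GPFS-specific routes (the policy setXattr() function and GPFS_FCNTL_SET_XATTR) for the cases the standard tools cannot reach.]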
From makaplan at us.ibm.com Fri Nov 20 17:39:04 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Fri, 20 Nov 2015 12:39:04 -0500 Subject: [gpfsug-discuss] GPFS API Question - extended attributes In-Reply-To: References: Message-ID: <201511201739.tAKHdBBG006478@d01av03.pok.ibm.com> If you're using policy rules and the xattr() SQL function, then you should consider using the setXattr() SQL function, to set or change the value of any particular extended attributes. Notice that the doc says: gpfs_igetattrs() subroutine: Retrieves extended file attributes in opaque format. What it does is pick up all the extended attributes of a given file and return them in a "blob". The structure of the blob is undocumented, so you should not use it to set individual extended attributes. The intended use is for backup and restore of a file's extended attributes, and you get an ACL also as a bonus. The doc says: "This subroutine is intended for use by a backup program to save all extended file attributes (ACLs, attributes, and so forth)." If you are determined to use a C API to manipulate extended attributes, I personally recommend that you first see and try if the standard OS methods will work for you. That means your code will work for any file system that can be mounted on your OS that supports extended attributes. BUT, unfortunately I have found that some extended attribute names with special prefix values cannot be accessed with the standard Linux or AIX or Posix commands or APIs. In that case you need to use the gpfs API, GPFS_FCNTL_SET_XATTR (see gpfs_fcntl.h) Which is indeed what setXattr() is using and what the mmchattr command ultimately uses. Notice that setXattr() requires you pass the new value as an SQL string. So what if you need to store a numeric value as a "binary" value? Well first figure out how to represent the value as a hexadecimal constant and then use this notation: setXattr('user.whatever', X'0123456789ABCDEF') In some common situations you can use the m4 processor to build or tear down binary and/or hexadecimal values and strings. For some examples of how to do that add this to a test policy rules file: debugfile(/tmp/m4xdeb) dumpdef And peek into the resulting m4xdeb file! -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Tue Nov 24 12:48:29 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 24 Nov 2015 12:48:29 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome Message-ID: Does anyone know what the call home rpm packages in the 4.2.0 release do? The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it. Searching for "call home" and "callhome" in the online docs doesn't seem to find anything. Anyone any insight on what this is all about? Thanks Simon From Robert.Oesterlin at nuance.com Tue Nov 24 13:30:11 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 24 Nov 2015 13:30:11 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome Message-ID: <4D197A26-6843-4903-AB89-08F121136F03@nuance.com> It's listed as an "optional" package for Linux nodes, according to the documentation - but I can't find it documented either.
Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Tuesday, November 24, 2015 at 6:48 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] 4.2.0 and callhome Does anyone know what the call home rpm packages in the 4.2.0 release do? -------------- next part -------------- An HTML attachment was scrubbed... URL: From PAULROBE at uk.ibm.com Tue Nov 24 13:45:54 2015 From: PAULROBE at uk.ibm.com (Paul Roberts) Date: Tue, 24 Nov 2015 13:45:54 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome In-Reply-To: References: Message-ID: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> Hi Simon, there is a section on call home in the Spectrum Scale 4.2 knowledge centre: http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section which is available as a pdf here: http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries. Best wishes, Paul ====================================================== Dr Paul Roberts, IBM Spectrum Scale - Development Engineer IBM Systems UK IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424 ====================================================== From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 24/11/2015 12:48 Subject: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org Does anyone know what the call home rpm packages in the 4.2.0 release do? The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it. Searching for "call home" and "callhome" in the online docs doesn't seems to find anything. Anyone any insight on what this is all about? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Tue Nov 24 13:51:53 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Tue, 24 Nov 2015 13:51:53 +0000 Subject: [gpfsug-discuss] 4.2.0 and callhome In-Reply-To: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> References: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> Message-ID: Thanks for the pointer Paul. It appears that search for anything in the docs, doesn't work ... 
Simon From: > on behalf of Paul Roberts > Reply-To: gpfsug main discussion list > Date: Tuesday, 24 November 2015 at 13:45 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Hi Simon, there is a section on call home in the Spectrum Scale 4.2 knowledge centre: http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section which is available as a pdf here: http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries. Best wishes, Paul ====================================================== Dr Paul Roberts, IBM Spectrum Scale - Development Engineer IBM Systems UK IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424 ====================================================== From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 24/11/2015 12:48 Subject: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Does anyone know what the call home rpm packages in the 4.2.0 release do? The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it. Searching for "call home" and "callhome" in the online docs doesn't seems to find anything. Anyone any insight on what this is all about? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Tue Nov 24 16:35:56 2015 From: knop at us.ibm.com (Felipe Knop) Date: Tue, 24 Nov 2015 11:35:56 -0500 Subject: [gpfsug-discuss] 4.2.0 and callhome In-Reply-To: References: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com> Message-ID: <201511241636.tAOGa62F002867@d01av03.pok.ibm.com> Simon, all, The Call Home facility is described in the Advanced Administration Guide http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Chapter 24. Understanding the call home function A problem has been identified with the indexing facility for the Spectrum Scale 4.2 publications . The team is working to rectify that. Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 11/24/2015 08:52 AM Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for the pointer Paul. It appears that search for anything in the docs, doesn't work ... 
Simon From: on behalf of Paul Roberts Reply-To: gpfsug main discussion list Date: Tuesday, 24 November 2015 at 13:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Hi Simon, there is a section on call home in the Spectrum Scale 4.2 knowledge centre: http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section which is available as a pdf here: http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries. Best wishes, Paul ====================================================== Dr Paul Roberts, IBM Spectrum Scale - Development Engineer IBM Systems UK IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424 ====================================================== From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk> To: gpfsug main discussion list Date: 24/11/2015 12:48 Subject: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org Does anyone know what the call home rpm packages in the 4.2.0 release do? The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it. Searching for "call home" and "callhome" in the online docs doesn't seems to find anything. Anyone any insight on what this is all about? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From s.m.killen at leeds.ac.uk Wed Nov 25 17:52:30 2015 From: s.m.killen at leeds.ac.uk (Sean Killen) Date: Wed, 25 Nov 2015 17:52:30 +0000 Subject: [gpfsug-discuss] Introduction Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512 Hello everyone, Just joined the list to be part of the community, so here is a bit about me. I'm Sean Killen and I work in the Faculty of Biological Sciences at the University of Leeds. I am responsible for Research Computing, UNIX/Linux, Storage and Virtualisation. I am new to GPFS / Spectrum Scale and am currently evaluating a setup with a view to acquiring it primarily to manage a multi-PetaByte storage system for Research Data coming from our new Electron Microscopes, but also with a view to rolling it out to manage and curate all the research data within the Faculty and beyond. Yours - -- Sean - ------------------------------------------------------------------- Dr Sean M Killen Research Computing Manager, IT Faculty of Biological Sciences University of Leeds LEEDS LS2 9JT United Kingdom Tel: +44 (0)113 3433148 Mob: +44 (0)776 8670907 Fax: +44 (0)113 3438465
GnuPG Key ID: ee0d36f0 - ------------------------------------------------------------------- -----BEGIN PGP SIGNATURE----- iGcEAREKACcgHFMgTSBLaWxsZW4gPHNlYW5Aa2lsbGVucy5jby51az4FAlZV9VUA CgkQEm087+4NNvA+xACg61vxW34Li7tMV8dwNPXy+muO834Anj6ZM2y0j6MWHbRr WFZqTG99oeD+ =GSNu -----END PGP SIGNATURE----- From tpathare at sidra.org Thu Nov 26 15:47:17 2015 From: tpathare at sidra.org (Tushar Pathare) Date: Thu, 26 Nov 2015 15:47:17 +0000 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. Message-ID: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> Hello Team, Is it possible to share data on GPFS while preventing it from being copied? Is this possible through ACLs? Tushar B Pathare High Performance Computing (HPC) Administrator General Parallel File System Scientific Computing Bioinformatics Division Research Sidra Medical and Research Centre PO Box 26999 | Doha, Qatar Burj Doha Tower,Floor 8 D +974 44042250 | M +974 74793547 tpathare at sidra.org | www.sidra.org Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 142717 bytes Desc: image001.png URL: From jonathan at buzzard.me.uk Thu Nov 26 23:21:22 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Thu, 26 Nov 2015 23:21:22 +0000 Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> References: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> Message-ID: <565793F2.5070407@buzzard.me.uk> On 26/11/15 15:47, Tushar Pathare wrote: > Hello Team, > > Is it possible to share data on GPFS while preventing it from being copied? > > Is this possible through ACLs? > I don't believe that what you are asking is technically possible in any mainstream operating system/file system combination. It certainly cannot be achieved with ACLs, whether POSIX, NFSv4 or NTFS. The only way to achieve this sort of thing is digital rights management, which is way beyond the scope of a file system in itself. DRM schemes are all application specific, and in addition they are invariably a busted flush anyway; torrents of movies etc. are all the proof one needs of this. The bottom line is that if the end user can view the data in any meaningful way, then they can make a copy of that data. From a file system perspective you can't defeat the following command line. $ cat readonly_file > my_evil_copy JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom.
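[The read-only half of the request is straightforward to grant; it is only the no-copy half that cannot be enforced, as the replies in this thread explain. A minimal sketch using standard POSIX ACL tools on a GPFS path - the user name and path are illustrative, and it assumes the file system's ACL setting (the -k option of mmcrfs/mmchfs) permits POSIX ACLs:

  setfacl -m u:visitor:r-- /gpfs/fs0/shared/dataset.dat   # read-only access for one user
  getfacl /gpfs/fs0/shared/dataset.dat                    # verify the effective permissions

This controls who may read the file, not what they do with the bytes once read.]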
From chair at spectrumscale.org Fri Nov 27 16:01:42 2015 From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson)) Date: Fri, 27 Nov 2015 16:01:42 +0000 Subject: [gpfsug-discuss] User group etiquette Message-ID: Hi All, I'd just like to remind all users of the user group that this group is intended to be a technically focussed group and is not intended as a sales lead opportunity. In the past we've had good relationships with many vendors who have engaged in technical discussion on the list and I'd like to see this continue; just recently we've had some complaints that *several* vendors have used the group as a way of trying to generate sales leads. Please can I gently remind all members of the group that the user group is a technical forum. If we continue to receive complaints that posts to the mailing list are being used as sales leads then we'll start to ban offenders from participating in the group. I'm really sorry that we're having to do this, but strongly believe that as a user community we should be focussed on the technical aspects of the products in use. Simon (Chair) From bhill at physics.ucsd.edu Fri Nov 27 22:03:00 2015 From: bhill at physics.ucsd.edu (Bryan Hill) Date: Fri, 27 Nov 2015 14:03:00 -0800 Subject: [gpfsug-discuss] Switching from Standard to Advanced Message-ID: Hello group: Are there any special procedures or caveats involved in going from Standard Edition to Advanced Edition (besides purchasing the license, of course)? Can the Advanced Edition RPMs (I'm on RedHat EL 6.7) simply be installed in place over the Standard Edition? I would like to implement the new AFM-based DR feature in version 4.1.1, but this requires the Advanced Edition. Thanks, Bryan --- Bryan Hill Lead System Administrator UCSD Physics Computing Facility 9500 Gilman Dr. # 0319 La Jolla, CA 92093 +1-858-534-5538 bhill at ucsd.edu From daniel.kidger at uk.ibm.com Sat Nov 28 12:56:40 2015 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Sat, 28 Nov 2015 12:56:40 +0000 Subject: [gpfsug-discuss] Switching from Standard to Advanced In-Reply-To: References: Message-ID: <201511281257.tASCvaAW027707@d06av12.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Sat Nov 28 17:49:42 2015 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Sat, 28 Nov 2015 12:49:42 -0500 Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: <565793F2.5070407@buzzard.me.uk> References: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> <565793F2.5070407@buzzard.me.uk> Message-ID: <201511281749.tASHnmaU009090@d01av03.pok.ibm.com> In some ways, Jon Buzzard's answer is correct. However, outside of GPFS consider: 1) It is certainly possible to provide a user-id that has at most read access to any files and devices. A user that cannot write any files on any device, but perhaps can view them with some applications on some display-only devices. 2) Regardless of (1), I always say, much as Jon, "If you can read it, you can copy it!" Consider even in a secured facility on a secure, armored terminal with no means of electrical interfacing, subject to strip search, a spy can commit important secrets to memory. Or short of strip search, one can always transcribe (copy!) to paper, canvas, parchment, film, or photograph or otherwise "screen scrape" and copy an image and/or audio to any storage device.
It has also been reported that spy agencies have devices that can screen scrape at a distance, by processing electromagnetic signals (radio, microwave, ...) emanating from ordinary PCs, CRTs, and the like. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From kraemerf at de.ibm.com Sun Nov 29 18:32:39 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Sun, 29 Nov 2015 19:32:39 +0100 Subject: [gpfsug-discuss] FYI - IBM Redbooks Message-ID: <201511291832.tATIWpIX023706@d06av11.portsmouth.uk.ibm.com> IBM Spectrum Scale (formerly GPFS) Revised: November 17, 2015 ISBN: 0738440736 550 pages Explore the book online at http://www.redbooks.ibm.com/redbooks/pdfs/sg248254.pdf Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Sun Nov 29 18:34:38 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Sun, 29 Nov 2015 19:34:38 +0100 Subject: [gpfsug-discuss] FYI - IBM Redpaper Message-ID: <201511291845.tATIjVeo017922@d06av08.portsmouth.uk.ibm.com> Implementing IBM Spectrum Scale Revised: November 20, 2015 More details are available at http://www.redbooks.ibm.com/redpapers/pdfs/redp5254.pdf Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From service at metamodul.com Sun Nov 29 21:22:49 2015 From: service at metamodul.com (service at metamodul.com) Date: Sun, 29 Nov 2015 22:22:49 +0100 Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. Message-ID: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> I think you are talking about something like the Novell copy-inhibit attribute: https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html. With the current GPFS it is IMHO not possible. It might become possible if lightweight callbacks get introduced; together with self-defined user attributes it might be doable. Hajo Sent from Samsung Mobile
-------- Original Message --------
From: Tushar Pathare
Date: 2015.11.26 16:47 (GMT+01:00)
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy.
Hello Team, Is it possible to share the data on GPFS and disabling data copy. It is possible through ACLs. Tushar B Pathare High Performance Computing (HPC) Administrator General Parallel File System Scientific Computing Bioinformatics Division Research Sidra Medical and Research Centre PO Box 26999 | Doha, Qatar Burj Doha Tower,Floor 8 D +974 44042250 | M +974 74793547 tpathare at sidra.org | www.sidra.org Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdeluca at gmail.com Sun Nov 29 21:45:52 2015 From: bdeluca at gmail.com (Ben De Luca) Date: Sun, 29 Nov 2015 23:45:52 +0200 Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> References: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> Message-ID: How can someone have thought of implementing this? If the data can be read into memory, it can be written back out from it... On 29 November 2015 at 23:22, service at metamodul.com wrote: > I think you talk about something like the novell ci copy inhibit attribut > https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html > . > With the current GPFS it is imho not possible. Might be able in case > leight weight callbacks gets introduced. Together with self defined user > attributs it might be able. > Hajo > > > Von Samsung Mobile gesendet > > > -------- Urspr?ngliche Nachricht -------- > Von: Tushar Pathare > Datum:2015.11.26 16:47 (GMT+01:00) > An: gpfsug-discuss at spectrumscale.org > Betreff: [gpfsug-discuss] How can we give read access to GPFS data with > restricting data copy. > > Hello Team, > > Is it possible to share the data on GPFS and disabling data copy. > > It is possible through ACLs. > > > > > > *Tushar B Pathare* > > High Performance Computing (HPC) Administrator > > General Parallel File System > > Scientific Computing > > Bioinformatics Division > > Research > > > > *Sidra Medical and Research Centre* > > PO Box 26999 | Doha, Qatar > > Burj Doha Tower,Floor 8 > > D +974 44042250 | M +974 74793547 > > tpathare at sidra.org | www.sidra.org
> _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Sun Nov 29 21:54:35 2015 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Sun, 29 Nov 2015 21:54:35 +0000 Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy. In-Reply-To: References: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com> Message-ID: <565B741B.1010003@buzzard.me.uk> On 29/11/15 21:45, Ben De Luca wrote: > How can some one have thought of implementing this, if the data can be > read to memory it can be written from it...... > That's my point. Also unless it is encrypted on the wire I can just dump it with tcpdump; I guess the issue is how high you want to make the hurdles. You and I on this list might see DRM as a waste of time, but the rest of the population won't find it anywhere near as simple. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From Robert.Oesterlin at nuance.com Sun Nov 29 23:08:06 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Sun, 29 Nov 2015 23:08:06 +0000 Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? Message-ID: I noticed that IBM only shipped the Zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment - most of my NSD servers are still on RH 6.6 (and the clients). gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm pmswift-4.2.0-0.noarch.rpm Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensor RH7-only means many users won't be able to take advantage of this function. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 -------------- next part -------------- An HTML attachment was scrubbed... URL: From knop at us.ibm.com Mon Nov 30 03:27:42 2015 From: knop at us.ibm.com (Felipe Knop) Date: Sun, 29 Nov 2015 22:27:42 -0500 Subject: [gpfsug-discuss] Spectrum Scale 4.2 publications: indexing fixed Message-ID: <201511300327.tAU3ReiE005929@d01av01.pok.ibm.com> All, The indexing problem reported below has now been fixed. Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 ----- Forwarded by Felipe Knop/Poughkeepsie/IBM on 11/29/2015 10:21 PM ----- From: Felipe Knop/Poughkeepsie/IBM To: gpfsug main discussion list Date: 11/24/2015 11:36 AM Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Simon, all, The Call Home facility is described in the Advanced Administration Guide http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Chapter 24. Understanding the call home function A problem has been identified with the indexing facility for the Spectrum Scale 4.2 publications. The team is working to rectify that. Felipe ---- Felipe Knop knop at us.ibm.com GPFS Development IBM Systems IBM Building 008 2455 South Rd, Poughkeepsie, NY 12601 (845) 433-9314 T/L 293-9314 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 11/24/2015 08:52 AM Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org Thanks for the pointer Paul.
It appears that search for anything in the docs, doesn't work ... Simon From: on behalf of Paul Roberts Reply-To: gpfsug main discussion list Date: Tuesday, 24 November 2015 at 13:45 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] 4.2.0 and callhome Hi Simon, there is a section on call home in the Spectrum Scale 4.2 knowledge centre: http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section which is available as a pdf here: http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries. Best wishes, Paul ====================================================== Dr Paul Roberts, IBM Spectrum Scale - Development Engineer IBM Systems UK IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424 ====================================================== From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 24/11/2015 12:48 Subject: [gpfsug-discuss] 4.2.0 and callhome Sent by: gpfsug-discuss-bounces at spectrumscale.org Does anyone know what the call home rpm packages in the 4.2.0 release do? The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it. Searching for "call home" and "callhome" in the online docs doesn't seems to find anything. Anyone any insight on what this is all about? Thanks Simon _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tomasz.Wolski at ts.fujitsu.com Mon Nov 30 10:45:36 2015 From: Tomasz.Wolski at ts.fujitsu.com (Tomasz.Wolski at ts.fujitsu.com) Date: Mon, 30 Nov 2015 10:45:36 +0000 Subject: [gpfsug-discuss] IO performance of replicated GPFS filesystem Message-ID: <8b3278e23a5b42a3be80629ee18f307b@R01UKEXCASM223.r01.fujitsu.local> Hi All, I could use some help from the experts here :) Please correct me if I'm wrong: I suspect that GPFS filesystem READ performance is better when the filesystem is replicated to, e.g., two failure groups, where these failure groups are placed on separate RAID controllers. In this case WRITE performance should be worse, since the same data must go to two locations. What about the situation where a GPFS filesystem has two metadataOnly NSDs which are also replicated? Does metadata READ performance increase in this way as well (and WRITE performance decrease)? Best regards, Tomasz Wolski -------------- next part -------------- An HTML attachment was scrubbed... URL:
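[For anyone checking their own file system against the question above, the replication settings involved can be inspected like this - the file system and path names are illustrative:

  mmlsfs fs0 -m -M -r -R           # default/max metadata (-m/-M) and data (-r/-R) replica counts
  mmlsattr -L /gpfs/fs0/somefile   # replication actually in effect for one file

The intuition in the question matches the usual expectation: a write must be sent to every failure group holding a replica, while a read can be served from a single replica, so replicated reads - data and metadata alike - need not be slower and may even benefit from the extra spindles.]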
From Robert.Oesterlin at nuance.com Mon Nov 30 11:11:44 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 30 Nov 2015 11:11:44 +0000 Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? In-Reply-To: References: , Message-ID: Thanks Alexander! I'm assuming these can be requested directly from IBM until then via the PMR process. (no need to respond if this is the case) Bob Oesterlin Sr Storage Engineer, Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From A.Wolf-Reber at de.ibm.com Mon Nov 30 12:52:10 2015 From: A.Wolf-Reber at de.ibm.com (Alexander Wolf) Date: Mon, 30 Nov 2015 13:52:10 +0100 Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? In-Reply-To: References: Message-ID: This was a mistake. The RHEL6 sensor packages should have been included but were somehow not picked up in the final image. We will fix this with the next PTF. Mit freundlichen Grüßen / Kind regards IBM Spectrum Scale Dr. Alexander Wolf-Reber Spectrum Scale GUI development lead Department M069 / Spectrum Scale Software Development +49-6131-84-6521 a.wolf-reber at de.ibm.com IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 ----- Original message ----- From: "Oesterlin, Robert" Sent by: gpfsug-discuss-bounces at spectrumscale.org To: gpfsug main discussion list Cc: Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? Date: Mon, Nov 30, 2015 12:08 AM I noticed that IBM only shipped the Zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment - most of my NSD servers are still on RH 6.6 (and the clients). gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm pmswift-4.2.0-0.noarch.rpm Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensor RH7-only means many users won't be able to take advantage of this function. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From bbanister at jumptrading.com Mon Nov 30 16:01:58 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Mon, 30 Nov 2015 16:01:58 +0000 Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? In-Reply-To: References: Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05DAB217@CHI-EXCHANGEW1.w2k.jumptrading.com> Please let us know if there is an APAR number we can track for this, thanks! -Bryan -----Original Message----- From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Alexander Wolf Sent: Monday, November 30, 2015 6:52 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6? This was a mistake. The RHEL6 sensor packages should have been included but were somehow not picked up in the final image. We will fix this with the next PTF. Mit freundlichen Grüßen / Kind regards IBM Spectrum Scale Dr.
Dr. Alexander Wolf-Reber
Spectrum Scale GUI development lead
Department M069 / Spectrum Scale Software Development
+49-6131-84-6521
a.wolf-reber at de.ibm.com

IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

----- Original message -----
From: "Oesterlin, Robert"
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug main discussion list
Cc:
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
Date: Mon, Nov 30, 2015 12:08 AM

I noticed that IBM only shipped the zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment - most of my NSD servers are still on RH 6.6 (and the clients).

gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm
gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm
pmswift-4.2.0-0.noarch.rpm

Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensors RH7-only means many users won't be able to take advantage of this function.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

________________________________

Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.

From S.J.Thompson at bham.ac.uk  Mon Nov 30 16:27:34 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Mon, 30 Nov 2015 16:27:34 +0000
Subject: [gpfsug-discuss] Placement policies and copies
Message-ID:

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data for which I'd like only 1 copy made. I know that files and directories don't inherit 1 copy based on their parent.

Can I do this with a placement rule to change the number of copies to 1? I don't really want to have to find the file afterwards and fix it up, as that requires an mmrestripefs to clear the second copy.

Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will gpfs forever warn me that due to fs changes I have data at risk?
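To illustrate what I'm after, something along these lines - an untested sketch, where 'fgpool' stands in for a pool backed by a single failure group and 'nobackup' for the fileset in question:

RULE 'oneCopy' SET POOL 'fgpool' REPLICATE(1) FOR FILESET ('nobackup')
RULE 'default' SET POOL 'system'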
Thanks

Simon

From makaplan at us.ibm.com  Mon Nov 30 17:58:23 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Mon, 30 Nov 2015 12:58:23 -0500
Subject: [gpfsug-discuss] Placement policies and copies
In-Reply-To:
References:
Message-ID: <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>

From the Advanced Admin book:

File placement rules:

RULE ['RuleName'] SET POOL 'PoolName'
    [LIMIT (OccupancyPercentage)]
    [REPLICATE (DataReplication)]
    [FOR FILESET ('FilesetName'[,'FilesetName']...)]
    [WHERE SqlExpression]

So, use REPLICATE(1).

That's for new files as they are being created. You can use mmapplypolicy and the MIGRATE rule to change the replication factor of files that already exist.

--marc of GPFS.

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/30/2015 11:27 AM
Subject: [gpfsug-discuss] Placement policies and copies
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data for which I'd like only 1 copy made. I know that files and directories don't inherit 1 copy based on their parent. Can I do this with a placement rule to change the number of copies to 1? I don't really want to have to find the file afterwards and fix it up, as that requires an mmrestripefs to clear the second copy. Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will gpfs forever warn me that due to fs changes I have data at risk?

Thanks

Simon

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mweil at genome.wustl.edu  Mon Nov 30 18:42:21 2015
From: mweil at genome.wustl.edu (Matt Weil)
Date: Mon, 30 Nov 2015 12:42:21 -0600
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
Message-ID: <565C988D.5060604@genome.wustl.edu>

Hello all,

Not sure if this is a good place, but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

#strace of systemd shows that the device is not there. Systemd sees that the path has failed and umounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.
1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory) 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory) 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 # It appears that the major min numbers have been changed [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239 [root at gennsd4 system]# ls -l /dev/aggr3 brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 [root at gennsd4 system]# ls /sys/dev/block/239:235 ls: cannot access /sys/dev/block/239:235: No such file or directory [root at gennsd4 system]# rpm -qa | grep gpfs gpfs.gpl-4.1.0-7.noarch gpfs.gskit-8.0.50-32.x86_64 gpfs.msg.en_US-4.1.0-7.noarch gpfs.docs-4.1.0-7.noarch gpfs.base-4.1.0-7.x86_64 gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 gpfs.ext-4.1.0-7.x86_64 [root at gennsd4 system]# rpm -qa | grep systemd systemd-sysv-219-19.el7.x86_64 systemd-libs-219-19.el7.x86_64 systemd-219-19.el7.x86_64 systemd-python-219-19.el7.x86_64 any help would be appreciated. Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. From puneetc at us.ibm.com Mon Nov 30 18:53:04 2015 From: puneetc at us.ibm.com (Puneet Chaudhary) Date: Mon, 30 Nov 2015 13:53:04 -0500 Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 In-Reply-To: <565C988D.5060604@genome.wustl.edu> References: <565C988D.5060604@genome.wustl.edu> Message-ID: <201511301853.tAUIrARZ004937@d03av05.boulder.ibm.com> Matt, GPFS version 4.1.0-8 and prior had an issue with RHEL 7.1 systemd. Red Hat introduced new changes is systemd that led to this issue. Subsequently Red Hat issued an errata and reverted the changes to systemd ( https://rhn.redhat.com/errata/RHBA-2015-0738.html). Please update the level of systemd on your nodes which will address the issue. Regards, Puneet Chaudhary Scalable I/O Development General Parallel File System (GPFS) and Technical Computing (TC) Solutions Enablement Phone: 1-720-342-1546 | Mobile: 1-845-475-8806 IBM E-mail: puneetc at us.ibm.com 2455 South Rd Poughkeepsie, NY 12601-5400 United States From: Matt Weil To: gpfsug main discussion list Date: 11/30/2015 01:42 PM Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 Sent by: gpfsug-discuss-bounces at spectrumscale.org Hello all, Not sure if this is the a good place but we are experiencing a strange issue. It appears that systemd is un-mounting the file system immediately after it is mounted. #strace of systemd shows that the device is not there. Systemd sees that the path is failed and umounts the device. Our only work around currently is to link /usr/bin/umount to true. Then the device stays mounted. 
1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory) 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory) 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 # It appears that the major min numbers have been changed [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239 [root at gennsd4 system]# ls -l /dev/aggr3 brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 [root at gennsd4 system]# ls /sys/dev/block/239:235 ls: cannot access /sys/dev/block/239:235: No such file or directory [root at gennsd4 system]# rpm -qa | grep gpfs gpfs.gpl-4.1.0-7.noarch gpfs.gskit-8.0.50-32.x86_64 gpfs.msg.en_US-4.1.0-7.noarch gpfs.docs-4.1.0-7.noarch gpfs.base-4.1.0-7.x86_64 gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 gpfs.ext-4.1.0-7.x86_64 [root at gennsd4 system]# rpm -qa | grep systemd systemd-sysv-219-19.el7.x86_64 systemd-libs-219-19.el7.x86_64 systemd-219-19.el7.x86_64 systemd-python-219-19.el7.x86_64 any help would be appreciated. Thanks Matt ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 09076871.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Mon Nov 30 18:55:42 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 30 Nov 2015 18:55:42 +0000 Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 In-Reply-To: <565C988D.5060604@genome.wustl.edu> References: <565C988D.5060604@genome.wustl.edu> Message-ID: I'm sure I read about this, possibly the release notes or faq. Cant find it right now, but I did find a post on devworks: https://www.ibm.com/developerworks/community/forums/html/threadTopic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7 So sounds like you need a higher gpfs version, or possibly a rhel patch. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Matt Weil [mweil at genome.wustl.edu] Sent: 30 November 2015 18:42 To: gpfsug main discussion list Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000 Hello all, Not sure if this is the a good place but we are experiencing a strange issue. 
It appears that systemd is un-mounting the file system immediately after it is mounted.

#strace of systemd shows that the device is not there. Systemd sees that the path has failed and umounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major/minor numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From kywang at us.ibm.com  Mon Nov 30 19:00:13 2015
From: kywang at us.ibm.com (Kuei-Yu Wang-Knop)
Date: Mon, 30 Nov 2015 14:00:13 -0500
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <565C988D.5060604@genome.wustl.edu>
References: <565C988D.5060604@genome.wustl.edu>
Message-ID: <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>

It appears to be a known problem; it is fixed in GPFS 4.1.1.0, which has been tested with RHEL 7.1.

This is the detail on the issue:

Problem: systemd commit ff502445 is included in the RHEL 7.1/SLES 12 systemd; the new systemd will try to check the status of the BindsTo device. If the BindsTo device is inactive, systemd will fail the mount job and unmount the file system. Unfortunately, a device created with mknod will always be marked as inactive by systemd, and GPFS invokes mknod to create its block devices under /dev, hence the unmount issue.

Fix: Udev/systemd reads device info from kernel sysfs, while a device created by mknod is not registered in the kernel; that is why systemd fails to read the device info and the device status stays inactive.
Under the new distros, a new tsctl setPseudoDisk command has been implemented; it takes the role of mknod and registers the pseudo device for each GPFS file system in kernel sysfs before mounting, to make systemd happy.

------------------------------------
Kuei-Yu Wang-Knop
IBM Scalable I/O development
(845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com

From: Matt Weil
To: gpfsug main discussion list
Date: 11/30/2015 01:42 PM
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hello all,

Not sure if this is a good place, but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

#strace of systemd shows that the device is not there. Systemd sees that the path has failed and umounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major/minor numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From stijn.deweirdt at ugent.be  Mon Nov 30 19:31:49 2015
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Mon, 30 Nov 2015 20:31:49 +0100
Subject: [gpfsug-discuss] HDFS protocol in 4.2
Message-ID: <565CA425.9070109@ugent.be>

hi all,

the gpfs 4.2.0 advanced administration guide has a section on the HDFS protocol.
while reading it, i'm a bit puzzled whether this has any advantage for a non-FPO site. we are still experimenting with the "regular" gpfs hadoop connector, so it would be nice to hear about any advantages (besides protocol transparency) over the hadoop connector. in particular, performance comes to mind ;)

the admin guide advises to enable local read, which seems understandable for FPO, but what does this mean for a non-FPO site? sending data over RPC is probably worse performance-wise compared to the gpfs hadoop binding.

also, are there any other advantages possible with proper name and data node services from the hdfs protocol? (like zero-copy shuffle on gpfs, something that didn't seem to exist with the connector during some tests we ran, and which was a bit disappointing, being a shared filesystem and all that)

many thanks,

stijn

From S.J.Thompson at bham.ac.uk  Mon Nov 30 20:19:39 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Mon, 30 Nov 2015 20:19:39 +0000
Subject: [gpfsug-discuss] Placement policies and copies
In-Reply-To: <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>
References: , <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>
Message-ID:

Hi Marc,

Thanks. With the migrate option, does it remove the second copy if already present? Or do you still need to do an mmrestripefs to reclaim the space?

Related: if the storage pool has multiple failure groups, will GPFS place the data into a single failure group, or will it spray the data over all NSD disks in all failure groups? I think I'll stick to using a pool with NSD disks in a single failure group, so I know where the files are, but it would be useful to know.

I assume that if the pool then goes offline, I won't lose my whole FS, just not have access to the non-replicated fileset?

Thanks

Simon

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com]
Sent: 30 November 2015 17:58
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Placement policies and copies

From the Advanced Admin book:

File placement rules:

RULE ['RuleName'] SET POOL 'PoolName'
    [LIMIT (OccupancyPercentage)]
    [REPLICATE (DataReplication)]
    [FOR FILESET ('FilesetName'[,'FilesetName']...)]
    [WHERE SqlExpression]

So, use REPLICATE(1).

That's for new files as they are being created. You can use mmapplypolicy and the MIGRATE rule to change the replication factor of files that already exist.

--marc of GPFS.

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/30/2015 11:27 AM
Subject: [gpfsug-discuss] Placement policies and copies
Sent by: gpfsug-discuss-bounces at spectrumscale.org

________________________________

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data for which I'd like only 1 copy made. I know that files and directories don't inherit 1 copy based on their parent. Can I do this with a placement rule to change the number of copies to 1? I don't really want to have to find the file afterwards and fix it up, as that requires an mmrestripefs to clear the second copy. Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will gpfs forever warn me that due to fs changes I have data at risk?
Thanks

Simon

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mweil at genome.wustl.edu  Mon Nov 30 22:13:16 2015
From: mweil at genome.wustl.edu (Matt Weil)
Date: Mon, 30 Nov 2015 16:13:16 -0600
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>
References: <565C988D.5060604@genome.wustl.edu> <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>
Message-ID: <565CC9FC.8080506@genome.wustl.edu>

Thanks. That was the problem.

On 11/30/15 1:00 PM, Kuei-Yu Wang-Knop wrote:
>
> It appears to be a known problem; it is fixed in GPFS 4.1.1.0, which has been tested with RHEL 7.1.
>
> This is the detail on the issue:
>
> Problem: systemd commit ff502445 is included in the RHEL 7.1/SLES 12 systemd; the new systemd will try to check the status of the BindsTo device. If the BindsTo device is inactive, systemd will fail the mount job and unmount the file system. Unfortunately, a device created with mknod will always be marked as inactive by systemd, and GPFS invokes mknod to create its block devices under /dev, hence the unmount issue.
>
> Fix: Udev/systemd reads device info from kernel sysfs, while a device created by mknod is not registered in the kernel; that is why systemd fails to read the device info and the device status stays inactive. Under the new distros, a new tsctl setPseudoDisk command has been implemented; it takes the role of mknod and registers the pseudo device for each GPFS file system in kernel sysfs before mounting, to make systemd happy.
>
> ------------------------------------
> Kuei-Yu Wang-Knop
> IBM Scalable I/O development
> (845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com
>
> From: Matt Weil
> To: gpfsug main discussion list
> Date: 11/30/2015 01:42 PM
> Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------------------------------------------------
>
> Hello all,
>
> Not sure if this is a good place, but we are experiencing a strange issue.
>
> It appears that systemd is un-mounting the file system immediately after it is mounted.
>
> #strace of systemd shows that the device is not there. Systemd sees that the path has failed and umounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.
> > 1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, > 235), ...}) = 0 > 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 > ENOENT (No such file or directory) > 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No > such file or directory) > 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 > > # It appears that the major min numbers have been changed > [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 > lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> > ../../devices/virtual/block/dm-239 > [root at gennsd4 system]# ls -l /dev/aggr3 > brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 > [root at gennsd4 system]# ls /sys/dev/block/239:235 > ls: cannot access /sys/dev/block/239:235: No such file or directory > > [root at gennsd4 system]# rpm -qa | grep gpfs > gpfs.gpl-4.1.0-7.noarch > gpfs.gskit-8.0.50-32.x86_64 > gpfs.msg.en_US-4.1.0-7.noarch > gpfs.docs-4.1.0-7.noarch > gpfs.base-4.1.0-7.x86_64 > gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 > gpfs.ext-4.1.0-7.x86_64 > [root at gennsd4 system]# rpm -qa | grep systemd > systemd-sysv-219-19.el7.x86_64 > systemd-libs-219-19.el7.x86_64 > systemd-219-19.el7.x86_64 > systemd-python-219-19.el7.x86_64 > > any help would be appreciated. > > Thanks > > Matt > > ____ > This email message is a private communication. The information > transmitted, including attachments, is intended only for the person or > entity to which it is addressed and may contain confidential, > privileged, and/or proprietary material. Any review, duplication, > retransmission, distribution, or other use of, or taking of any action > in reliance upon, this information by persons or entities other than > the intended recipient is unauthorized by the sender and is > prohibited. If you have received this message in error, please contact > the sender immediately by return email and delete the original message > from all computer systems. Thank you. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: image/gif
Size: 105 bytes
Desc: not available
URL:
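For anyone who wants to check whether a node is still exposed to the systemd unmount problem described above, a rough, untested sketch (it assumes an rpm-based install; the 4.1.1-0 threshold follows Kuei-Yu's statement that the fix is in GPFS 4.1.1.0, and the systemd level is only printed so you can compare it yourself against errata RHBA-2015-0738):

#!/bin/bash
# Report the package versions relevant to the RHEL 7.1 systemd unmount issue.
gpfs=$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' gpfs.base 2>/dev/null)
sysd=$(rpm -q systemd 2>/dev/null)
echo "gpfs.base: ${gpfs:-not installed}"
echo "systemd:   ${sysd:-not installed}"
# sort -V picks the lower of the two versions; if that is not 4.1.1-0,
# the installed GPFS predates the release carrying the fix.
if [ -n "$gpfs" ] && \
   [ "$(printf '%s\n' "$gpfs" 4.1.1-0 | sort -V | head -n1)" != "4.1.1-0" ]; then
    echo "GPFS older than 4.1.1.0 - verify systemd against RHBA-2015-0738"
fi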
This would remove any possibility of escalation exploits gaining access to other servers via SSH keys on the IB fabric but will reduce this nodes speed of access. On the other hand almost any IB faster than SDR probably is going to wait on the external network unless it's 40Gb or 100Gb attached. > > -firewalled access and/or narrow corridor for ftp access. This is pretty much a must. > > -fail2ban like product checking the ftp logs. Takes some work, but if the firewall isn't narrow enough this is worth it. > > Ed Wahl > OSC > > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [martin.gasthuber at desy.de] > Sent: Monday, November 02, 2015 8:53 AM > To: gpfsug main discussion list > Subject: [gpfsug-discuss] GPFS (partly) inside dmz > > Hi, > > we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon runs with their ethernet ports inside a dmz. On the other hand, all NSD access is through IB (and should stay that way). The biggest concerns are around the possible intrude from that ftp host (running as GPFS client) through the IB infrastructure to other cluster nodes and possible causing big troubles on the scientific data. Did anybody here has similar constrains and possible solutions to mitigate that risk ? > > best regards, > Martin > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From peserocka at gmail.com Tue Nov 3 02:32:56 2015 From: peserocka at gmail.com (Pete Sero) Date: Tue, 3 Nov 2015 10:32:56 +0800 Subject: [gpfsug-discuss] GPFS (partly) inside dmz In-Reply-To: References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Have you tested prefetching reads on the NFS server node? That should help for streaming reads as ultimatively initial by the ftp user. ? Peter On 2015 Nov 3 Tue, at 04:49, Martin Gasthuber wrote: > the path via NFS is already checked - problem here is not the bandwidth, although the WAN ports allows for 2 x 10GE, its the file rate we need to optimize. With NFS, in between GPFS and FTP, we saw ~2 times less file download rate. My concern are also not really about raw IB access and misuse - its because IPoIB, in order to minimize the risk, we had to reconfigure all other cluster nodes to refuse IP connects through the IB ports from that node - more work, less fun ! Probably we had to go the slower NFS way ;-) > > best regards, > Martin >> On 2 Nov, 2015, at 16:22, Wahl, Edward wrote: >> >> First off let me recommend vsftpd. We've used that in a few single point to point cases to excellent results. >> >> Next, I'm going to agree with Johnathan here, any hacker that someone gains advantage on an FTP server will probably not have the knowledge to take advantage of the IB, however there are some steps you could take to mitigate this on a node such as you are thinking of: >> >> -Perhaps an NFS share from an NSD across IB instead of being a native GPFS client? 
> This would remove any possibility of escalation exploits gaining access to other servers via SSH keys on the IB fabric but will reduce this node's speed of access. On the other hand almost any IB faster than SDR is probably going to wait on the external network unless it's 40Gb or 100Gb attached.
>
> -firewalled access and/or narrow corridor for ftp access. This is pretty much a must.
>
> -fail2ban-like product checking the ftp logs. Takes some work, but if the firewall isn't narrow enough this is worth it.
>
> Ed Wahl
> OSC
>
> ________________________________________
> From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [martin.gasthuber at desy.de]
> Sent: Monday, November 02, 2015 8:53 AM
> To: gpfsug main discussion list
> Subject: [gpfsug-discuss] GPFS (partly) inside dmz
>
> Hi,
>
> we are currently in discussion with our local network security people about the plan to make certain data accessible to outside scientists via ftp - this implies that the host running the ftp daemon runs with its ethernet ports inside a dmz. On the other hand, all NSD access is through IB (and should stay that way). The biggest concerns are around a possible intrusion from that ftp host (running as a GPFS client) through the IB infrastructure to other cluster nodes, possibly causing big trouble with the scientific data. Does anybody here have similar constraints and possible solutions to mitigate that risk?
>
> best regards,
> Martin
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From peserocka at gmail.com  Tue Nov 3 02:32:56 2015
From: peserocka at gmail.com (Pete Sero)
Date: Tue, 3 Nov 2015 10:32:56 +0800
Subject: [gpfsug-discuss] GPFS (partly) inside dmz
In-Reply-To:
References: <863BF91C-9F02-40AD-B540-5065A04558B5@desy.de> <9DA9EC7A281AC7428A9618AFDC49049955B0B3E3@CIO-TNC-D1MBX10.osuad.osu.edu>
Message-ID:

Have you tested prefetching reads on the NFS server node? That should help for streaming reads as ultimately initiated by the ftp user.

-- Peter

On 2015 Nov 3 Tue, at 04:49, Martin Gasthuber wrote:

> the path via NFS is already checked - the problem here is not the bandwidth, although the WAN ports allow for 2 x 10GE; it's the file rate we need to optimize. With NFS between GPFS and FTP, we saw an ~2 times lower file download rate. My concerns are also not really about raw IB access and misuse - it's because of IPoIB: in order to minimize the risk, we had to reconfigure all other cluster nodes to refuse IP connections through the IB ports from that node - more work, less fun! We'll probably have to go the slower NFS way ;-)
>
> best regards,
> Martin
>
>> On 2 Nov, 2015, at 16:22, Wahl, Edward wrote:
>>
>> First off let me recommend vsftpd. We've used that in a few single point-to-point cases to excellent results.
>>
>> Next, I'm going to agree with Jonathan here: any hacker that gains advantage on an FTP server will probably not have the knowledge to take advantage of the IB; however, there are some steps you could take to mitigate this on a node such as you are thinking of:
>>
>> -Perhaps an NFS share from an NSD across IB instead of being a native GPFS client?
My concern are also not really about raw IB access and > misuse - its because IPoIB, in order to minimize the risk, we had to > reconfigure all other cluster nodes to refuse IP connects through the IB > ports from that node - more work, less fun ! Probably we had to go the > slower NFS way ;-) > > best regards, > Martin > > On 2 Nov, 2015, at 16:22, Wahl, Edward wrote: > > > > First off let me recommend vsftpd. We've used that in a few single > point to point cases to excellent results. > > > > Next, I'm going to agree with Johnathan here, any hacker that someone > gains advantage on an FTP server will probably not have the knowledge to > take advantage of the IB, however there are some steps you could take to > mitigate this on a node such as you are thinking of: > > > > -Perhaps an NFS share from an NSD across IB instead of being a native > GPFS client? This would remove any possibility of escalation exploits > gaining access to other servers via SSH keys on the IB fabric but will > reduce this nodes speed of access. On the other hand almost any IB faster > than SDR probably is going to wait on the external network unless it's 40Gb > or 100Gb attached. > > > > -firewalled access and/or narrow corridor for ftp access. This is pretty > much a must. > > > > -fail2ban like product checking the ftp logs. Takes some work, but if > the firewall isn't narrow enough this is worth it. > > > > Ed Wahl > > OSC > > > > > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of Martin Gasthuber [ > martin.gasthuber at desy.de] > > Sent: Monday, November 02, 2015 8:53 AM > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] GPFS (partly) inside dmz > > > > Hi, > > > > we are currently in discussion with our local network security people > about the plan to make certain data accessible to outside scientists via > ftp - this implies that the host running the ftp daemon runs with their > ethernet ports inside a dmz. On the other hand, all NSD access is through > IB (and should stay that way). The biggest concerns are around the possible > intrude from that ftp host (running as GPFS client) through the IB > infrastructure to other cluster nodes and possible causing big troubles on > the scientific data. Did anybody here has similar constrains and possible > solutions to mitigate that risk ? > > > > best regards, > > Martin > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Wed Nov 4 18:18:21 2015 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Wed, 4 Nov 2015 18:18:21 +0000 Subject: [gpfsug-discuss] AFM performance under load Message-ID: <563A4BED.1040801@ed.ac.uk> Hi folks, We're trying to get our AFM stack to remain responsive when under a heavy write load from the cache -> home. 
It looks like read operations won't get scheduled when there's a large write queue, and operations like "ls" in a directory which isn't currently valid in the cache can take several minutes to return. Does anyone have any ideas on how to stop AFM lookups running slowly when the AFM queues are big? ----------- Orlando -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From S.J.Thompson at bham.ac.uk Thu Nov 5 16:51:00 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 5 Nov 2015 16:51:00 +0000 Subject: [gpfsug-discuss] Running the gui Message-ID: Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? Thanks Simon From Robert.Oesterlin at nuance.com Thu Nov 5 16:55:42 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 5 Nov 2015 16:55:42 +0000 Subject: [gpfsug-discuss] Running the gui Message-ID: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com> Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Thu Nov 5 17:10:46 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 5 Nov 2015 17:10:46 +0000 Subject: [gpfsug-discuss] Running the gui In-Reply-To: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com> References: <2DD690DB-6510-4C5F-848A-91FC15DA6C84@nuance.com> Message-ID: Yeah. Works and requires. What I'm trying to figure out. Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com] Sent: 05 November 2015 16:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? From duersch at us.ibm.com Mon Nov 9 16:27:54 2015 From: duersch at us.ibm.com (Steve Duersch) Date: Mon, 9 Nov 2015 11:27:54 -0500 Subject: [gpfsug-discuss] Running the GUI In-Reply-To: References: Message-ID: I have confirmed that the GUI will run on a client license and is fully supported there. It can be any node. 
Steve Duersch Spectrum Scale (GPFS) FVTest IBM Poughkeepsie, New York Date: Thu, 5 Nov 2015 16:51:00 +0000 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Subject: [gpfsug-discuss] Running the gui Message-ID: Content-Type: text/plain; charset="us-ascii" Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? Thanks Simon From: gpfsug-discuss-request at spectrumscale.org To: gpfsug-discuss at spectrumscale.org Date: 11/06/2015 07:00 AM Subject: gpfsug-discuss Digest, Vol 46, Issue 4 Sent by: gpfsug-discuss-bounces at spectrumscale.org Send gpfsug-discuss mailing list submissions to gpfsug-discuss at spectrumscale.org To subscribe or unsubscribe via the World Wide Web, visit http://gpfsug.org/mailman/listinfo/gpfsug-discuss or, via email, send a message with subject or body 'help' to gpfsug-discuss-request at spectrumscale.org You can reach the person managing the list at gpfsug-discuss-owner at spectrumscale.org When replying, please edit your Subject line so it is more specific than "Re: Contents of gpfsug-discuss digest..." Today's Topics: 1. Running the gui (Simon Thompson (Research Computing - IT Services)) 2. Re: Running the gui (Oesterlin, Robert) 3. Re: Running the gui (Simon Thompson (Research Computing - IT Services)) ---------------------------------------------------------------------- Message: 1 Date: Thu, 5 Nov 2015 16:51:00 +0000 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Subject: [gpfsug-discuss] Running the gui Message-ID: Content-Type: text/plain; charset="us-ascii" Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? Thanks Simon ------------------------------ Message: 2 Date: Thu, 5 Nov 2015 16:55:42 +0000 From: "Oesterlin, Robert" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Message-ID: <2DD690DB-6510-4C5F-848A-91FC15DA6C84 at nuance.com> Content-Type: text/plain; charset="utf-8" Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? -------------- next part -------------- An HTML attachment was scrubbed... URL: < http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20151105/e39af88a/attachment-0001.html > ------------------------------ Message: 3 Date: Thu, 5 Nov 2015 17:10:46 +0000 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Message-ID: Content-Type: text/plain; charset="Windows-1252" Yeah. Works and requires. What I'm trying to figure out. 
Simon ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Oesterlin, Robert [Robert.Oesterlin at nuance.com] Sent: 05 November 2015 16:55 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Running the gui Well, in my beta testing, it runs just fine with a client licensed node. Can?t imagine it requiring a server license. Bob Oesterlin Sr Storage Engineer, Nuance Communications 507-269-0413 From: > on behalf of "Simon Thompson (Research Computing - IT Services)" > Reply-To: gpfsug main discussion list > Date: Thursday, November 5, 2015 at 11:51 AM To: gpfsug main discussion list > Subject: [gpfsug-discuss] Running the gui Quick question, the gui and performance monitor has to run on a node in the cluster. Does anyone know if that can be any node? Or does it have to have a server license? ------------------------------ _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss End of gpfsug-discuss Digest, Vol 46, Issue 4 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From st.graf at fz-juelich.de Tue Nov 10 07:53:19 2015 From: st.graf at fz-juelich.de (Stephan Graf) Date: Tue, 10 Nov 2015 08:53:19 +0100 Subject: [gpfsug-discuss] ILM and Backup Question In-Reply-To: <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com> References: <81E9FF09-D666-4BD1-A727-39AF4ED1F54B@iu.edu> <562DE7B5.7080303@fz-juelich.de> <201510262114.t9QLENpG024083@d01av01.pok.ibm.com> <562F21B7.8040007@fz-juelich.de> <201510271526.t9RFQ2Bw027971@d03av02.boulder.ibm.com> <563081E9.2090605@fz-juelich.de> <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com> Message-ID: <5641A26F.4070405@fz-juelich.de> Hi Wayne. Just to come back to the mmbackup performance. Here the way we call it and the performance results: MTHREADS=1 QOPT="" # we check the lust run and set this to '-q' if required' /usr/lpp/mmfs/bin/mmbackup /$FS -S $SNAPFILE -g /work/root/mmbackup -a 4 $QOPT -m $MTHREADS -B 1000 -N justt sms04c1 --noquote --tsm-servers home -v -------------------------------------------------------- mmbackup: Backup of /homeb begins at Mon Nov 9 00:03:30 MEZ 2015. -------------------------------------------------------- ... Mon Nov 9 00:03:35 2015 mmbackup:Scanning file system homeb Mon Nov 9 03:07:17 2015 mmbackup:File system scan of homeb is complete. Mon Nov 9 03:07:17 2015 mmbackup:Calculating backup and expire lists for server home Mon Nov 9 03:07:17 2015 mmbackup:Determining file system changes for homeb [home]. Mon Nov 9 03:44:33 2015 mmbackup:changed=126305, expired=10086, unsupported=0 for server [home] Mon Nov 9 03:44:33 2015 mmbackup:Finished calculating lists [126305 changed, 10086 expired] for server home. Mon Nov 9 03:44:33 2015 mmbackup:Sending files to the TSM server [126305 changed, 10086 expired]. Mon Nov 9 03:44:33 2015 mmbackup:Performing expire operations Mon Nov 9 03:45:32 2015 mmbackup:Completed policy expiry run with 0 policy errors, 0 files failed, 0 severe errors, returning r c=0. 
Mon Nov 9 03:45:32 2015 mmbackup:Policy for expiry returned 0 Highest TSM error 0
Mon Nov 9 03:45:32 2015 mmbackup:Performing backup operations
Mon Nov 9 04:54:29 2015 mmbackup:Completed policy backup run with 0 policy errors, 0 files failed, 0 severe errors, returning rc=0.
Mon Nov 9 04:54:29 2015 mmbackup:Policy for backup returned 0 Highest TSM error 0
Total number of objects inspected: 137562
Total number of objects backed up: 127476
Total number of objects updated: 0
Total number of objects rebound: 0
Total number of objects deleted: 0
Total number of objects expired: 10086
Total number of objects failed: 0
Total number of bytes transferred: 427 GB
Total number of objects encrypted: 0
Total number of bytes inspected: 459986708656
Total number of bytes transferred: 459989351070
Mon Nov 9 04:54:29 2015 mmbackup:analyzing: results from home.
Mon Nov 9 04:54:29 2015 mmbackup:Analyzing audit log file /homeb/mmbackup.audit.homeb.home
Mon Nov 9 05:02:46 2015 mmbackup:updating /homeb/.mmbackupShadow.1.home with /homeb/.mmbackupCfg/tmpfile2.mmbackup.homeb
Mon Nov 9 05:02:46 2015 mmbackup:Copying updated shadow file to the TSM server
Mon Nov 9 05:03:51 2015 mmbackup:Done working with files for TSM Server: home.
Mon Nov 9 05:03:51 2015 mmbackup:Completed backup and expire jobs.
Mon Nov 9 05:03:51 2015 mmbackup:TSM server home had 0 failures or excluded paths and returned 0. Its shadow database has been updated. Shadow DB state:updated
Mon Nov 9 05:03:51 2015 mmbackup:Completed successfully. exit 0
----------------------------------------------------------
mmbackup: Backup of /homeb completed successfully at Mon Nov 9 05:03:51 MEZ 2015.
----------------------------------------------------------

Stephan

On 10/28/15 14:36, Wayne Sawdon wrote:
>
> You have to use both options even if -N is only the local node. Sorry,
>
> -Wayne
>
> From: Stephan Graf
> To:
> Date: 10/28/2015 01:06 AM
> Subject: Re: [gpfsug-discuss] ILM and Backup Question
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
> ------------------------------------------------------------------------
>
> Hi Wayne!
>
> We are using -g, and we only want to run it on one node, so we don't use the -N option.
>
> Stephan
>
> On 10/27/15 16:25, Wayne Sawdon wrote:
> >
> > From: Stephan Graf
> >
> > We are running the mmbackup on an AIX system
> > oslevel -s
> > 6100-07-10-1415
> > Current GPFS build: "4.1.0.8".
> >
> > So we only use one node for the policy run.
> >
> > Even on one node you should see a speedup using -g and -N.
> > -Wayne
>
> ------------------------------------------------------------------------------------------------
> Forschungszentrum Juelich GmbH
> 52425 Juelich
> Sitz der Gesellschaft: Juelich
> Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
> Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
> Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
> Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
> Prof. Dr. Sebastian M. Schmidt
> ------------------------------------------------------------------------------------------------

From makaplan at us.ibm.com Tue Nov 10 16:20:18 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Tue, 10 Nov 2015 11:20:18 -0500
Subject: [gpfsug-discuss] ILM and Backup Question
In-Reply-To: <5641A26F.4070405@fz-juelich.de>
References: <81E9FF09-D666-4BD1-A727-39AF4ED1F54B@iu.edu> <562DE7B5.7080303@fz-juelich.de> <201510262114.t9QLENpG024083@d01av01.pok.ibm.com> <562F21B7.8040007@fz-juelich.de> <201510271526.t9RFQ2Bw027971@d03av02.boulder.ibm.com> <563081E9.2090605@fz-juelich.de> <201510281336.t9SDaiNa015723@d01av01.pok.ibm.com> <5641A26F.4070405@fz-juelich.de>
Message-ID: <201511101620.tAAGKRg0010175@d03av03.boulder.ibm.com>

OOPS... mmbackup uses mmapplypolicy. Unfortunately the script "mmapplypolicy" is a little "too smart". When you use the "-N mynode" parameter, it sees that you are referring to just the node upon which you are executing and does not pass the -N argument to the underlying tsapolicy command. (Not my idea, just telling you what's there.)

So right now, to force the parallelized inode scan on a single node, please just use the tsapolicy command with -N and -g. tsapolicy doesn't do such smart argument checking; it is also missing the nodefile, nodeclass, defaultHelperNodes stuff ... those are some of the "value add" of the mmapplypolicy script.

If you're running the parallel version with message level -L 1, you will see this message:

[I] 2015-11-10@15:57:47.871 Parallel-piped sort and policy evaluation. 5 files scanned.

Otherwise you will see this message:

[I] 2015-11-10@15:49:44.816 Policy evaluation. 5 files scanned.

But ... if you're running mmapplypolicy under mmbackup... a little more hacking is required.
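To make Wayne's "-g and -N" advice and Marc's workaround concrete, here is a minimal sketch. The file system, policy file, and node names are illustrative, and the tsapolicy invocation in particular is an assumption (its options are not formally documented), so verify both against your own installation before relying on them:

    # documented route: a list of helper nodes plus a shared global work directory
    mmapplypolicy /homeb -P backup.rules -N nodeA,nodeB -g /homeb/tmp -L 1

    # single-node route per Marc: call the underlying tsapolicy directly with -N and -g
    /usr/lpp/mmfs/bin/tsapolicy /homeb -P backup.rules -N nodeA -g /homeb/tmp -L 1

With -L 1, the "Parallel-piped sort and policy evaluation" message confirms that the parallel scan is actually in effect.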
From Robert.Oesterlin at nuance.com Wed Nov 11 13:01:30 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 11 Nov 2015 13:01:30 +0000
Subject: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
Message-ID: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>

The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!

The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.

Here is the agenda:

1:00 - 1:10 - GPFS-UG US chapter Overview - Bob Oesterlin/Kristy Kallback-Rose
1:10 - 1:20 - Kick-off - Doris Conti/Akhtar Ali
1:20 - 2:10 - Roadmap & technical deep dive - Scott Fadden
2:10 - 2:30 - GUI Demo - Ben Randall
2:30 - 3:00 - Product quality improvement updates - Hye-Young

3:00 - 3:15 - Break

3:10 to 3:35 - The Hartree Centre, Past, present and future - Colin Morey of UK HPC
3:35 to 4:00 - Low Latency performance with Flash - Mark Weghorst of Travelport
4:00 to 4:25 - "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
4:25 to 4:50 - "Large Data Ingest Architecture" - Martin Gasthuber of DESY
4:50 - 5:45 - Panel Discussion: "My favorite tool for managing Spectrum Scale is..."
Panelists:
Bob Oesterlin, Nuance (Arxscan)
Wolfgang Bring, Julich (homegrown)
Mark Weghorst, Travelport (open source based on Graphana & FluxDB)

5:45 - Welcome Reception by DSS (sponsoring reception)

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

From service at metamodul.com Wed Nov 11 16:57:49 2015
From: service at metamodul.com (service at metamodul.com)
Date: Wed, 11 Nov 2015 17:57:49 +0100 (CET)
Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400)
Message-ID: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>

@IBM

GPFS and HA

GPFS now has the so-called protocol nodes, which provide an HA environment for NFS and Samba. I assume it is based on CTDB, since CTDB already supports a few protocols.*

What I would like to see is a generic HA interface using GPFS. It could be based on CTDB, native GPFS callbacks, or any service providing HA functionality on top of a clustered FS. Such a service would allow - with only minor extensions - making almost any service (Oracle, DB2, FTP, SSH, NFS, CRON, TSM and so on) HA. So IMHO the current approach is a little bit shortsighted.

GPFS and System i

I am looking forward to the day we have a SQL interface/API to GPFS, thus storing DB objects natively on GPFS and not using any kind of additional DB files. Now, if you had such an interface, what about a general modern language which supports SQL and can run across multiple nodes? Who knows ... maybe the AS/400 gets reinvented.

cheers
Hajo

Reference:
* https://ctdb.samba.org/documentation.html

From sfadden at us.ibm.com Wed Nov 11 19:12:05 2015
From: sfadden at us.ibm.com (Scott Fadden)
Date: Wed, 11 Nov 2015 11:12:05 -0800
Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400)
In-Reply-To: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>
References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de>
Message-ID: <201511111921.tABJLbrG011143@d01av04.pok.ibm.com>

It is probably not what you are looking for, but I did implement a two node HA solution using callbacks for SNMP. You could do something like that in the near term.
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/Implementing%20a%20GPFS%20HA%20SNMP%20configuration%20using%20Callbacks

Scott Fadden
Spectrum Scale - Technical Marketing
Phone: (503) 880-5833
sfadden at us.ibm.com
http://www.ibm.com/systems/storage/spectrum/scale

From: "service at metamodul.com"
To: gpfsug main discussion list
Date: 11/11/2015 08:58 AM
Subject: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400)
Sent by: gpfsug-discuss-bounces at spectrumscale.org

@IBM

GPFS and HA

GPFS now has the so-called protocol nodes, which provide an HA environment for NFS and Samba. I assume it is based on CTDB, since CTDB already supports a few protocols.*

What I would like to see is a generic HA interface using GPFS. It could be based on CTDB, native GPFS callbacks, or any service providing HA functionality on top of a clustered FS. Such a service would allow - with only minor extensions - making almost any service (Oracle, DB2, FTP, SSH, NFS, CRON, TSM and so on) HA. So IMHO the current approach is a little bit shortsighted.

GPFS and System i

I am looking forward to the day we have a SQL interface/API to GPFS, thus storing DB objects natively on GPFS and not using any kind of additional DB files. Now, if you had such an interface, what about a general modern language which supports SQL and can run across multiple nodes? Who knows ... maybe the AS/400 gets reinvented.

cheers
Hajo

Reference:
* https://ctdb.samba.org/documentation.html
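For anyone wanting to reproduce Scott's approach, the general shape is to register a script against GPFS events with mmaddcallback. A minimal sketch, assuming a hypothetical failover script of your own; check the mmaddcallback man page for the exact events and %-variables your release supports:

    # run a (hypothetical) failover script whenever a node leaves the cluster
    mmaddcallback snmpHaFailover \
        --command /usr/local/sbin/snmp-ha-failover.sh \
        --event nodeLeave \
        --parms "%eventNode"

    mmlscallback                   # list registered callbacks
    mmdelcallback snmpHaFailover   # remove it again

The script itself then decides whether the surviving node should take over the monitored service - which is essentially the two-node HA pattern Scott describes in the wiki article above.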
From RWelp at uk.ibm.com Thu Nov 12 20:11:27 2015
From: RWelp at uk.ibm.com (Richard Welp)
Date: Thu, 12 Nov 2015 20:11:27 +0000
Subject: [gpfsug-discuss] Meet the Devs - Edinburgh
Message-ID:

Hello All,

I recently posted a blog entry to the User Group website outlining the Meet the Devs meeting we had in Edinburgh. If you are interested, here is a link to the recap -> http://www.spectrumscale.org/meet-the-devs-edinburgh/

Thanks,
Rick

===================
Rick Welp
Software Engineer
Master Inventor
Email: rwelp at uk.ibm.com
phone: +44 0161 214 0461

IBM Systems - Manchester Lab
IBM UK Limited
--------------------------

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From volobuev at us.ibm.com Fri Nov 13 00:08:22 2015
From: volobuev at us.ibm.com (Yuri L Volobuev)
Date: Thu, 12 Nov 2015 16:08:22 -0800
Subject: [gpfsug-discuss] NSD Server Design and Tuning
Message-ID: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com>

Hi

The subject of GPFS NSD server tuning, and the underlying design that dictates tuning choices, has been coming up repeatedly in various forums, including this mailing list. Clearly, this topic hasn't been documented in sufficient detail. It is my sincere hope that the new document on the subject is going to provide some relief: https://ibm.biz/BdHq5v

As always, feedback is welcome.

yuri

From carlz at us.ibm.com Fri Nov 13 13:33:01 2015
From: carlz at us.ibm.com (Carl Zetie)
Date: Fri, 13 Nov 2015 08:33:01 -0500
Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale
Message-ID: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com>

In response to requests from the community, we've added a new way to submit Public enhancement requests (RFEs) for Scale.

In the past, RFEs were private, which was great for business-sensitive requests, but meant that other people couldn't effectively vote on them; and requests would often be duplicated because people couldn't see the detail of existing requests.

So now we have TWO ways to submit a request. When you go to the RFE page on developerworks (https://www.ibm.com/developerworks/rfe/), you'll find two entries for Scale in the "products": one for Private RFEs (same as previously), and one for Public RFEs.
Simply choose the visibility you want. Internally, they all go into the same evaluation process.

A couple of notes:

- Even with a public request, certain fields are still private, including Company Name and Business Justification
- All existing requests remain Private. If you have one that you want flipped, please contact me off-list with the request number

regards,

Carl

Carl Zetie
Product Manager for Spectrum Scale, IBM
(540) 882 9353 ][ 15750 Brookhill Ct, Waterford VA 20197
carlz at us.ibm.com

From Robert.Oesterlin at nuance.com Fri Nov 13 20:33:55 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 13 Nov 2015 20:33:55 +0000
Subject: Re: [gpfsug-discuss] NSD Server Design and Tuning
In-Reply-To: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com>
References: <201511130008.tAD08Rih003504@d03av03.boulder.ibm.com>
Message-ID:

Yuri - this is a fantastic document! Thanks for taking the time to put it together. I'll probably have a lot more questions after I really look at my NSD configuration.

Encourage the Spectrum Scale team to do more of these.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

_____________________________
From: Yuri L Volobuev >
Sent: Thursday, November 12, 2015 6:08 PM
Subject: [gpfsug-discuss] NSD Server Design and Tuning
To: >

Hi

The subject of GPFS NSD server tuning, and the underlying design that dictates tuning choices, has been coming up repeatedly in various forums, including this mailing list. Clearly, this topic hasn't been documented in sufficient detail. It is my sincere hope that the new document on the subject is going to provide some relief: https://ibm.biz/BdHq5v

As always, feedback is welcome.

yuri

From bsallen at alcf.anl.gov Fri Nov 13 21:21:36 2015
From: bsallen at alcf.anl.gov (Allen, Benjamin S.)
Date: Fri, 13 Nov 2015 21:21:36 +0000
Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
In-Reply-To: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>
References: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>
Message-ID: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>

Hi Bob,

For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards?

Thanks,

Ben

> On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote:
>
> The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!
>
> The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.
>
> Here is the agenda:
>
> 1:00 - 1:10 - GPFS-UG US chapter Overview - Bob Oesterlin/Kristy Kallback-Rose
> 1:10 - 1:20 - Kick-off - Doris Conti/Akhtar Ali
> 1:20 - 2:10 - Roadmap & technical deep dive - Scott Fadden
> 2:10 - 2:30 - GUI Demo - Ben Randall
> 2:30 - 3:00 - Product quality improvement updates - Hye-Young
>
> 3:00 - 3:15 - Break
>
> 3:10 to 3:35 - The Hartree Centre, Past, present and future - Colin Morey of UK HPC
> 3:35 to 4:00 - Low Latency performance with Flash - Mark Weghorst of Travelport
> 4:00 to 4:25 - "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
> 4:25 to 4:50 - "Large Data Ingest Architecture" - Martin Gasthuber of DESY
> 4:50 - 5:45 - Panel Discussion: "My favorite tool for managing Spectrum Scale is..."
> Panelists:
> Bob Oesterlin, Nuance (Arxscan)
> Wolfgang Bring, Julich (homegrown)
> Mark Weghorst, Travelport (open source based on Graphana & FluxDB)
>
> 5:45 - Welcome Reception by DSS (sponsoring reception)
>
> Bob Oesterlin
> Sr Storage Engineer, Nuance Communications

From S.J.Thompson at bham.ac.uk Fri Nov 13 21:34:58 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Fri, 13 Nov 2015 21:34:58 +0000
Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
In-Reply-To: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>
References: <91FB69CE-ED08-4D85-A126-0E49ACAD27E3@nuance.com>, <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>
Message-ID:

Hi Ben,

We always try to ask whether speakers are happy to have their slides posted online afterwards. Obviously if there are NDA slides in the deck then we can't share.

Simon
________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Allen, Benjamin S.
[bsallen at alcf.anl.gov]
Sent: 13 November 2015 21:21
To: gpfsug main discussion list
Cc: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda

Hi Bob,

For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards?

Thanks,

Ben

> On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote:
>
> The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!
>
> The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.
>
> Here is the agenda:
>
> 1:00 - 1:10 - GPFS-UG US chapter Overview - Bob Oesterlin/Kristy Kallback-Rose
> 1:10 - 1:20 - Kick-off - Doris Conti/Akhtar Ali
> 1:20 - 2:10 - Roadmap & technical deep dive - Scott Fadden
> 2:10 - 2:30 - GUI Demo - Ben Randall
> 2:30 - 3:00 - Product quality improvement updates - Hye-Young
>
> 3:00 - 3:15 - Break
>
> 3:10 to 3:35 - The Hartree Centre, Past, present and future - Colin Morey of UK HPC
> 3:35 to 4:00 - Low Latency performance with Flash - Mark Weghorst of Travelport
> 4:00 to 4:25 - "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
> 4:25 to 4:50 - "Large Data Ingest Architecture" - Martin Gasthuber of DESY
> 4:50 - 5:45 - Panel Discussion: "My favorite tool for managing Spectrum Scale is..."
> Panelists:
> Bob Oesterlin, Nuance (Arxscan)
> Wolfgang Bring, Julich (homegrown)
> Mark Weghorst, Travelport (open source based on Graphana & FluxDB)
>
> 5:45 - Welcome Reception by DSS (sponsoring reception)
>
> Bob Oesterlin
> Sr Storage Engineer, Nuance Communications

From kallbac at iu.edu Fri Nov 13 21:44:22 2015
From: kallbac at iu.edu (Kristy Kallback-Rose)
Date: Fri, 13 Nov 2015 16:44:22 -0500
Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
In-Reply-To: <8E021133-25EC-4478-9894-07FECB8368C8@alcf.anl.gov>
Message-ID: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com>

We will collect as many as we can and put them up with a blog post.

Kristy

On Nov 13, 2015 4:21 PM, "Allen, Benjamin S." wrote:
>
> Hi Bob,
>
> For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards?
>
> Thanks,
>
> Ben
>
> > On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote:
> >
> > The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!
> >
> > The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.
> >
> > Here is the agenda:
> >
> > 1:00 - 1:10 - GPFS-UG US chapter Overview - Bob Oesterlin/Kristy Kallback-Rose
> > 1:10 - 1:20 - Kick-off - Doris Conti/Akhtar Ali
> > 1:20 - 2:10 - Roadmap & technical deep dive - Scott Fadden
> > 2:10 - 2:30 - GUI Demo - Ben Randall
> > 2:30 - 3:00 - Product quality improvement updates - Hye-Young
> >
> > 3:00 - 3:15 - Break
> > 3:10 to 3:35 - The Hartree Centre, Past, present and future - Colin Morey of UK HPC
> > 3:35 to 4:00 - Low Latency performance with Flash - Mark Weghorst of Travelport
> > 4:00 to 4:25 - "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
> > 4:25 to 4:50 - "Large Data Ingest Architecture" - Martin Gasthuber of DESY
> > 4:50 - 5:45 - Panel Discussion: "My favorite tool for managing Spectrum Scale is..."
> > Panelists:
> > Bob Oesterlin, Nuance (Arxscan)
> > Wolfgang Bring, Julich (homegrown)
> > Mark Weghorst, Travelport (open source based on Graphana & FluxDB)
> >
> > 5:45 - Welcome Reception by DSS (sponsoring reception)
> >
> > Bob Oesterlin
> > Sr Storage Engineer, Nuance Communications

From bsallen at alcf.anl.gov Fri Nov 13 22:22:29 2015
From: bsallen at alcf.anl.gov (Allen, Benjamin S.)
Date: Fri, 13 Nov 2015 22:22:29 +0000
Subject: Re: [gpfsug-discuss] GPFSUG Meeting at SC15 - Final Agenda
In-Reply-To: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com>
References: <6b0611d1-24fd-4145-96dd-aff7d751a8ae@email.android.com>
Message-ID: <2602E279-E811-4AB4-8E77-746D96B28B34@alcf.anl.gov>

Thanks Kristy and Simon.

Ben

> On Nov 13, 2015, at 3:44 PM, Kristy Kallback-Rose wrote:
>
> We will collect as many as we can and put them up with a blog post.
>
> Kristy
>
> On Nov 13, 2015 4:21 PM, "Allen, Benjamin S." wrote:
>>
>> Hi Bob,
>>
>> For those of us that can't make SC this year, could you possibly collect slides and share them to the group afterwards?
>>
>> Thanks,
>>
>> Ben
>>
>>> On Nov 11, 2015, at 7:01 AM, Oesterlin, Robert wrote:
>>>
>>> The GPFS UG meeting at SC15 is just a few days away. We have close to 200 signed up, so it should be a great time!
>>>
>>> The meeting starts at 1 PM US Central time at the JW Marriott in Austin. It's pretty packed - I'm sure there will be time after and during the week for extended discussions.
>>>
>>> Here is the agenda:
>>>
>>> 1:00 - 1:10 - GPFS-UG US chapter Overview - Bob Oesterlin/Kristy Kallback-Rose
>>> 1:10 - 1:20 - Kick-off - Doris Conti/Akhtar Ali
>>> 1:20 - 2:10 - Roadmap & technical deep dive - Scott Fadden
>>> 2:10 - 2:30 - GUI Demo - Ben Randall
>>> 2:30 - 3:00 - Product quality improvement updates - Hye-Young
>>>
>>> 3:00 - 3:15 - Break
>>>
>>> 3:10 to 3:35 - The Hartree Centre, Past, present and future - Colin Morey of UK HPC
>>> 3:35 to 4:00 - Low Latency performance with Flash - Mark Weghorst of Travelport
>>> 4:00 to 4:25 - "Performance Tuning & results with Latest ESS configurations" - Matt Forney & Bernard of WSU/Ennovar
>>> 4:25 to 4:50 - "Large Data Ingest Architecture" - Martin Gasthuber of DESY
>>> 4:50 - 5:45 - Panel Discussion: "My favorite tool for managing Spectrum Scale is..."
>>> Panelists:
>>> Bob Oesterlin, Nuance (Arxscan)
>>> Wolfgang Bring, Julich (homegrown)
>>> Mark Weghorst, Travelport (open source based on Graphana & FluxDB)
>>>
>>> 5:45 - Welcome Reception by DSS (sponsoring reception)
>>>
>>> Bob Oesterlin
>>> Sr Storage Engineer, Nuance Communications

From Robert.Oesterlin at nuance.com Sun Nov 15 00:55:56 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Sun, 15 Nov 2015 00:55:56 +0000
Subject: Re: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale
In-Reply-To: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com>
References: <201511131333.tADDXGUL010059@d01av02.pok.ibm.com>
Message-ID:

Great news Carl - thanks for your help in getting this in place.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

From: > on behalf of Carl Zetie >
Reply-To: gpfsug main discussion list >
Date: Friday, November 13, 2015 at 7:33 AM
To: "gpfsug-discuss at spectrumscale.org" >
Subject: [gpfsug-discuss] Announce: You can now file PUBLIC enhancement requests for Scale

In response to requests from the community, we've added a new way to submit Public enhancement requests (RFEs) for Scale.

In the past, RFEs were private, which was great for business-sensitive requests, but meant that other people couldn't effectively vote on them; and requests would often be duplicated because people couldn't see the detail of existing requests.

So now we have TWO ways to submit a request. When you go to the RFE page on developerworks (https://www.ibm.com/developerworks/rfe/), you'll find two entries for Scale in the "products": one for Private RFEs (same as previously), and one for Public RFEs. Simply choose the visibility you want. Internally, they all go into the same evaluation process.

A couple of notes:

- Even with a public request, certain fields are still private, including Company Name and Business Justification
- All existing requests remain Private. If you have one that you want flipped, please contact me off-list with the request number

regards,

Carl

Carl Zetie
Product Manager for Spectrum Scale, IBM
(540) 882 9353 ][ 15750 Brookhill Ct, Waterford VA 20197
carlz at us.ibm.com

From chair at spectrumscale.org Mon Nov 16 12:26:52 2015
From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson))
Date: Mon, 16 Nov 2015 06:26:52 -0600
Subject: [gpfsug-discuss] SC15 UG Survey
Message-ID:

Hi,

For those at yesterday's meeting at SC15, just a reminder that there is an online survey for feedback at:

http://www.surveymonkey.com/r/SSUGSC15

Thanks to all the speakers yesterday and to Kristy, Bob and the IBM people (Doug, Pallavi) for making it happen.
Simon

From service at metamodul.com Mon Nov 16 18:13:05 2015
From: service at metamodul.com (service at metamodul.com)
Date: Mon, 16 Nov 2015 19:13:05 +0100 (CET)
Subject: Re: [gpfsug-discuss] GPFS and High Availability , GPFS and the System i (AS/400)
In-Reply-To: <201511111920.tABJK3Ga016406@d01av05.pok.ibm.com>
References: <1525396761.174075.1447261069961.JavaMail.open-xchange@oxbaltgw05.schlund.de> <201511111920.tABJK3Ga016406@d01av05.pok.ibm.com>
Message-ID: <772407947.175151.1447697585599.JavaMail.open-xchange@oxbaltgw02.schlund.de>

Hi Scott,

> It is probably not what you are looking for, but I did implement a two node
> HA solution using callbacks for SNMP. ...

I knew about that, and I even wrote my own generic HA API for GPFS based on the very old GPFS callbacks (preunmount, ...).

I am trying to make IBM aware that they have a very nice product (GPFS) which just needs a little HA API on top to be able to provide generic HA application support out of the box.

I must admit that I could rewrite my own HA API (a script and a config file) for GPFS, but I have no time or money for it. I must also admit that I am not the best shell script writer.

Cheers
Hajo

From chair at spectrumscale.org Mon Nov 16 23:47:51 2015
From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson))
Date: Mon, 16 Nov 2015 17:47:51 -0600
Subject: [gpfsug-discuss] SC15 User Group Slides
Message-ID:

Hi All,

Slides from the SC15 user group meeting in Austin have been posted to the UG website at:

http://www.spectrumscale.org/presentations/

Simon

From cphoffma at lanl.gov Fri Nov 20 16:52:23 2015
From: cphoffma at lanl.gov (Hoffman, Christopher P)
Date: Fri, 20 Nov 2015 16:52:23 +0000
Subject: [gpfsug-discuss] GPFS API Question
Message-ID:

Greetings,

I hope this is the correct place to post this; if not, I apologize. I'm attempting to work with extended attributes on GPFS using the C API interface. I want to be able to read an attribute and then, based on its value, change the attribute. What I've done so far is a policy scan that collects certain inodes based on an xattr value. From there I collect inode numbers. Just to clarify, I'm trying not to work with a path name of any sort, just inodes.

There are these functions:

int gpfs_igetattrsx(gpfs_ifile_t *ifile, int flags, void *buffer, int bufferSize, int *attrSize);

and

int gpfs_iputattrsx(gpfs_ifile_t *ifile, int flags, void *buffer, const char *pathName);

I'm looking at how to use iputattrsx, but the void *buffer part confuses me as to what struct to use. I've been playing with igetattrsx to try to figure out what struct to use based on the data I am seeing. I've come across gpfsGetSetXAttr_t but haven't had any luck using it.

My question is: is it even possible to manipulate custom xattrs via the GPFS API? If so, any ideas on what I am doing wrong?

Thanks,
Christopher
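Since gpfs_igetattrsx/gpfs_iputattrsx treat the attribute data as a single opaque blob (as the reply below spells out), the only safe round trip at this level is fetch-then-replay. Here is a minimal C sketch under stated assumptions: you already hold gpfs_ifile_t handles from gpfs_iopen(), ENOSPC is what an undersized buffer returns, and flags=0 is acceptable - check gpfs.h on your system before trusting any of that:

    #include <errno.h>
    #include <stdlib.h>
    #include <gpfs.h>   /* gpfs_ifile_t, gpfs_igetattrsx(), gpfs_iputattrsx() */

    /* Fetch the opaque attribute blob from 'src' and replay it onto 'dst'.
     * The blob is never inspected or modified - its layout is undocumented. */
    int copy_opaque_attrs(gpfs_ifile_t *src, gpfs_ifile_t *dst, const char *dstPath)
    {
        char probe[1];
        int  attrSize = 0;

        /* First call with a tiny buffer just to learn the required size. */
        if (gpfs_igetattrsx(src, 0, probe, sizeof(probe), &attrSize) != 0
            && errno != ENOSPC)
            return -1;                    /* real error */
        if (attrSize <= 0)
            return 0;                     /* nothing to carry over */

        char *blob = malloc(attrSize);
        if (blob == NULL)
            return -1;

        int rc = gpfs_igetattrsx(src, 0, blob, attrSize, &attrSize);
        if (rc == 0)                      /* dstPath is only handed through */
            rc = gpfs_iputattrsx(dst, 0, blob, dstPath);

        free(blob);
        return rc;
    }

What this cannot do is edit one attribute inside the blob, which is exactly the point Marc addresses next.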
From makaplan at us.ibm.com Fri Nov 20 17:39:04 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Fri, 20 Nov 2015 12:39:04 -0500
Subject: [gpfsug-discuss] GPFS API Question - extended attributes
In-Reply-To:
References:
Message-ID: <201511201739.tAKHdBBG006478@d01av03.pok.ibm.com>

If you're using policy rules and the xattr() SQL function, then you should consider using the setXattr() SQL function to set or change the value of any particular extended attribute.

Notice that the doc says:

gpfs_igetattrs() subroutine: Retrieves extended file attributes in opaque format.

What it does is pick up all the extended attributes of a given file and return them in a "blob". The structure of the blob is undocumented, so you should not use it to set individual extended attributes. The intended use is for backup and restore of a file's extended attributes, and you get an ACL also as a bonus. The doc says: "This subroutine is intended for use by a backup program to save all extended file attributes (ACLs, attributes, and so forth)."

If you are determined to use a C API to manipulate extended attributes, I personally recommend that you first see and try if the standard OS methods will work for you. That means your code will work for any file system that can be mounted on your OS that supports extended attributes. BUT, unfortunately, I have found that some extended attribute names with special prefix values cannot be accessed with the standard Linux or AIX or POSIX commands or APIs. In that case you need to use the GPFS API, GPFS_FCNTL_SET_XATTR (see gpfs_fcntl.h), which is indeed what setXattr() is using and what the mmchattr command ultimately uses.

Notice that setXattr() requires that you pass the new value as an SQL string. So what if you need to store a numeric value as a "binary" value? Well, first figure out how to represent the value as a hexadecimal constant, and then use this notation:

setXattr('user.whatever', X'0123456789ABCDEF')

In some common situations you can use the m4 processor to build or tear down binary and/or hexadecimal values and strings. For some examples of how to do that, add this to a test policy rules file:

debugfile(/tmp/m4xdeb) dumpdef

And peek into the resulting m4xdeb file!
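As a concrete (and hedged) illustration of the policy-based route Marc recommends: a rule sketch that flips an attribute during the scan itself, with no path names handled by the caller. The attribute name, values, and list name are invented, and the ACTION clause behavior should be verified against the policy rules chapter for your release:

    /* retag.pol - the empty EXEC means no external program is launched */
    RULE EXTERNAL LIST 'retagged' EXEC ''
    RULE 'retag' LIST 'retagged'
         ACTION(setXattr('user.dataState', 'processed'))
         WHERE xattr('user.dataState') = 'pending'

Running something like mmapplypolicy /gpfs/fs1 -P retag.pol -I yes then rewrites the attribute on every file the WHERE clause selects.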
From S.J.Thompson at bham.ac.uk Tue Nov 24 12:48:29 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Tue, 24 Nov 2015 12:48:29 +0000
Subject: [gpfsug-discuss] 4.2.0 and callhome
Message-ID:

Does anyone know what the call home rpm packages in the 4.2.0 release do?

The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it.

Searching for "call home" and "callhome" in the online docs doesn't seem to find anything.

Anyone have any insight on what this is all about?

Thanks

Simon

From Robert.Oesterlin at nuance.com Tue Nov 24 13:30:11 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Tue, 24 Nov 2015 13:30:11 +0000
Subject: [gpfsug-discuss] 4.2.0 and callhome
Message-ID: <4D197A26-6843-4903-AB89-08F121136F03@nuance.com>

It's listed as an "optional" package for Linux nodes, according to the documentation - but I can't find it documented either.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

From: > on behalf of "Simon Thompson (Research Computing - IT Services)" >
Reply-To: gpfsug main discussion list >
Date: Tuesday, November 24, 2015 at 6:48 AM
To: gpfsug main discussion list >
Subject: [gpfsug-discuss] 4.2.0 and callhome

Does anyone know what the call home rpm packages in the 4.2.0 release do?

From PAULROBE at uk.ibm.com Tue Nov 24 13:45:54 2015
From: PAULROBE at uk.ibm.com (Paul Roberts)
Date: Tue, 24 Nov 2015 13:45:54 +0000
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome
In-Reply-To:
References:
Message-ID: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com>

Hi Simon,

there is a section on call home in the Spectrum Scale 4.2 knowledge centre:
http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html

It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section, which is available as a pdf here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf

Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries.

Best wishes,

Paul

======================================================
Dr Paul Roberts, IBM Spectrum Scale - Development Engineer
IBM Systems UK
IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK
E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424
======================================================

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 24/11/2015 12:48
Subject: [gpfsug-discuss] 4.2.0 and callhome
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Does anyone know what the call home rpm packages in the 4.2.0 release do?

The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it.

Searching for "call home" and "callhome" in the online docs doesn't seem to find anything.

Anyone have any insight on what this is all about?

Thanks

Simon

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From S.J.Thompson at bham.ac.uk Tue Nov 24 13:51:53 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Tue, 24 Nov 2015 13:51:53 +0000
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome
In-Reply-To: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com>
References: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com>
Message-ID:

Thanks for the pointer Paul. It appears that searching for anything in the docs doesn't work ...
Simon

From: > on behalf of Paul Roberts >
Reply-To: gpfsug main discussion list >
Date: Tuesday, 24 November 2015 at 13:45
To: gpfsug main discussion list >
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome

Hi Simon,

there is a section on call home in the Spectrum Scale 4.2 knowledge centre:
http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html

It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section, which is available as a pdf here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf

Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries.

Best wishes,

Paul

======================================================
Dr Paul Roberts, IBM Spectrum Scale - Development Engineer
IBM Systems UK
IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK
E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424
======================================================

From: "Simon Thompson (Research Computing - IT Services)" >
To: gpfsug main discussion list >
Date: 24/11/2015 12:48
Subject: [gpfsug-discuss] 4.2.0 and callhome
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Does anyone know what the call home rpm packages in the 4.2.0 release do?

The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it.

Searching for "call home" and "callhome" in the online docs doesn't seem to find anything.

Anyone have any insight on what this is all about?

Thanks

Simon

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

From knop at us.ibm.com Tue Nov 24 16:35:56 2015
From: knop at us.ibm.com (Felipe Knop)
Date: Tue, 24 Nov 2015 11:35:56 -0500
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome
In-Reply-To:
References: <201511241247.tAOCl4pS012723@d06av10.portsmouth.uk.ibm.com>
Message-ID: <201511241636.tAOGa62F002867@d01av03.pok.ibm.com>

Simon, all,

The Call Home facility is described in the Advanced Administration Guide

http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf

Chapter 24. Understanding the call home function

A problem has been identified with the indexing facility for the Spectrum Scale 4.2 publications. The team is working to rectify that.

Felipe

----
Felipe Knop
knop at us.ibm.com
GPFS Development
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/24/2015 08:52 AM
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Thanks for the pointer Paul. It appears that searching for anything in the docs doesn't work ...
Simon

From: on behalf of Paul Roberts
Reply-To: gpfsug main discussion list
Date: Tuesday, 24 November 2015 at 13:45
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome

Hi Simon,

there is a section on call home in the Spectrum Scale 4.2 knowledge centre:
http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html

It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section, which is available as a pdf here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf

Hope that helps give you some idea, I'm sure someone with more knowledge about Call Home can answer any specific queries.

Best wishes,

Paul

======================================================
Dr Paul Roberts, IBM Spectrum Scale - Development Engineer
IBM Systems UK
IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK
E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424
======================================================

From: "Simon Thompson (Research Computing - IT Services)" < S.J.Thompson at bham.ac.uk>
To: gpfsug main discussion list
Date: 24/11/2015 12:48
Subject: [gpfsug-discuss] 4.2.0 and callhome
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Does anyone know what the call home rpm packages in the 4.2.0 release do?

The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it.

Searching for "call home" and "callhome" in the online docs doesn't seem to find anything.

Anyone have any insight on what this is all about?

Thanks

Simon

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
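For readers who, like Simon, want to see what the call home packages actually do before installing them: chapter 24 documents an mmcallhome command for configuring, enabling, and scheduling the uploads. The sketch below is assembled from later releases' documentation and may not match 4.2.0 exactly - the customer details are placeholders - so treat it as illustrative and check the mmcallhome man page on your cluster first:

    # describe the customer, then explicitly opt in
    mmcallhome info change --customer-name "Example Lab" --customer-id 12345 \
        --email storage-admin@example.com --country-code DE
    mmcallhome capability enable

    # group the nodes and verify what would be gathered
    mmcallhome group auto
    mmcallhome status list

The design is opt-in: installing the rpms alone should not start any data collection until the capability is explicitly enabled.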
From s.m.killen at leeds.ac.uk Wed Nov 25 17:52:30 2015
From: s.m.killen at leeds.ac.uk (Sean Killen)
Date: Wed, 25 Nov 2015 17:52:30 +0000
Subject: [gpfsug-discuss] Introduction
Message-ID:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Hello everyone,

Just joined the list to be part of the community, so here is a bit about me.

I'm Sean Killen and I work in the Faculty of Biological Sciences at the University of Leeds. I am responsible for Research Computing, UNIX/Linux, Storage and Virtualisation. I am new to GPFS/Spectrum Scale and am currently evaluating a setup with a view to acquiring it, primarily to manage a multi-PetaByte storage system for Research Data coming from our new Electron Microscopes, but also with a view to rolling it out to manage and curate all the research data within the Faculty and beyond.

Yours

- -- Sean

- -------------------------------------------------------------------
    Dr Sean M Killen
    Research Computing Manager, IT
    Faculty of Biological Sciences
    University of Leeds
    LEEDS LS2 9JT
    United Kingdom

    Tel: +44 (0)113 3433148
    Mob: +44 (0)776 8670907
    Fax: +44 (0)113 3438465
    GnuPG Key ID: ee0d36f0
- -------------------------------------------------------------------
-----BEGIN PGP SIGNATURE-----

iGcEAREKACcgHFMgTSBLaWxsZW4gPHNlYW5Aa2lsbGVucy5jby51az4FAlZV9VUA
CgkQEm087+4NNvA+xACg61vxW34Li7tMV8dwNPXy+muO834Anj6ZM2y0j6MWHbRr
WFZqTG99oeD+
=GSNu
-----END PGP SIGNATURE-----

From tpathare at sidra.org Thu Nov 26 15:47:17 2015
From: tpathare at sidra.org (Tushar Pathare)
Date: Thu, 26 Nov 2015 15:47:17 +0000
Subject: [gpfsug-discuss] How can we give read access to GPFS data while restricting data copy.
Message-ID: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org>

Hello Team,

Is it possible to share data on GPFS while preventing it from being copied? Is it possible through ACLs?

Tushar B Pathare
High Performance Computing (HPC) Administrator
General Parallel File System
Scientific Computing
Bioinformatics Division
Research

Sidra Medical and Research Centre
PO Box 26999 | Doha, Qatar
Burj Doha Tower, Floor 8
D +974 44042250 | M +974 74793547
tpathare at sidra.org | www.sidra.org

Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center.

From jonathan at buzzard.me.uk Thu Nov 26 23:21:22 2015
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Thu, 26 Nov 2015 23:21:22 +0000
Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data while restricting data copy.
In-Reply-To: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org>
References: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org>
Message-ID: <565793F2.5070407@buzzard.me.uk>

On 26/11/15 15:47, Tushar Pathare wrote:
> Hello Team,
>
> Is it possible to share data on GPFS while preventing it from being copied?
>
> Is it possible through ACLs?
>

I don't believe that what you are asking is technically possible in any mainstream operating system/file system combination. It certainly cannot be achieved with ACLs, whether POSIX, NFSv4 or NTFS.

The only way to achieve this sort of thing is using digital rights management, which is way beyond the scope of a file system in itself. These are all application specific. In addition, these are invariably all a busted flush anyway. Torrents of movies etc. are all the proof one needs of this.

The short and curlies are: if the end user can view the data in any meaningful way to them, then they can make a copy of that data. From a file system perspective you can't defeat the following command line.

$ cat readonly_file > my_evil_copy

JAB.

--
Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.
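The half of the question that is achievable - granting read-only access in the first place - can be done with the GPFS ACL tools. A minimal sketch, with an invented user and path; the exact ACL entry syntax depends on whether the file system uses POSIX or NFSv4 ACLs, so take the entry line as an assumption and model it on mmgetacl's actual output:

    mmgetacl /gpfs/research/dataset.dat > /tmp/acl.txt
    # append an entry along the lines of  user:scientist:r---  to /tmp/acl.txt, then:
    mmputacl -i /tmp/acl.txt /gpfs/research/dataset.dat
    mmgetacl /gpfs/research/dataset.dat   # verify the read-only entry

Whether that user then copies what they can read is, as Jonathan says, not something the file system can control.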
From chair at spectrumscale.org Fri Nov 27 16:01:42 2015
From: chair at spectrumscale.org (GPFS UG Chair (Simon Thompson))
Date: Fri, 27 Nov 2015 16:01:42 +0000
Subject: [gpfsug-discuss] User group etiquette
Message-ID:

Hi All,

I'd just like to remind all users of the user group that this group is intended to be a technically focussed group and is not intended as a sales lead opportunity. In the past we've had good relationships with many vendors who have engaged in technical discussion on the list, and I'd like to see this continue. Just recently, however, we've had some complaints that *several* vendors have used the group as a way of trying to generate sales leads.

Please can I gently remind all members of the group that the user group is a technical forum. If we continue to receive complaints that posts to the mailing list are being used as sales leads, then we'll start to ban offenders from participating in the group.

I'm really sorry that we're having to do this, but I strongly believe that as a user community we should be focussed on the technical aspects of the products in use.

Simon
(Chair)

From bhill at physics.ucsd.edu Fri Nov 27 22:03:00 2015
From: bhill at physics.ucsd.edu (Bryan Hill)
Date: Fri, 27 Nov 2015 14:03:00 -0800
Subject: [gpfsug-discuss] Switching from Standard to Advanced
Message-ID:

Hello group:

Are there any special procedures or caveats involved in going from Standard Edition to Advanced Edition (besides purchasing the license, of course)? Can the Advanced Edition RPMs (I'm on RedHat EL 6.7) simply be installed in place over the Standard Edition? I would like to implement the new AFM-based DR feature in version 4.1.1, but this requires the Advanced Edition.

Thanks,
Bryan

---
Bryan Hill
Lead System Administrator
UCSD Physics Computing Facility

9500 Gilman Dr. # 0319
La Jolla, CA 92093
+1-858-534-5538
bhill at ucsd.edu

From daniel.kidger at uk.ibm.com Sat Nov 28 12:56:40 2015
From: daniel.kidger at uk.ibm.com (Daniel Kidger)
Date: Sat, 28 Nov 2015 12:56:40 +0000
Subject: Re: [gpfsug-discuss] Switching from Standard to Advanced
In-Reply-To:
References:
Message-ID: <201511281257.tASCvaAW027707@d06av12.portsmouth.uk.ibm.com>

An HTML attachment was scrubbed...
URL:

From makaplan at us.ibm.com Sat Nov 28 17:49:42 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Sat, 28 Nov 2015 12:49:42 -0500
Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data while restricting data copy.
In-Reply-To: <565793F2.5070407@buzzard.me.uk>
References: <52692F6C7C1EEB4B94E9C3B023DEFEFF1F8B1252@MV3WEXMX04PRV.smrc.sidra.org> <565793F2.5070407@buzzard.me.uk>
Message-ID: <201511281749.tASHnmaU009090@d01av03.pok.ibm.com>

In some ways, Jon Buzzard's answer is correct. However, outside of GPFS, consider:

1) It is certainly possible to provide a user-id that has at most read access to any files and devices: a user that cannot write any files on any device, but perhaps can view them with some applications on some display-only devices.

2) Regardless of (1), I always say, much as Jon, "If you can read it, you can copy it!" Consider: even in a secured facility, on a secure, armored terminal with no means of electrical interfacing, subject to strip search, a spy can commit important secrets to memory. Or, short of strip search, one can always transcribe (copy!) to paper, canvas, parchment, film, or photograph, or otherwise "screen scrape" and copy an image and/or audio to any storage device.
It has also been reported that spy agencies have devices that can screen scrape at a distance, by processing electromagnetic signals (radio, microwave, ...) emanating from ordinary PCs, CRTs, and the like.

From kraemerf at de.ibm.com Sun Nov 29 18:32:39 2015
From: kraemerf at de.ibm.com (Frank Kraemer)
Date: Sun, 29 Nov 2015 19:32:39 +0100
Subject: [gpfsug-discuss] FYI - IBM Redbooks
Message-ID: <201511291832.tATIWpIX023706@d06av11.portsmouth.uk.ibm.com>

IBM Spectrum Scale (formerly GPFS)
Revised: November 17, 2015
ISBN: 0738440736
550 pages

Explore the book online at
http://www.redbooks.ibm.com/redbooks/pdfs/sg248254.pdf

Frank Kraemer
IBM Consulting IT Specialist / Client Technical Architect
Hechtsheimer Str. 2, 55131 Mainz
mailto:kraemerf at de.ibm.com
voice: +49171-3043699
IBM Germany

From kraemerf at de.ibm.com Sun Nov 29 18:34:38 2015
From: kraemerf at de.ibm.com (Frank Kraemer)
Date: Sun, 29 Nov 2015 19:34:38 +0100
Subject: [gpfsug-discuss] FYI - IBM Redpaper
Message-ID: <201511291845.tATIjVeo017922@d06av08.portsmouth.uk.ibm.com>

Implementing IBM Spectrum Scale
Revised: November 20, 2015

More details are available at
http://www.redbooks.ibm.com/redpapers/pdfs/redp5254.pdf

Frank Kraemer
IBM Consulting IT Specialist / Client Technical Architect
Hechtsheimer Str. 2, 55131 Mainz
mailto:kraemerf at de.ibm.com
voice: +49171-3043699
IBM Germany

From service at metamodul.com Sun Nov 29 21:22:49 2015
From: service at metamodul.com (service at metamodul.com)
Date: Sun, 29 Nov 2015 22:22:49 +0100
Subject: Re: [gpfsug-discuss] How can we give read access to GPFS data while restricting data copy.
Message-ID: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com>

I think you are talking about something like the Novell copy-inhibit attribute:
https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html

With the current GPFS it is IMHO not possible. It might become possible if lightweight callbacks get introduced; together with self-defined user attributes it might work.

Hajo

Sent from Samsung Mobile
-------- Original Message --------
From: Tushar Pathare
Date: 2015.11.26 16:47 (GMT+01:00)
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy.
Hello Team,

Is it possible to share data on GPFS while preventing it from being copied?

Is this possible through ACLs?

Tushar B Pathare
High Performance Computing (HPC) Administrator
General Parallel File System
Scientific Computing
Bioinformatics Division
Research

Sidra Medical and Research Centre
PO Box 26999 | Doha, Qatar
Burj Doha Tower, Floor 8
D +974 44042250 | M +974 74793547
tpathare at sidra.org | www.sidra.org

Disclaimer: This email and its attachments may be confidential and are intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient, any reading, printing, storage, disclosure, copying or any other action taken in respect of this e-mail is prohibited and may be unlawful. If you are not the intended recipient, please notify the sender immediately by using the reply function and then permanently delete what you have received. Any views or opinions expressed are solely those of the author and do not necessarily represent those of Sidra Medical and Research Center.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
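For the read-only half of Tushar's question, GPFS ACLs can grant read access without write, though, as others point out in this thread, no ACL stops a reader from copying what they can read. A sketch using the traditional GPFS ACL format handled by mmgetacl/mmputacl follows; the path and the group name are made up for illustration:

  # show the ACL currently on the file
  mmgetacl /gpfs/research/shared/dataset.csv

  # build a read-only ACL: owner keeps control, a named group gets read only
  cat > /tmp/readonly.acl <<'EOF'
  user::rwxc
  group::r---
  other::----
  mask::r---
  group:extscientists:r---
  EOF
  mmputacl -i /tmp/readonly.acl /gpfs/research/shared/dataset.csv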
From bdeluca at gmail.com Sun Nov 29 21:45:52 2015
From: bdeluca at gmail.com (Ben De Luca)
Date: Sun, 29 Nov 2015 23:45:52 +0200
Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy.
In-Reply-To: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com>
References: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com>
Message-ID:

How could anyone have thought of implementing this? If the data can be read into memory, it can be written back out from it...

On 29 November 2015 at 23:22, service at metamodul.com wrote:
> I think you are talking about something like the Novell ci copy inhibit attribute?
> https://www.novell.com/documentation/oes11/stor_filesys_lx/data/bs3fkbm.html
> With the current GPFS it is imho not possible. It might become possible if
> lightweight callbacks get introduced, together with self-defined user
> attributes.
>
> Hajo
>
> Sent from Samsung Mobile
>
> -------- Original Message --------
> From: Tushar Pathare
> Date: 2015.11.26 16:47 (GMT+01:00)
> To: gpfsug-discuss at spectrumscale.org
> Subject: [gpfsug-discuss] How can we give read access to GPFS data with
> restricting data copy.
>
> Hello Team,
>
> Is it possible to share data on GPFS while preventing it from being copied?
>
> Is this possible through ACLs?
>
> Tushar B Pathare
> High Performance Computing (HPC) Administrator
> General Parallel File System
> Scientific Computing
> Bioinformatics Division
> Research
>
> Sidra Medical and Research Centre
> PO Box 26999 | Doha, Qatar
> Burj Doha Tower, Floor 8
> D +974 44042250 | M +974 74793547
> tpathare at sidra.org | www.sidra.org
>
> Disclaimer: This email and its attachments may be confidential and are
> intended solely for the use of the individual to whom it is addressed. If
> you are not the intended recipient, any reading, printing, storage,
> disclosure, copying or any other action taken in respect of this e-mail is
> prohibited and may be unlawful. If you are not the intended recipient,
> please notify the sender immediately by using the reply function and then
> permanently delete what you have received. Any views or opinions expressed
> are solely those of the author and do not necessarily represent those of
> Sidra Medical and Research Center.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jonathan at buzzard.me.uk Sun Nov 29 21:54:35 2015
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Sun, 29 Nov 2015 21:54:35 +0000
Subject: [gpfsug-discuss] How can we give read access to GPFS data with restricting data copy.
In-Reply-To:
References: <63irj9lpe966jlvyr5oj7o8d.1448832169396@email.android.com>
Message-ID: <565B741B.1010003@buzzard.me.uk>

On 29/11/15 21:45, Ben De Luca wrote:
> How could anyone have thought of implementing this? If the data can be
> read into memory, it can be written back out from it...
>

That's my point. Also, unless it is encrypted on the wire, I can just dump it with tcpdump; I guess the issue is how high you want to make the hurdles.

You and I on this list might see DRM as a waste of time; the rest of the population won't find it anywhere near as simple.

JAB.

--
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.

From Robert.Oesterlin at nuance.com Sun Nov 29 23:08:06 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Sun, 29 Nov 2015 23:08:06 +0000
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
Message-ID:

I noticed that IBM only shipped the Zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment, as most of my NSD servers (and the clients) are still at RH 6.6.

gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm
gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm
pmswift-4.2.0-0.noarch.rpm

Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensors RH7 only means many users won't be able to take advantage of this function.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From knop at us.ibm.com Mon Nov 30 03:27:42 2015
From: knop at us.ibm.com (Felipe Knop)
Date: Sun, 29 Nov 2015 22:27:42 -0500
Subject: [gpfsug-discuss] Spectrum Scale 4.2 publications: indexing fixed
Message-ID: <201511300327.tAU3ReiE005929@d01av01.pok.ibm.com>

All,

The indexing problem reported below has now been fixed.

Felipe

----
Felipe Knop                          knop at us.ibm.com
GPFS Development
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314  T/L 293-9314

----- Forwarded by Felipe Knop/Poughkeepsie/IBM on 11/29/2015 10:21 PM -----

From: Felipe Knop/Poughkeepsie/IBM
To: gpfsug main discussion list
Date: 11/24/2015 11:36 AM
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome

Simon, all,

The Call Home facility is described in the Advanced Administration Guide

http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf

Chapter 24. Understanding the call home function

A problem has been identified with the indexing facility for the Spectrum Scale 4.2 publications. The team is working to rectify that.

Felipe

----
Felipe Knop                          knop at us.ibm.com
GPFS Development
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314  T/L 293-9314

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/24/2015 08:52 AM
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Thanks for the pointer, Paul.
It appears that searching for anything in the docs doesn't work ...

Simon

From: on behalf of Paul Roberts
Reply-To: gpfsug main discussion list
Date: Tuesday, 24 November 2015 at 13:45
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] 4.2.0 and callhome

Hi Simon,

there is a section on call home in the Spectrum Scale 4.2 knowledge centre:
http://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html

It's chapter 24 in the "IBM Spectrum Scale V4.2: Advanced Administration Guide" section, which is available as a pdf here:
http://www-01.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY_4.2.0/c2370323.pdf

Hope that helps give you some idea; I'm sure someone with more knowledge about Call Home can answer any specific queries.

Best wishes,

Paul

======================================================
Dr Paul Roberts, IBM Spectrum Scale - Development Engineer
IBM Systems UK
IBM Manchester Lab, 40 Blackfriars Street, Manchester, M3 2EG, UK
E-mail: paulrobe at uk.ibm.com, Telephone: (+44) 161 2140424
======================================================

From: "Simon Thompson (Research Computing - IT Services)" <S.J.Thompson at bham.ac.uk>
To: gpfsug main discussion list
Date: 24/11/2015 12:48
Subject: [gpfsug-discuss] 4.2.0 and callhome
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Does anyone know what the call home rpm packages in the 4.2.0 release do?

The upgrade/install guide tells me to "rpm -i *.rpm", but I'd like to know what this call home stuff is before just blindly installing it.

Searching for "call home" and "callhome" in the online docs doesn't seem to find anything.

Anyone got any insight on what this is all about?

Thanks

Simon

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Tomasz.Wolski at ts.fujitsu.com Mon Nov 30 10:45:36 2015
From: Tomasz.Wolski at ts.fujitsu.com (Tomasz.Wolski at ts.fujitsu.com)
Date: Mon, 30 Nov 2015 10:45:36 +0000
Subject: [gpfsug-discuss] IO performance of replicated GPFS filesystem
Message-ID: <8b3278e23a5b42a3be80629ee18f307b@R01UKEXCASM223.r01.fujitsu.local>

Hi All,

I could use some help from the experts here :)

Please correct me if I'm wrong: I suspect that GPFS filesystem READ performance is better when the filesystem is replicated to, for example, two failure groups, where these failure groups are placed on separate RAID controllers. In this case WRITE performance should be worse, since the same data must go to two locations.

What about the situation where a GPFS filesystem has two metadataOnly NSDs which are also replicated? Does metadata READ performance increase in this way as well (and WRITE decrease)?

Best regards,
Tomasz Wolski

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
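On Tomasz's question, the usual reasoning is that reads can in principle be served from either replica, while every write has to be committed to both failure groups; the same logic applies to replicated metadata. A quick way to see what a file system and an individual file are actually configured for is sketched below; the file-system name and path are hypothetical:

  # default (-m, -r) and maximum (-M, -R) replication for metadata and data
  mmlsfs gpfs01 -m -M -r -R

  # per-file view: how many metadata and data copies this file really has
  mmlsattr -L /gpfs/gpfs01/projects/results.dat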
From Robert.Oesterlin at nuance.com Mon Nov 30 11:11:44 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Mon, 30 Nov 2015 11:11:44 +0000
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
In-Reply-To:
References:
Message-ID:

Thanks Alexander! I'm assuming these can be requested directly from IBM until then via the PMR process. (No need to respond if this is the case.)

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From A.Wolf-Reber at de.ibm.com Mon Nov 30 12:52:10 2015
From: A.Wolf-Reber at de.ibm.com (Alexander Wolf)
Date: Mon, 30 Nov 2015 13:52:10 +0100
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
In-Reply-To:
References:
Message-ID:

This was a mistake. The RHEL6 sensor packages should have been included but were somehow not picked up in the final image. We will fix this with the next PTF.

Mit freundlichen Grüßen / Kind regards

IBM Spectrum Scale

Dr. Alexander Wolf-Reber
Spectrum Scale GUI development lead
Department M069 / Spectrum Scale Software Development
+49-6131-84-6521
a.wolf-reber at de.ibm.com

IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

----- Original message -----
From: "Oesterlin, Robert"
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug main discussion list
Cc:
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
Date: Mon, Nov 30, 2015 12:08 AM

I noticed that IBM only shipped the Zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment, as most of my NSD servers (and the clients) are still at RH 6.6.

gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm
gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm
pmswift-4.2.0-0.noarch.rpm

Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensors RH7 only means many users won't be able to take advantage of this function.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From bbanister at jumptrading.com Mon Nov 30 16:01:58 2015
From: bbanister at jumptrading.com (Bryan Banister)
Date: Mon, 30 Nov 2015 16:01:58 +0000
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
In-Reply-To:
References:
Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05DAB217@CHI-EXCHANGEW1.w2k.jumptrading.com>

Please let us know if there is an APAR number we can track for this, thanks!
-Bryan

-----Original Message-----
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Alexander Wolf
Sent: Monday, November 30, 2015 6:52 AM
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?

This was a mistake. The RHEL6 sensor packages should have been included but were somehow not picked up in the final image. We will fix this with the next PTF.

Mit freundlichen Grüßen / Kind regards

IBM Spectrum Scale
Dr. Alexander Wolf-Reber
Spectrum Scale GUI development lead
Department M069 / Spectrum Scale Software Development
+49-6131-84-6521
a.wolf-reber at de.ibm.com

IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz / Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

----- Original message -----
From: "Oesterlin, Robert"
Sent by: gpfsug-discuss-bounces at spectrumscale.org
To: gpfsug main discussion list
Cc:
Subject: [gpfsug-discuss] Spectrum Scale 4.2 - no support for Zimon Perf sensors on RH6?
Date: Mon, Nov 30, 2015 12:08 AM

I noticed that IBM only shipped the Zimon performance sensors for RH7 with version 4.2. This is a HUGE disappointment, as most of my NSD servers (and the clients) are still at RH 6.6.

gpfs.gss.pmcollector-4.2.0-0.el7.x86_64.rpm
gpfs.gss.pmsensors-4.2.0-0.el7.x86_64.rpm
pmswift-4.2.0-0.noarch.rpm

Can IBM comment on support for RH6 systems with the performance sensors? I understand the collector node must be at RH7. Making the performance sensors RH7 only means many users won't be able to take advantage of this function.

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

________________________________

Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product.

From S.J.Thompson at bham.ac.uk Mon Nov 30 16:27:34 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Mon, 30 Nov 2015 16:27:34 +0000
Subject: [gpfsug-discuss] Placement policies and copies
Message-ID:

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data I'd like to have only 1 copy made of. I know that files and directories don't inherit 1 copy based on their parent.

Can I do this with a placement rule to change the number of copies to 1?

I don't really want to have to find the file afterwards and fix it up, as that requires an mmrestripefs to clear the second copy.

Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will GPFS forever warn me that due to fs changes I have data at risk?
Thanks

Simon

From makaplan at us.ibm.com Mon Nov 30 17:58:23 2015
From: makaplan at us.ibm.com (Marc A Kaplan)
Date: Mon, 30 Nov 2015 12:58:23 -0500
Subject: [gpfsug-discuss] Placement policies and copies
In-Reply-To:
References:
Message-ID: <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>

From the Advanced Admin book:

File placement rules:

RULE ['RuleName'] SET POOL 'PoolName'
  [LIMIT (OccupancyPercentage)]
  [REPLICATE (DataReplication)]
  [FOR FILESET ('FilesetName'[,'FilesetName']...)]
  [WHERE SqlExpression]

So, use REPLICATE(1). That's for new files as they are being created.

You can use mmapplypolicy and the MIGRATE rule to change the replication factor of files that already exist.

--marc of GPFS.

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/30/2015 11:27 AM
Subject: [gpfsug-discuss] Placement policies and copies
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data I'd like to have only 1 copy made of. I know that files and directories don't inherit 1 copy based on their parent.

Can I do this with a placement rule to change the number of copies to 1?

I don't really want to have to find the file afterwards and fix it up, as that requires an mmrestripefs to clear the second copy.

Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will GPFS forever warn me that due to fs changes I have data at risk?

Thanks

Simon

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
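To make Marc's answer concrete, a small policy file combining both rule types might look like the sketch below; the pool and fileset names ('data', 'onecopy') and the file-system name are invented for illustration:

  /* Placement: new files in fileset 'onecopy' get a single data replica. */
  RULE 'r1' SET POOL 'data' REPLICATE(1) FOR FILESET ('onecopy')
  RULE 'default' SET POOL 'data'

  /* Migration: drop files that already exist in the fileset to one replica. */
  RULE 'm1' MIGRATE FROM POOL 'data' TO POOL 'data' REPLICATE(1)
      FOR FILESET ('onecopy')

The placement rules would be installed with something like "mmchpolicy gpfs01 policy.txt", while the MIGRATE rule only does work when driven by "mmapplypolicy gpfs01 -P policy.txt".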
From mweil at genome.wustl.edu Mon Nov 30 18:42:21 2015
From: mweil at genome.wustl.edu (Matt Weil)
Date: Mon, 30 Nov 2015 12:42:21 -0600
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
Message-ID: <565C988D.5060604@genome.wustl.edu>

Hello all,

Not sure if this is a good place, but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

An strace of systemd shows that the device is not there. Systemd sees that the path has failed and unmounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major/minor numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

____
This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you.

From puneetc at us.ibm.com Mon Nov 30 18:53:04 2015
From: puneetc at us.ibm.com (Puneet Chaudhary)
Date: Mon, 30 Nov 2015 13:53:04 -0500
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <565C988D.5060604@genome.wustl.edu>
References: <565C988D.5060604@genome.wustl.edu>
Message-ID: <201511301853.tAUIrARZ004937@d03av05.boulder.ibm.com>

Matt,

GPFS version 4.1.0-8 and prior had an issue with RHEL 7.1 systemd. Red Hat introduced new changes in systemd that led to this issue.

Subsequently Red Hat issued an errata and reverted the changes to systemd (https://rhn.redhat.com/errata/RHBA-2015-0738.html).

Please update the level of systemd on your nodes, which will address the issue.

Regards,
Puneet Chaudhary
Scalable I/O Development
General Parallel File System (GPFS) and Technical Computing (TC) Solutions Enablement
Phone: 1-720-342-1546 | Mobile: 1-845-475-8806
IBM
E-mail: puneetc at us.ibm.com
2455 South Rd
Poughkeepsie, NY 12601-5400
United States

From: Matt Weil
To: gpfsug main discussion list
Date: 11/30/2015 01:42 PM
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hello all,

Not sure if this is a good place, but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

An strace of systemd shows that the device is not there. Systemd sees that the path has failed and unmounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.
1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major/minor numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

____
This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 09076871.gif
Type: image/gif
Size: 1851 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:
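Following Puneet's pointer, each affected node can be checked and brought up to the errata level with standard tooling. This is just the shape of it; the authoritative package versions are the ones listed in RHBA-2015-0738 itself:

  # show the systemd build currently installed on this node
  rpm -q systemd systemd-libs

  # pull the fixed build from the configured RHEL 7.1 repositories
  yum update systemd systemd-libs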
From S.J.Thompson at bham.ac.uk Mon Nov 30 18:55:42 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Mon, 30 Nov 2015 18:55:42 +0000
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <565C988D.5060604@genome.wustl.edu>
References: <565C988D.5060604@genome.wustl.edu>
Message-ID:

I'm sure I read about this, possibly in the release notes or FAQ. Can't find it right now, but I did find a post on devWorks:

https://www.ibm.com/developerworks/community/forums/html/threadTopic?id=00104bb5-acf5-4036-93ba-29ea7b1d43b7

So it sounds like you need a higher GPFS version, or possibly a RHEL patch.

Simon

________________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Matt Weil [mweil at genome.wustl.edu]
Sent: 30 November 2015 18:42
To: gpfsug main discussion list
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000

Hello all,

Not sure if this is a good place, but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

An strace of systemd shows that the device is not there. Systemd sees that the path has failed and unmounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major/minor numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

____
This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From kywang at us.ibm.com Mon Nov 30 19:00:13 2015
From: kywang at us.ibm.com (Kuei-Yu Wang-Knop)
Date: Mon, 30 Nov 2015 14:00:13 -0500
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
In-Reply-To: <565C988D.5060604@genome.wustl.edu>
References: <565C988D.5060604@genome.wustl.edu>
Message-ID: <201511301900.tAUJ0LSl007722@d03av05.boulder.ibm.com>

It appears to be a known problem; it is fixed in GPFS 4.1.1.0, which has been tested with RHEL 7.1.

This is the detail on the issue:

Problem: systemd commit ff502445 is included in the RHEL 7.1/SLES 12 systemd. The new systemd will try to check the status of the BindsTo device; if the BindsTo device is inactive, systemd will fail the mount job and unmount the file system. Unfortunately, a device created with mknod will always be marked as inactive by systemd, and GPFS invokes mknod to create the block device under /dev, so it hits the unmount issue.

Fix: Udev/systemd reads device info from kernel sysfs, while a device created by mknod is not registered in the kernel; that is why systemd fails to read the device info and the device status stays inactive.
Under the new distros, a new tsctl setPseudoDisk command is implemented. It takes over the role of mknod and registers the pseudo device for each GPFS file system in kernel sysfs before mounting, to keep systemd happy.

------------------------------------
Kuei-Yu Wang-Knop
IBM Scalable I/O development
(845) 433-9333 T/L 293-9333, E-mail: kywang at us.ibm.com

From: Matt Weil
To: gpfsug main discussion list
Date: 11/30/2015 01:42 PM
Subject: [gpfsug-discuss] rhel 7.1 systemd is un mounting gpfs file systems PMR 70339, 122, 000
Sent by: gpfsug-discuss-bounces at spectrumscale.org

Hello all,

Not sure if this is a good place, but we are experiencing a strange issue.

It appears that systemd is un-mounting the file system immediately after it is mounted.

An strace of systemd shows that the device is not there. Systemd sees that the path has failed and unmounts the device. Our only workaround currently is to link /usr/bin/umount to true. Then the device stays mounted.

1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, 235), ...}) = 0
1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 ENOENT (No such file or directory)
1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No such file or directory)
1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19

# It appears that the major/minor numbers have been changed
[root at gennsd4 system]# ls -l /sys/dev/block/|grep 239
lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> ../../devices/virtual/block/dm-239
[root at gennsd4 system]# ls -l /dev/aggr3
brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3
[root at gennsd4 system]# ls /sys/dev/block/239:235
ls: cannot access /sys/dev/block/239:235: No such file or directory

[root at gennsd4 system]# rpm -qa | grep gpfs
gpfs.gpl-4.1.0-7.noarch
gpfs.gskit-8.0.50-32.x86_64
gpfs.msg.en_US-4.1.0-7.noarch
gpfs.docs-4.1.0-7.noarch
gpfs.base-4.1.0-7.x86_64
gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64
gpfs.ext-4.1.0-7.x86_64
[root at gennsd4 system]# rpm -qa | grep systemd
systemd-sysv-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-python-219-19.el7.x86_64

any help would be appreciated.

Thanks

Matt

____
This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From stijn.deweirdt at ugent.be Mon Nov 30 19:31:49 2015
From: stijn.deweirdt at ugent.be (Stijn De Weirdt)
Date: Mon, 30 Nov 2015 20:31:49 +0100
Subject: [gpfsug-discuss] HDFS protocol in 4.2
Message-ID: <565CA425.9070109@ugent.be>

hi all,

the gpfs 4.2.0 advanced administration guide has a section on the HDFS protocol.
while reading it, i'm a bit puzzled whether this has any advantage for a non-FPO site. we are still experimenting with the "regular" gpfs hadoop connector, so it would be nice to hear of any advantages (besides protocol transparency) over the hadoop connector. in particular, performance comes to mind ;)

the admin guide advises enabling local read, which seems understandable for FPO, but what does this mean for a non-FPO site? sending data over RPC is probably worse performance-wise compared to the gpfs hadoop binding.

also, are there any other advantages possible with proper name and data node services from the hdfs protocol? (like zero-copy shuffle on gpfs, something that didn't seem to exist with the connector during some tests we ran, which was a bit disappointing, it being a shared filesystem and all that)

many thanks,

stijn

From S.J.Thompson at bham.ac.uk Mon Nov 30 20:19:39 2015
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Mon, 30 Nov 2015 20:19:39 +0000
Subject: [gpfsug-discuss] Placement policies and copies
In-Reply-To: <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>
References: , <201511301758.tAUHwYn9018800@d01av01.pok.ibm.com>
Message-ID:

Hi Marc,

Thanks. With the migrate option, does it remove the second copy if already present? Or do you still need to do an mmrestripefs to reclaim the space?

Related: if the storage pool has multiple failure groups, will GPFS place the data into a single failure group, or will it spray the data over all NSD disks in all failure groups? I think I'll stick to using a pool with NSD disks in a single failure group, so I know where the files are, but it would be useful to know.

I assume that if the pool then goes offline, I won't lose my whole FS, just not have access to the non-replicated fileset?

Thanks

Simon

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Marc A Kaplan [makaplan at us.ibm.com]
Sent: 30 November 2015 17:58
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Placement policies and copies

From the Advanced Admin book:

File placement rules:

RULE ['RuleName'] SET POOL 'PoolName'
  [LIMIT (OccupancyPercentage)]
  [REPLICATE (DataReplication)]
  [FOR FILESET ('FilesetName'[,'FilesetName']...)]
  [WHERE SqlExpression]

So, use REPLICATE(1). That's for new files as they are being created.

You can use mmapplypolicy and the MIGRATE rule to change the replication factor of files that already exist.

--marc of GPFS.

From: "Simon Thompson (Research Computing - IT Services)"
To: gpfsug main discussion list
Date: 11/30/2015 11:27 AM
Subject: [gpfsug-discuss] Placement policies and copies
Sent by: gpfsug-discuss-bounces at spectrumscale.org

________________________________

Hi,

I have a file system which has the default number of data copies set to 2. I now have some data I'd like to have only 1 copy made of. I know that files and directories don't inherit 1 copy based on their parent.

Can I do this with a placement rule to change the number of copies to 1?

I don't really want to have to find the file afterwards and fix it up, as that requires an mmrestripefs to clear the second copy.

Or if I have a pool which only has NSD disks in a single failure group and use a placement policy for that, would that work? Or will GPFS forever warn me that due to fs changes I have data at risk?
Thanks

Simon

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
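One way to probe Simon's follow-up empirically is mmchattr, which can change the replication of an individual file; with -I yes the file is restriped immediately, so a handful of files can be adjusted without a whole-file-system mmrestripefs. The path below is hypothetical, and this is a sketch rather than a confirmation of what the MIGRATE rule itself does:

  # check the current metadata/data replication of the file
  mmlsattr -L /gpfs/gpfs01/projects/bigfile

  # drop the data replication to one copy and restripe the file right away
  mmchattr -r 1 -I yes /gpfs/gpfs01/projects/bigfile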
> > 1 stat("/dev/aggr3", {st_mode=S_IFBLK|0644, st_rdev=makedev(239, > 235), ...}) = 0 > 1 readlink("/sys/dev/block/239:235", 0x7ffdb657a750, 1024) = -1 > ENOENT (No such file or directory) > 1 stat("/sys/dev/block/239:235", 0x7ffdb657a2c0) = -1 ENOENT (No > such file or directory) > 1 socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 19 > > # It appears that the major min numbers have been changed > [root at gennsd4 system]# ls -l /sys/dev/block/|grep 239 > lrwxrwxrwx 1 root root 0 Nov 19 15:04 253:239 -> > ../../devices/virtual/block/dm-239 > [root at gennsd4 system]# ls -l /dev/aggr3 > brw-r--r-- 1 root root 239, 235 Nov 19 15:06 /dev/aggr3 > [root at gennsd4 system]# ls /sys/dev/block/239:235 > ls: cannot access /sys/dev/block/239:235: No such file or directory > > [root at gennsd4 system]# rpm -qa | grep gpfs > gpfs.gpl-4.1.0-7.noarch > gpfs.gskit-8.0.50-32.x86_64 > gpfs.msg.en_US-4.1.0-7.noarch > gpfs.docs-4.1.0-7.noarch > gpfs.base-4.1.0-7.x86_64 > gpfs.gplbin-3.10.0-229.14.1.el7.x86_64-4.1.0-7.x86_64 > gpfs.ext-4.1.0-7.x86_64 > [root at gennsd4 system]# rpm -qa | grep systemd > systemd-sysv-219-19.el7.x86_64 > systemd-libs-219-19.el7.x86_64 > systemd-219-19.el7.x86_64 > systemd-python-219-19.el7.x86_64 > > any help would be appreciated. > > Thanks > > Matt > > ____ > This email message is a private communication. The information > transmitted, including attachments, is intended only for the person or > entity to which it is addressed and may contain confidential, > privileged, and/or proprietary material. Any review, duplication, > retransmission, distribution, or other use of, or taking of any action > in reliance upon, this information by persons or entities other than > the intended recipient is unauthorized by the sender and is > prohibited. If you have received this message in error, please contact > the sender immediately by return email and delete the original message > from all computer systems. Thank you. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss ____ This email message is a private communication. The information transmitted, including attachments, is intended only for the person or entity to which it is addressed and may contain confidential, privileged, and/or proprietary material. Any review, duplication, retransmission, distribution, or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is unauthorized by the sender and is prohibited. If you have received this message in error, please contact the sender immediately by return email and delete the original message from all computer systems. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 105 bytes Desc: not available URL: