From UWEFALKE at de.ibm.com Mon Feb 1 08:39:05 2016 From: UWEFALKE at de.ibm.com (Uwe Falke) Date: Mon, 1 Feb 2016 09:39:05 +0100 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <20160129170401.0ec9f72e@uphs.upenn.edu> References: <20160129170401.0ec9f72e@uphs.upenn.edu> Message-ID: <201602010839.u118dC24013651@d06av06.portsmouth.uk.ibm.com> Hi Mark, AFAIK, there will not be any file system corruption if just data blocks are altered by activities outside GPFS. Mind: the metadata just tell where to find the data, not what will be there. If you have the data replicated, you could compare the two replicas. But mind: with some GPFS version, a replica compare tool was introduced which would fix differences by always assuming the first version it has read is the correct one -- which is wrong in half of the cases, I'd say. Only now (I think with Spectrum Scale 4.2), a version of that tool is available which allows the user to check the differences and possibly select the good version. If you have your data replicated and you may assume that the problem is affecting only disks in one failure group (FG), you could also set these disks down, add new disks to the FG and restripe the FS. Also, GNR works with end-to-end checksumming. This would not help you retrieve the original content but would allow you to identify altered file contents. Mit freundlichen Grüßen / Kind regards Dr. Uwe Falke IT Specialist High Performance Computing Services / Integrated Technology Services / Data Center Services ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Rathausstr. 7 09111 Chemnitz Phone: +49 371 6978 2165 Mobile: +49 175 575 2877 E-Mail: uwefalke at de.ibm.com ------------------------------------------------------------------------------------------------------------------------------------------- IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: Frank Hammer, Thorsten Moehring Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 17122 From orlando.richards at ed.ac.uk Mon Feb 1 09:25:44 2016 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Mon, 1 Feb 2016 09:25:44 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> Message-ID: <56AF2498.8010503@ed.ac.uk> For what it's worth - there's a patch for rsync which IBM provided a while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up on the gpfsug github here: https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync On 29/01/16 22:36, Sven Oehme wrote: > Doug, > > This won't really work if you make use of ACL's or use special GPFS > extended attributes or set quotas, filesets, etc > so unfortunate the answer is you need to use a combination of things and > there is work going on to make some of this simpler (e.g. for ACL's) , > but its a longer road to get there. so until then you need to think > about multiple aspects . > > 1. you need to get the data across and there are various ways to do this. > > a) AFM is the simplest of all as it not just takes care of ACL's and > extended attributes and alike as it understands the GPFS internals it > also is operating in parallel can prefetch data, etc so its a efficient > way to do this but as already pointed out doesn't transfer quota or > fileset informations.
> > b) you can either use rsync or any other pipe based copy program. the > downside is that they are typical single threaded and do a file by file > approach, means very metadata intensive on the source as well as target > side and cause a lot of ios on both side. > > c) you can use the policy engine to create a list of files to transfer > to at least address the single threaded scan part, then partition the > data and run multiple instances of cp or rsync in parallel, still > doesn't fix the ACL / EA issues, but the data gets there faster. > > 2. you need to get ACL/EA informations over too. there are several > command line options to dump the data and restore it, they kind of > suffer the same problem as data transfers , which is why using AFM is > the best way of doing this if you rely on ACL/EA informations. > > 3. transfer quota / fileset infos. there are several ways to do this, > but all require some level of scripting to do this. > > if you have TSM/HSM you could also transfer the data using SOBAR it's > described in the advanced admin book. > > sven > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > wrote: > > I have found that a tar pipe is much faster than rsync for this sort > of thing. The fastest of these is ?star? (schily tar). On average it > is about 2x-5x faster than rsync for doing this. After one pass with > this, you can use rsync for a subsequent or last pass synch.____ > > __ __ > > e.g.____ > > $ cd /export/gpfs1/foo____ > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > __ __ > > This also will not preserve filesets and quotas, though. You should > be able to automate that with a little bit of awk, perl, or whatnot.____ > > __ __ > > __ __ > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > ] *On Behalf Of > *Damir Krstic > *Sent:* Friday, January 29, 2016 2:32 PM > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1)____ > > __ __ > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > of storage. We are in planning stages of implementation. We would > like to migrate date from our existing GPFS installation (around > 300TB) to new solution. ____ > > __ __ > > We were planning of adding ESS to our existing GPFS cluster and > adding its disks and then deleting our old disks and having the data > migrated this way. However, our existing block size on our projects > filesystem is 1M and in order to extract as much performance out of > ESS we would like its filesystem created with larger block size. > Besides rsync do you have any suggestions of how to do this without > downtime and in fastest way possible? ____ > > __ __ > > I have looked at AFM but it does not seem to migrate quotas and > filesets so that may not be an optimal solution. ____ > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Research Services Manager Information Services IT Infrastructure Division Tel: 0131 650 4994 skype: orlando.richards The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
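To make Sven's option (c) above concrete, here is a minimal sketch of a policy-driven parallel copy, assuming GNU split and a stock rsync; the paths, list name, chunk size and degree of parallelism are only illustrative, and the exact record format of the generated list file can differ between releases, so check it before relying on it:

# 1. let the policy engine build the file list (far faster than a serial find)
cat > /tmp/mig.pol <<'EOF'
RULE 'ext' EXTERNAL LIST 'flist' EXEC ''
RULE 'all' LIST 'flist'
EOF
mmapplypolicy /gpfs/oldfs -P /tmp/mig.pol -f /tmp/mig -I defer
# with EXEC '' and -I defer the candidates are left in /tmp/mig.list.flist

# 2. keep only the path field (after the " -- " separator) and make it
#    relative to the old mount point
sed 's/.* -- //; s|^/gpfs/oldfs/||' /tmp/mig.list.flist > /tmp/mig.paths

# 3. partition the list and run one rsync per chunk in parallel
split -l 200000 /tmp/mig.paths /tmp/mig.chunk.
for c in /tmp/mig.chunk.*; do
    rsync -a --files-from="$c" /gpfs/oldfs/ /gpfs/newfs/ &
done
wait

Note that the LIST rule above only selects regular files (add DIRECTORIES_PLUS if directories should be listed as well), and a stock rsync -a, even with -A/-X, still does not carry GPFS NFSv4 ACLs; that is what the patched rsync above is for.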
From Paul.Tomlinson at awe.co.uk Mon Feb 1 10:06:15 2016 From: Paul.Tomlinson at awe.co.uk (Paul.Tomlinson at awe.co.uk) Date: Mon, 1 Feb 2016 10:06:15 +0000 Subject: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602011006.u11A6Mui009286@msw1.awe.co.uk> Hi Simon, We would like to send Mark Roberts (HPC) from AWE if any places are available. If there any places I'm sure will be willing to provide a list of topics that interest us. Best Regards Paul Tomlinson High Performance Computing Direct: 0118 985 8060 or 0118 982 4147 Mobile 07920783365 VPN: 88864 AWE, Aldermaston, Reading, RG7 4PR From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of "Spectrum scale UG Chair (Simon Thompson)"< Sent: 19 January 2016 17:14 To: gpfsug-discuss at spectrumscale.org Subject: EXTERNAL: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Dear All, We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016, 11am-3:30pm. The event will be held in central Oxford. The agenda promises to be hands on and give you the opportunity to speak face to face with the developers of Spectrum Scale. Guideline agenda: * TBC - please provide input on what you'd like to see! Lunch and refreshments will be provided. Please can you let me know by email if you are interested in attending by Wednesday 17th February. Thanks and we hope to see you there. Thanks to Andy at OERC for offering to host. Simon The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Mon Feb 1 10:18:51 2016 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Mon, 1 Feb 2016 10:18:51 +0000 Subject: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016 In-Reply-To: <201602011006.u11A6Mui009286@msw1.awe.co.uk> References: <201602011006.u11A6Mui009286@msw1.awe.co.uk>, <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602011018.u11AIuUt009534@d06av09.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Mon Feb 1 17:29:07 2016 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Mon, 1 Feb 2016 18:29:07 +0100 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Message-ID: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. 
Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Mon Feb 1 18:28:01 2016 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Mon, 1 Feb 2016 10:28:01 -0800 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <20160129170401.0ec9f72e@uphs.upenn.edu> References: <20160129170401.0ec9f72e@uphs.upenn.edu> Message-ID: <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> > What's on a 'dataOnly' GPFS 3.5.x NSD besides data and the NSD disk > header, if anything? That's it. In some cases there may also be a copy of the file system descriptor, but that doesn't really matter in your case. > I'm trying to understand some file corruption, and one potential > explanation would be if a (non-GPFS) server wrote to a LUN used as a > GPFS dataOnly NSD. > > We are not seeing any 'I/O' or filesystem errors, mmfsck (online) doesn't > detect any errors, and all NSDs are usable. However, some files seem to > have changes in content, with no changes in metadata (modify timestamp, > ownership), including files with the GPFS "immutable" ACL set. This is all consistent with the content on a dataOnly disk being overwritten outside of GPFS. > If an NSD was changed outside of GPFS control, would mmfsck detect > filesystem errors, or would the GPFS filesystem be consistent, even > though the content of some of the data blocks was altered? No. mmfsck can detect metadata corruption, but has no way to tell whether a data block has correct content or garbage. > Is there any metadata or checksum information maintained by GPFS, or any > means of doing a consistency check of the contents of files that would > correlate with blocks stored on a particular NSD? GPFS on top of traditional disks/RAID LUNs doesn't checksum data blocks, and thus can't tell whether a data block is good or bad. GPFS Native RAID has very strong on-disk data checksumming, OTOH. yuri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From liuk at us.ibm.com Mon Feb 1 18:26:43 2016 From: liuk at us.ibm.com (Kenneth Liu) Date: Mon, 1 Feb 2016 10:26:43 -0800 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 In-Reply-To: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> References: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> Message-ID: <201602011838.u11Ic39I004064@d03av02.boulder.ibm.com> And ISKLM to manage the encryption keys. Kenneth Liu Software Defined Infrastructure -- Spectrum Storage, Cleversafe & Platform Computing Sales Address: 4000 Executive Parkway San Ramon, CA 94583 Mobile #: (510) 584-7657 Email: liuk at us.ibm.com From: "Frank Kraemer" To: gpfsug-discuss at gpfsug.org Date: 02/01/2016 09:30 AM Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Sent by: gpfsug-discuss-bounces at spectrumscale.org IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:
From ewahl at osc.edu Mon Feb 1 18:39:12 2016 From: ewahl at osc.edu (Wahl, Edward) Date: Mon, 1 Feb 2016 18:39:12 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: <56AF2498.8010503@ed.ac.uk> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> , <56AF2498.8010503@ed.ac.uk> Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Along the same vein I've patched rsync to maintain source atimes in Linux for large transitions such as this. Along with the stadnard "patches" mod for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff Ed Wahl OSC ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [orlando.richards at ed.ac.uk] Sent: Monday, February 01, 2016 4:25 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) For what it's worth - there's a patch for rsync which IBM provided a while back that will copy NFSv4 ACLs (maybe other stuff?).
I put it up on the gpfsug github here: https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync On 29/01/16 22:36, Sven Oehme wrote: > Doug, > > This won't really work if you make use of ACL's or use special GPFS > extended attributes or set quotas, filesets, etc > so unfortunate the answer is you need to use a combination of things and > there is work going on to make some of this simpler (e.g. for ACL's) , > but its a longer road to get there. so until then you need to think > about multiple aspects . > > 1. you need to get the data across and there are various ways to do this. > > a) AFM is the simplest of all as it not just takes care of ACL's and > extended attributes and alike as it understands the GPFS internals it > also is operating in parallel can prefetch data, etc so its a efficient > way to do this but as already pointed out doesn't transfer quota or > fileset informations. > > b) you can either use rsync or any other pipe based copy program. the > downside is that they are typical single threaded and do a file by file > approach, means very metadata intensive on the source as well as target > side and cause a lot of ios on both side. > > c) you can use the policy engine to create a list of files to transfer > to at least address the single threaded scan part, then partition the > data and run multiple instances of cp or rsync in parallel, still > doesn't fix the ACL / EA issues, but the data gets there faster. > > 2. you need to get ACL/EA informations over too. there are several > command line options to dump the data and restore it, they kind of > suffer the same problem as data transfers , which is why using AFM is > the best way of doing this if you rely on ACL/EA informations. > > 3. transfer quota / fileset infos. there are several ways to do this, > but all require some level of scripting to do this. > > if you have TSM/HSM you could also transfer the data using SOBAR it's > described in the advanced admin book. > > sven > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > wrote: > > I have found that a tar pipe is much faster than rsync for this sort > of thing. The fastest of these is ?star? (schily tar). On average it > is about 2x-5x faster than rsync for doing this. After one pass with > this, you can use rsync for a subsequent or last pass synch.____ > > __ __ > > e.g.____ > > $ cd /export/gpfs1/foo____ > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > __ __ > > This also will not preserve filesets and quotas, though. You should > be able to automate that with a little bit of awk, perl, or whatnot.____ > > __ __ > > __ __ > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > ] *On Behalf Of > *Damir Krstic > *Sent:* Friday, January 29, 2016 2:32 PM > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1)____ > > __ __ > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > of storage. We are in planning stages of implementation. We would > like to migrate date from our existing GPFS installation (around > 300TB) to new solution. ____ > > __ __ > > We were planning of adding ESS to our existing GPFS cluster and > adding its disks and then deleting our old disks and having the data > migrated this way. However, our existing block size on our projects > filesystem is 1M and in order to extract as much performance out of > ESS we would like its filesystem created with larger block size. 
> Besides rsync do you have any suggestions of how to do this without > downtime and in fastest way possible? ____ > > __ __ > > I have looked at AFM but it does not seem to migrate quotas and > filesets so that may not be an optimal solution. ____ > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Research Services Manager Information Services IT Infrastructure Division Tel: 0131 650 4994 skype: orlando.richards The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Mon Feb 1 18:44:50 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 1 Feb 2016 13:44:50 -0500 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> References: <20160129170401.0ec9f72e@uphs.upenn.edu> <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> Message-ID: <201602011844.u11IirBd015334@d03av01.boulder.ibm.com> Just to add... Spectrum Scale is no different than most other file systems in this respect. It assumes the disk system and network systems will detect I/O errors, including data corruption. And it usually will ... but there are, as you've discovered, scenarios where it can not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Mon Feb 1 19:18:22 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 1 Feb 2016 19:18:22 +0000 Subject: [gpfsug-discuss] Question on FPO node - NSD recovery Message-ID: <427E3540-585D-4DD9-9E41-29C222548E03@nuance.com> When a node that?s part of an FPO file system (local disks) and the node is rebooted ? the NSDs come up as ?down? until I manually starts them. GPFS start on the node but the NSDs stay down. Is this the expected behavior or is there a config setting I missed somewhere? Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Tue Feb 2 08:23:43 2016 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Tue, 2 Feb 2016 09:23:43 +0100 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction Message-ID: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> by Nils Haustein, see at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5334 Abstract: This presentation gives a short overview about the IBM Spectrum Family and briefly introduces IBM Spectrum Protect? (Tivoli Storage Manager, TSM) and IBM Spectrum Scale? (General Parallel File System, GPFS) in more detail. Subsequently it presents a solution integrating these two components and outlines its advantages. It further discusses use cases and deployment options. Last but not least this presentation elaborates on the client values running multiple Spectrum Protect instance in a Spectrum Scale cluster and presents performance test results highlighting that this solution scales with the growing data protection demands. 
Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tomasz.Wolski at ts.fujitsu.com Wed Feb 3 08:10:32 2016 From: Tomasz.Wolski at ts.fujitsu.com (Tomasz.Wolski at ts.fujitsu.com) Date: Wed, 3 Feb 2016 08:10:32 +0000 Subject: [gpfsug-discuss] DMAPI multi-thread safe Message-ID: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> Hi Experts :) Could you please tell me if the DMAPI implementation for GPFS is multi-thread safe? Are there any limitation towards using multiple threads within a single DM application process? For example: DM events are processed by multiple threads, which call dm* functions for manipulating file attributes - will there be any problem when two threads try to access the same file at the same time? Is the libdmapi thread safe? Best regards, Tomasz Wolski -------------- next part -------------- An HTML attachment was scrubbed... URL: From stschmid at de.ibm.com Wed Feb 3 08:41:27 2016 From: stschmid at de.ibm.com (Stefan Schmidt) Date: Wed, 3 Feb 2016 09:41:27 +0100 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction In-Reply-To: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> References: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> Message-ID: <201602030841.u138fY2l007402@d06av06.portsmouth.uk.ibm.com> Hi all, I want to add that IBM Spectrum Scale Raid ( ESS/GNR) is missing in the table I think. I know it's now a HW solution but the GNR package I thought would be named IBM Spectrum Scale Raid. Mit freundlichen Gr??en / Kind regards Stefan Schmidt Scrum Master IBM Spectrum Scale GUI / Senior IT Architect /PMP - Dept. M069 / IBM Spectrum Scale Software Development IBM Systems Group IBM Deutschland Phone: +49-6131-84-3465 IBM Deutschland Mobile: +49-170-6346601 Hechtsheimer Str. 2 E-Mail: stschmid at de.ibm.com 55131 Mainz Germany IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Frank Kraemer/Germany/IBM at IBMDE To: gpfsug-discuss at gpfsug.org Date: 02.02.2016 09:24 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction Sent by: gpfsug-discuss-bounces at spectrumscale.org by Nils Haustein, see at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5334 Abstract: This presentation gives a short overview about the IBM Spectrum Family and briefly introduces IBM Spectrum Protect? (Tivoli Storage Manager, TSM) and IBM Spectrum Scale? (General Parallel File System, GPFS) in more detail. Subsequently it presents a solution integrating these two components and outlines its advantages. It further discusses use cases and deployment options. Last but not least this presentation elaborates on the client values running multiple Spectrum Protect instance in a Spectrum Scale cluster and presents performance test results highlighting that this solution scales with the growing data protection demands. Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 
2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert at strubi.ox.ac.uk Wed Feb 3 16:53:59 2016 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Wed, 3 Feb 2016 16:53:59 +0000 (GMT) Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602031653.060161@mail.strubi.ox.ac.uk> Hi Simon, I'll certainly be interested in wandering into town to attend this... please register me or whatever has to be done. Regards, Robert -- Dr. Robert Esnouf, University Research Lecturer, Head of Research Computing Core, NDM Research Computing Strategy Officer Room 10/028, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Email: robert at strubi.ox.ac.uk / robert at well.ox.ac.uk Tel: (+44) - 1865 - 287783 -------------- next part -------------- An embedded message was scrubbed... From: "Spectrum scale UG Chair (Simon Thompson)" Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Date: Tue, 19 Jan 2016 17:13:42 +0000 Size: 5334 URL: From wsawdon at us.ibm.com Wed Feb 3 18:22:48 2016 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Wed, 3 Feb 2016 10:22:48 -0800 Subject: [gpfsug-discuss] DMAPI multi-thread safe In-Reply-To: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> References: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> Message-ID: <201602031822.u13IMv3c017365@d03av05.boulder.ibm.com> > From: "Tomasz.Wolski at ts.fujitsu.com" > > Could you please tell me if the DMAPI implementation for GPFS is > multi-thread safe? Are there any limitation towards using multiple > threads within a single DM application process? > For example: DM events are processed by multiple threads, which call > dm* functions for manipulating file attributes ? will there be any > problem when two threads try to access the same file at the same time? > > Is the libdmapi thread safe? > With the possible exception of dm_init_service it should be thread safe. Dmapi does offer access rights to allow or prevent concurrent access to a file. If you are not using the access rights, internally Spectrum Scale will serialize the dmapi calls like it would serialize for posix -- some calls will proceed in parallel (e.g. reads, non-overlapping writes) and some will be serialized (e.g. EA updates). -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From damir.krstic at gmail.com Thu Feb 4 21:15:56 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Thu, 04 Feb 2016 21:15:56 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Thanks all for great suggestions. We will most likely end up using either AFM or some mechanism of file copy (tar/rsync etc.). 
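One part that neither AFM nor rsync will carry over for us is the fileset and quota definitions, so those will have to be recreated by hand on the new file system. A rough sketch of that step, purely illustrative (file system and fileset names are invented, and the mmsetquota form shown is the newer Device:Fileset syntax, so check it against the release running on the ESS):

# recreate each independent fileset on the new file system
mmcrfileset essfs projects --inode-space new
mmlinkfileset essfs projects -J /gpfs/essfs/projects

# read the old per-fileset limits, then apply matching limits on the new side
mmrepquota -j oldfs
mmsetquota essfs:projects --block 10T:12T --files 10M:12M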
On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > Along the same vein I've patched rsync to maintain source atimes in Linux > for large transitions such as this. Along with the stadnard "patches" mod > for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. > I've not yet ported it to 3.1.x > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > Ed Wahl > OSC > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [ > orlando.richards at ed.ac.uk] > Sent: Monday, February 01, 2016 4:25 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance > (GPFS4.1) > > For what it's worth - there's a patch for rsync which IBM provided a > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up > on the gpfsug github here: > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > On 29/01/16 22:36, Sven Oehme wrote: > > Doug, > > > > This won't really work if you make use of ACL's or use special GPFS > > extended attributes or set quotas, filesets, etc > > so unfortunate the answer is you need to use a combination of things and > > there is work going on to make some of this simpler (e.g. for ACL's) , > > but its a longer road to get there. so until then you need to think > > about multiple aspects . > > > > 1. you need to get the data across and there are various ways to do this. > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > extended attributes and alike as it understands the GPFS internals it > > also is operating in parallel can prefetch data, etc so its a efficient > > way to do this but as already pointed out doesn't transfer quota or > > fileset informations. > > > > b) you can either use rsync or any other pipe based copy program. the > > downside is that they are typical single threaded and do a file by file > > approach, means very metadata intensive on the source as well as target > > side and cause a lot of ios on both side. > > > > c) you can use the policy engine to create a list of files to transfer > > to at least address the single threaded scan part, then partition the > > data and run multiple instances of cp or rsync in parallel, still > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > 2. you need to get ACL/EA informations over too. there are several > > command line options to dump the data and restore it, they kind of > > suffer the same problem as data transfers , which is why using AFM is > > the best way of doing this if you rely on ACL/EA informations. > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > but all require some level of scripting to do this. > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > described in the advanced admin book. > > > > sven > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > wrote: > > > > I have found that a tar pipe is much faster than rsync for this sort > > of thing. The fastest of these is ?star? (schily tar). On average it > > is about 2x-5x faster than rsync for doing this. 
After one pass with > > this, you can use rsync for a subsequent or last pass synch.____ > > > > __ __ > > > > e.g.____ > > > > $ cd /export/gpfs1/foo____ > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > __ __ > > > > This also will not preserve filesets and quotas, though. You should > > be able to automate that with a little bit of awk, perl, or > whatnot.____ > > > > __ __ > > > > __ __ > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > ] *On Behalf Of > > *Damir Krstic > > *Sent:* Friday, January 29, 2016 2:32 PM > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1)____ > > > > __ __ > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > of storage. We are in planning stages of implementation. We would > > like to migrate date from our existing GPFS installation (around > > 300TB) to new solution. ____ > > > > __ __ > > > > We were planning of adding ESS to our existing GPFS cluster and > > adding its disks and then deleting our old disks and having the data > > migrated this way. However, our existing block size on our projects > > filesystem is 1M and in order to extract as much performance out of > > ESS we would like its filesystem created with larger block size. > > Besides rsync do you have any suggestions of how to do this without > > downtime and in fastest way possible? ____ > > > > __ __ > > > > I have looked at AFM but it does not seem to migrate quotas and > > filesets so that may not be an optimal solution. ____ > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > -- > Dr Orlando Richards > Research Services Manager > Information Services > IT Infrastructure Division > Tel: 0131 650 4994 > skype: orlando.richards > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Fri Feb 5 11:25:38 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 5 Feb 2016 11:25:38 +0000 Subject: [gpfsug-discuss] BM Spectrum Scale transparent cloud tiering In-Reply-To: <201601291718.u0THIPLr009799@d01av03.pok.ibm.com> References: <8505A552-5410-4F70-AA77-3DE5EF54BE09@nuance.com> <201601291718.u0THIPLr009799@d01av03.pok.ibm.com> Message-ID: Just to note if anyone is interested, the open beta is now "open" for the transparent cloud tiering, see: http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW Simon From: > on behalf of Marc A Kaplan > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Friday, 29 January 2016 at 17:18 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] BM Spectrum Scale transparent cloud tiering Since this official IBM website (pre)announces transparent cloud tiering ... http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW And since Oesterlin mentioned Cluster Export Service (CES), please allow me to (hopefully!) clarify: Transparent Cloud Tiering uses some new interfaces and functions within Spectrum Scale, it is not "just a rehash" of the long existing DMAPI HSM support. Transparent Cloud Tiering allows one to dynamically migrate Spectrum Scale files to and from foreign file and/or object stores. on the other hand ... Cluster Export Service, allows one to access Spectrum Scale files with foreign protocols, such as NFS, SMB, and Object(OpenStack) I suppose one could deploy both, using Spectrum Scale with Cluster Export Service for local, fast, immediate access to "hot" file and objects and some foreign object service, such as Amazon S3 or Cleversafe for long term "cold" storage. Oh, and just to add to the mix, in case you haven't heard yet, Cleversafe is a fairly recent IBM acquisition, http://www-03.ibm.com/press/us/en/pressrelease/47776.wss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Feb 8 10:07:29 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 8 Feb 2016 10:07:29 +0000 Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: Hi All, Just to note that we are NOW FULL for the next meet the devs in Feb. Simon From: > on behalf of Simon Thompson > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 19 January 2016 at 17:13 To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Dear All, We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016, 11am-3:30pm. The event will be held in central Oxford. The agenda promises to be hands on and give you the opportunity to speak face to face with the developers of Spectrum Scale. Guideline agenda: * TBC - please provide input on what you'd like to see! Lunch and refreshments will be provided. Please can you let me know by email if you are interested in attending by Wednesday 17th February. Thanks and we hope to see you there. Thanks to Andy at OERC for offering to host. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Tue Feb 9 14:42:07 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 9 Feb 2016 14:42:07 +0000 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config Message-ID: Any ideas on how to get out of this? [root at gpfs01 ~]# mmlsnodeclass onegig Node Class Name Members --------------------- ----------------------------------------------------------- one gig [root at gpfs01 ~]# mmchconfig maxMBpS=DEFAULT -N onegig mmchconfig: No nodes were found that matched the input specification. mmchconfig: Command failed. Examine previous error messages to determine cause. [root at gpfs01 ~]# mmdelnodeclass onegig mmdelnodeclass: Node class "onegig" still appears in GPFS configuration node override section maxMBpS 120 [onegig] mmdelnodeclass: Command failed. Examine previous error messages to determine cause. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Feb 9 15:04:38 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 9 Feb 2016 10:04:38 -0500 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: References: Message-ID: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> Yeah. Try first changing the configuration so it does not depend on onegig. Then secondly you may want to delete the nodeclass. Any ideas on how to get out of this? [root at gpfs01 ~]# mmlsnodeclass onegig Node Class Name Members --------------------- ----------------------------------------------------------- one gig [root at gpfs01 ~]# mmchconfig maxMBpS=DEFAULT -N onegig mmchconfig: No nodes were found that matched the input specification. mmchconfig: Command failed. Examine previous error messages to determine cause. [root at gpfs01 ~]# mmdelnodeclass onegig mmdelnodeclass: Node class "onegig" still appears in GPFS configuration node override section maxMBpS 120 [onegig] mmdelnodeclass: Command failed. Examine previous error messages to determine cause. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Tue Feb 9 15:07:30 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 9 Feb 2016 15:07:30 +0000 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> References: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> Message-ID: <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> Well, that would have been my guess as well. But I need to associate that value with ?something?? I?ve been trying a sequence of commands, no joy. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Tuesday, February 9, 2016 at 9:04 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Removing empty "nodeclass" from config Yeah. Try first changing the configuration so it does not depend on onegig. Then secondly you may want to delete the nodeclass. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue Feb 9 15:34:17 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 9 Feb 2016 10:34:17 -0500 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> References: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> Message-ID: <201602091534.u19FYPCE020191@d01av02.pok.ibm.com> AH... I see, instead of `maxMBpS=default -N all` try a specific number. And then revert to "default" with a second command. Seems there are some bugs or peculiarities in this code. # mmchconfig maxMBpS=99999 -N all # mmchconfig maxMBpS=default -N all I tried some other stuff. If you're curious play around and do mmlsconfig after each mmchconfig and see how the settings "evolve"!! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From pinto at scinet.utoronto.ca Wed Feb 10 19:26:56 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Wed, 10 Feb 2016 14:26:56 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local node identity. Message-ID: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Dear group I'm trying to deal with this in the most elegant way possible: Once upon the time there were nodeA and nodeB in the cluster, on a 'onDemand manual HA' fashion. * nodeA died, so I migrated the whole OS/software/application stack from backup over to 'nodeB', IP/hostname, etc, hence 'old nodeB' effectively became the new nodeA. * Getting the new nodeA to rejoin the cluster was already a pain, but through a mmdelnode and mmaddnode operation we eventually got it to mount gpfs. Well ... * Old nodeA is now fixed and back on the network, and I'd like to re-purpose it as the new standby nodeB (IP and hostname already applied). As the subject say, I'm now facing node identity issues. From the FSmgr I already tried to del/add nodeB, even nodeA, etc, however GPFS seems to keep some information cached somewhere in the cluster. * At this point I even turned old nodeA into a nodeC with a different IP, etc, but that doesn't help either. I can't even start gpfs on nodeC. Question: what is the appropriate process to clean this mess from the GPFS perspective? I can't touch the new nodeA. It's highly committed in production already. Thanks Jaime ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From pinto at scinet.utoronto.ca Wed Feb 10 20:24:21 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Wed, 10 Feb 2016 15:24:21 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local node identity. In-Reply-To: References: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Message-ID: <20160210152421.63075r24zqb156d1@support.scinet.utoronto.ca> Quoting "Buterbaugh, Kevin L" : > Hi Jaime, > > Have you tried wiping out /var/mmfs/gen/* and /var/mmfs/etc/* on the > old nodeA? > > Kevin That did the trick. Thanks Kevin and all that responded privately. 
Jaime > >> On Feb 10, 2016, at 1:26 PM, Jaime Pinto wrote: >> >> Dear group >> >> I'm trying to deal with this in the most elegant way possible: >> >> Once upon the time there were nodeA and nodeB in the cluster, on a >> 'onDemand manual HA' fashion. >> >> * nodeA died, so I migrated the whole OS/software/application stack >> from backup over to 'nodeB', IP/hostname, etc, hence 'old nodeB' >> effectively became the new nodeA. >> >> * Getting the new nodeA to rejoin the cluster was already a pain, >> but through a mmdelnode and mmaddnode operation we eventually got >> it to mount gpfs. >> >> Well ... >> >> * Old nodeA is now fixed and back on the network, and I'd like to >> re-purpose it as the new standby nodeB (IP and hostname already >> applied). As the subject say, I'm now facing node identity issues. >> From the FSmgr I already tried to del/add nodeB, even nodeA, etc, >> however GPFS seems to keep some information cached somewhere in the >> cluster. >> >> * At this point I even turned old nodeA into a nodeC with a >> different IP, etc, but that doesn't help either. I can't even start >> gpfs on nodeC. >> >> Question: what is the appropriate process to clean this mess from >> the GPFS perspective? >> >> I can't touch the new nodeA. It's highly committed in production already. >> >> Thanks >> Jaime >> >> >> >> >> >> >> ************************************ >> --- >> Jaime Pinto >> SciNet HPC Consortium - Compute/Calcul Canada >> www.scinet.utoronto.ca - www.computecanada.org >> University of Toronto >> 256 McCaul Street, Room 235 >> Toronto, ON, M5T1W5 >> P: 416-978-2755 >> C: 416-505-1477 >> >> ---------------------------------------------------------------- >> This message was sent using IMP at SciNet Consortium, University of Toronto. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From makaplan at us.ibm.com Wed Feb 10 20:34:58 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 10 Feb 2016 15:34:58 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local nodeidentity. In-Reply-To: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> References: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Message-ID: <201602102035.u1AKZ4v9030063@d01av01.pok.ibm.com> For starters, show us the output of mmlscluster mmgetstate -a cat /var/mmfs/gen/mmsdrfs Depending on how those look, this might be simple or not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Feb 11 14:42:40 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 11 Feb 2016 14:42:40 +0000 Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? Message-ID: <3FA3ABD2-0B93-4A26-A841-84AE4A8505CA@nuance.com> I?ll be at IBM Interconnect the week of 2/21. Anyone else going? Is there interest in a meet-up or getting together informally? 
If anyone is interested, drop me a note and I?ll try and pull something together - robert.oesterlin at nuance.com Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Fri Feb 12 14:53:22 2016 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Fri, 12 Feb 2016 15:53:22 +0100 Subject: [gpfsug-discuss] Upcoming Spectrum Scale education events and user group meetings in Europe Message-ID: <201602121453.u1CErUAS012453@d06av07.portsmouth.uk.ibm.com> Here is an overview of upcoming Spectrum Scale education events and user group meetings in Europe. I plan to be at most of the events. Looking forward to meet you there! https://ibm.biz/BdHtBN -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From service at metamodul.com Sun Feb 14 13:59:36 2016 From: service at metamodul.com (MetaService) Date: Sun, 14 Feb 2016 14:59:36 +0100 Subject: [gpfsug-discuss] Migration from SONAS to Spectrum Scale - Limit of 200 TB for ACE migrations Message-ID: <1455458376.4507.92.camel@pluto> Hi, The Playbook: SONAS / Unified Migration to IBM Spectrum Scale - https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/fa32927c-e904-49cc-a4cc-870bcc8e307c/page/2ff0c6d7-a854-4d64-a98c-0dbfc611ffc6/attachment/a57f1d1e-c68e-44b0-bcde-20ce6b0aebd6/media/Migration_Playbook_PoC_SonasToSpectrumScale.pdf - mentioned that only ACE migration for SONAS FS up to 200TB are supported/recommended. Is this a limitation for the whole SONAS FS or for each fileset ? tia Hajo -- MetaModul GmbH Suederstr. 12 DE-25336 Elmshorn Mobil: +49 177 4393994 Geschaeftsfuehrer: Hans-Joachim Ehlers From douglasof at us.ibm.com Mon Feb 15 15:26:08 2016 From: douglasof at us.ibm.com (Douglas O'flaherty) Date: Mon, 15 Feb 2016 10:26:08 -0500 Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? In-Reply-To: References: Message-ID: <201602151530.u1FFU4IG026030@d01av03.pok.ibm.com> Greetings: I like Bob's suggestion of an informal meet-up next week. How does Spectrum Scale beers sound? Tuesday right near the Expo should work. We'll scope out a place this week. We will have several places Scale is covered, including some references in different keynotes. There will be a demonstration of transparent cloud tiering - the Open Beta currently running - at the Interconnect Expo. There is summary of the several events in EU coming up. I'm looking for topics you want covered at the ISC User Group meeting. https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Upcoming_Spectrum_Scale_education_events_and_user_group_meetings_in_Europe?lang=en_us The next US user group is still to be scheduled, so send in your ideas. doug ----- Message from "Oesterlin, Robert" on Thu, 11 Feb 2016 14:42:40 +0000 ----- To: gpfsug main discussion list Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? I?ll be at IBM Interconnect the week of 2/21. Anyone else going? Is there interest in a meet-up or getting together informally? 
If anyone is interested, drop me a note and I?ll try and pull something together - robert.oesterlin at nuance.com Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From damir.krstic at gmail.com Wed Feb 17 21:07:33 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Wed, 17 Feb 2016 21:07:33 +0000 Subject: [gpfsug-discuss] question about remote cluster mounting Message-ID: In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Feb 17 21:40:05 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 17 Feb 2016 21:40:05 +0000 Subject: [gpfsug-discuss] question about remote cluster mounting In-Reply-To: References: Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05FB36F6@CHI-EXCHANGEW1.w2k.jumptrading.com> Yes, you may (and should) reuse the auth key from the compute cluster, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Damir Krstic Sent: Wednesday, February 17, 2016 3:08 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] question about remote cluster mounting In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Wed Feb 17 22:54:36 2016 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Wed, 17 Feb 2016 14:54:36 -0800 Subject: [gpfsug-discuss] question about remote cluster mounting In-Reply-To: References: Message-ID: <201602172255.u1HMtIDp000702@d03av05.boulder.ibm.com> The authentication scheme used for GPFS multi-clustering is similar to what other frameworks (e.g. ssh) do for private/public auth: each cluster has a private key and a public key. The key pair only needs to be generated once (unless you want to periodically regenerate it for higher security; this is different from enabling authentication for the very first time and can be done without downtime). The public key can then be exchanged with multiple remote clusters. yuri From: Damir Krstic To: gpfsug main discussion list , Date: 02/17/2016 01:08 PM Subject: [gpfsug-discuss] question about remote cluster mounting Sent by: gpfsug-discuss-bounces at spectrumscale.org In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From damir.krstic at gmail.com Mon Feb 22 13:12:14 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 22 Feb 2016 13:12:14 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Sorry to revisit this question - AFM seems to be the best way to do this. I was wondering if anyone has done AFM migration. 
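(Closing the loop on the remote cluster mounting thread above: the sequence Bryan and Yuri describe looks roughly like the sketch below. The cluster, node and file system names -- compute.cl, storage3.cl, essfs, nsd1/nsd2 -- are invented for illustration, so check the mmauth and mmremotecluster man pages for your release before using any of it.

  # on the new (third) storage cluster only, if it has never had a key pair:
  mmauth genkey new

  # exchange the public keys (/var/mmfs/ssl/id_rsa.pub) out of band, then
  # on the new storage cluster, authorize the compute cluster's existing key:
  mmauth add compute.cl -k /tmp/compute_id_rsa.pub
  mmauth grant compute.cl -f /dev/essfs

  # on the compute cluster -- reusing its current key, so no new mmauth genkey
  # and no GPFS shutdown is needed there:
  mmremotecluster add storage3.cl -n nsd1,nsd2 -k /tmp/storage3_id_rsa.pub
  mmremotefs add essfs -f /dev/essfs -C storage3.cl -T /projects2
  mmmount essfs -a

The downtime Damir remembers comes from enabling authentication on a cluster for the first time, not from handing an already-generated public key to an additional remote cluster.)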
I am looking at this wiki page for instructions: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating%20Data%20Using%20AFM and I am little confused by step 3 "cut over users" <-- does this mean, unmount existing filesystem and point users to new filesystem? The reason we were looking at AFM is to not have downtime - make the transition as seamless as possible to the end user. Not sure what, then, AFM buys us if we still have to take "downtime" in order to cut users over to the new system. Thanks, Damir On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic wrote: > Thanks all for great suggestions. We will most likely end up using either > AFM or some mechanism of file copy (tar/rsync etc.). > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > >> Along the same vein I've patched rsync to maintain source atimes in Linux >> for large transitions such as this. Along with the stadnard "patches" mod >> for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. >> I've not yet ported it to 3.1.x >> https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff >> >> Ed Wahl >> OSC >> >> ________________________________________ >> From: gpfsug-discuss-bounces at spectrumscale.org [ >> gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [ >> orlando.richards at ed.ac.uk] >> Sent: Monday, February 01, 2016 4:25 AM >> To: gpfsug-discuss at spectrumscale.org >> Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS >> appliance (GPFS4.1) >> >> For what it's worth - there's a patch for rsync which IBM provided a >> while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up >> on the gpfsug github here: >> >> https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync >> >> >> >> On 29/01/16 22:36, Sven Oehme wrote: >> > Doug, >> > >> > This won't really work if you make use of ACL's or use special GPFS >> > extended attributes or set quotas, filesets, etc >> > so unfortunate the answer is you need to use a combination of things and >> > there is work going on to make some of this simpler (e.g. for ACL's) , >> > but its a longer road to get there. so until then you need to think >> > about multiple aspects . >> > >> > 1. you need to get the data across and there are various ways to do >> this. >> > >> > a) AFM is the simplest of all as it not just takes care of ACL's and >> > extended attributes and alike as it understands the GPFS internals it >> > also is operating in parallel can prefetch data, etc so its a efficient >> > way to do this but as already pointed out doesn't transfer quota or >> > fileset informations. >> > >> > b) you can either use rsync or any other pipe based copy program. the >> > downside is that they are typical single threaded and do a file by file >> > approach, means very metadata intensive on the source as well as target >> > side and cause a lot of ios on both side. >> > >> > c) you can use the policy engine to create a list of files to transfer >> > to at least address the single threaded scan part, then partition the >> > data and run multiple instances of cp or rsync in parallel, still >> > doesn't fix the ACL / EA issues, but the data gets there faster. >> > >> > 2. you need to get ACL/EA informations over too. 
there are several >> > command line options to dump the data and restore it, they kind of >> > suffer the same problem as data transfers , which is why using AFM is >> > the best way of doing this if you rely on ACL/EA informations. >> > >> > 3. transfer quota / fileset infos. there are several ways to do this, >> > but all require some level of scripting to do this. >> > >> > if you have TSM/HSM you could also transfer the data using SOBAR it's >> > described in the advanced admin book. >> > >> > sven >> > >> > >> > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug >> > > > > wrote: >> > >> > I have found that a tar pipe is much faster than rsync for this sort >> > of thing. The fastest of these is ?star? (schily tar). On average it >> > is about 2x-5x faster than rsync for doing this. After one pass with >> > this, you can use rsync for a subsequent or last pass synch.____ >> > >> > __ __ >> > >> > e.g.____ >> > >> > $ cd /export/gpfs1/foo____ >> > >> > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ >> > >> > __ __ >> > >> > This also will not preserve filesets and quotas, though. You should >> > be able to automate that with a little bit of awk, perl, or >> whatnot.____ >> > >> > __ __ >> > >> > __ __ >> > >> > *From:*gpfsug-discuss-bounces at spectrumscale.org >> > >> > [mailto:gpfsug-discuss-bounces at spectrumscale.org >> > ] *On Behalf Of >> > *Damir Krstic >> > *Sent:* Friday, January 29, 2016 2:32 PM >> > *To:* gpfsug main discussion list >> > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS >> > appliance (GPFS4.1)____ >> > >> > __ __ >> > >> > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT >> > of storage. We are in planning stages of implementation. We would >> > like to migrate date from our existing GPFS installation (around >> > 300TB) to new solution. ____ >> > >> > __ __ >> > >> > We were planning of adding ESS to our existing GPFS cluster and >> > adding its disks and then deleting our old disks and having the data >> > migrated this way. However, our existing block size on our projects >> > filesystem is 1M and in order to extract as much performance out of >> > ESS we would like its filesystem created with larger block size. >> > Besides rsync do you have any suggestions of how to do this without >> > downtime and in fastest way possible? ____ >> > >> > __ __ >> > >> > I have looked at AFM but it does not seem to migrate quotas and >> > filesets so that may not be an optimal solution. ____ >> > >> > >> > _______________________________________________ >> > gpfsug-discuss mailing list >> > gpfsug-discuss at spectrumscale.org >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > >> > >> > >> > >> > _______________________________________________ >> > gpfsug-discuss mailing list >> > gpfsug-discuss at spectrumscale.org >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > >> >> -- >> -- >> Dr Orlando Richards >> Research Services Manager >> Information Services >> IT Infrastructure Division >> Tel: 0131 650 4994 >> skype: orlando.richards >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. 
>> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Mon Feb 22 13:39:16 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Mon, 22 Feb 2016 15:39:16 +0200 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com><56AF2498.8010503@ed.ac.uk><9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> Hi AFM - Active File Management (AFM) is an asynchronous cross cluster utility It means u create new GPFS cluster - migrate the data without downtime , and when u r ready - u do last sync and cut-over. Hope this help. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM: > From: Damir Krstic > To: gpfsug main discussion list > Date: 02/22/2016 03:12 PM > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1) > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > Sorry to revisit this question - AFM seems to be the best way to do > this. I was wondering if anyone has done AFM migration. I am looking > at this wiki page for instructions: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/ > wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating% > 20Data%20Using%20AFM > and I am little confused by step 3 "cut over users" <-- does this > mean, unmount existing filesystem and point users to new filesystem? > > The reason we were looking at AFM is to not have downtime - make the > transition as seamless as possible to the end user. Not sure what, > then, AFM buys us if we still have to take "downtime" in order to > cut users over to the new system. > > Thanks, > Damir > > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic wrote: > Thanks all for great suggestions. We will most likely end up using > either AFM or some mechanism of file copy (tar/rsync etc.). > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > Along the same vein I've patched rsync to maintain source atimes in > Linux for large transitions such as this. Along with the stadnard > "patches" mod for destination atimes it is quite useful. Works in > 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > Ed Wahl > OSC > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss- > bounces at spectrumscale.org] on behalf of Orlando Richards [ > orlando.richards at ed.ac.uk] > Sent: Monday, February 01, 2016 4:25 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1) > > For what it's worth - there's a patch for rsync which IBM provided a > while back that will copy NFSv4 ACLs (maybe other stuff?). 
I put it up > on the gpfsug github here: > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > On 29/01/16 22:36, Sven Oehme wrote: > > Doug, > > > > This won't really work if you make use of ACL's or use special GPFS > > extended attributes or set quotas, filesets, etc > > so unfortunate the answer is you need to use a combination of things and > > there is work going on to make some of this simpler (e.g. for ACL's) , > > but its a longer road to get there. so until then you need to think > > about multiple aspects . > > > > 1. you need to get the data across and there are various ways to do this. > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > extended attributes and alike as it understands the GPFS internals it > > also is operating in parallel can prefetch data, etc so its a efficient > > way to do this but as already pointed out doesn't transfer quota or > > fileset informations. > > > > b) you can either use rsync or any other pipe based copy program. the > > downside is that they are typical single threaded and do a file by file > > approach, means very metadata intensive on the source as well as target > > side and cause a lot of ios on both side. > > > > c) you can use the policy engine to create a list of files to transfer > > to at least address the single threaded scan part, then partition the > > data and run multiple instances of cp or rsync in parallel, still > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > 2. you need to get ACL/EA informations over too. there are several > > command line options to dump the data and restore it, they kind of > > suffer the same problem as data transfers , which is why using AFM is > > the best way of doing this if you rely on ACL/EA informations. > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > but all require some level of scripting to do this. > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > described in the advanced admin book. > > > > sven > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > wrote: > > > > I have found that a tar pipe is much faster than rsync for this sort > > of thing. The fastest of these is ?star? (schily tar). On average it > > is about 2x-5x faster than rsync for doing this. After one pass with > > this, you can use rsync for a subsequent or last pass synch.____ > > > > __ __ > > > > e.g.____ > > > > $ cd /export/gpfs1/foo____ > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > __ __ > > > > This also will not preserve filesets and quotas, though. You should > > be able to automate that with a little bit of awk, perl, or whatnot.____ > > > > __ __ > > > > __ __ > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > ] *On Behalf Of > > *Damir Krstic > > *Sent:* Friday, January 29, 2016 2:32 PM > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1)____ > > > > __ __ > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > of storage. We are in planning stages of implementation. We would > > like to migrate date from our existing GPFS installation (around > > 300TB) to new solution. ____ > > > > __ __ > > > > We were planning of adding ESS to our existing GPFS cluster and > > adding its disks and then deleting our old disks and having the data > > migrated this way. 
However, our existing block size on our projects > > filesystem is 1M and in order to extract as much performance out of > > ESS we would like its filesystem created with larger block size. > > Besides rsync do you have any suggestions of how to do this without > > downtime and in fastest way possible? ____ > > > > __ __ > > > > I have looked at AFM but it does not seem to migrate quotas and > > filesets so that may not be an optimal solution. ____ > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > -- > Dr Orlando Richards > Research Services Manager > Information Services > IT Infrastructure Division > Tel: 0131 650 4994 > skype: orlando.richards > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From damir.krstic at gmail.com Mon Feb 22 16:11:31 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 22 Feb 2016 16:11:31 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1) In-Reply-To: <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> Message-ID: Thanks for the reply - but that explanation does not mean no downtime without elaborating on "cut over." I can do the sync via rsync or tar today but eventually I will have to cut over to the new system. Is this the case with AFM as well - once everything is synced over - cutting over means users will have to "cut over" by: 1. either mounting new AFM-synced system on all compute nodes with same mount as the old system (which means downtime to unmount the existing filesystem and mounting new filesystem) or 2. end-user training i.e. starting using new filesystem, move your own files you need because eventually we will shutdown the old filesystem. If, then, it's true that AFM requires some sort of cut over (either by disconnecting the old system and mounting new system as the old mount point, or by instruction to users to start using new filesystem at once) I am not sure that AFM gets me anything more than rsync or tar when it comes to taking a downtime (cutting over) for the end user. 
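(For what it is worth, the practical difference from rsync is that with AFM the "cut over" shrinks to the remount itself: the new ESS file system is the cache and the old file system is the home, so anything not yet copied at cut-over time is still fetched transparently on first access rather than being missing. A rough sketch of the bulk-copy step, assuming made-up names essfs/projects and a file list produced by mmapplypolicy or find -- exact prefetch options differ between 4.1 and 4.2, so treat this as an outline, not a recipe:

  # on the new (cache) cluster, warm the cache ahead of the cut-over
  mmafmctl essfs prefetch -j projects --list-file /tmp/projects.list

  # watch the queue drain; when it is close to empty, the final switch is
  # just pointing the compute nodes at the new mount
  mmafmctl essfs getstate -j projects
)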
Thanks, Damir On Mon, Feb 22, 2016 at 7:39 AM Yaron Daniel wrote: > Hi > > AFM - Active File Management (AFM) is an asynchronous cross cluster > utility > > It means u create new GPFS cluster - migrate the data without downtime , > and when u r ready - u do last sync and cut-over. > > Hope this help. > > > > Regards > > > > ------------------------------ > > > > *Yaron Daniel* 94 Em Ha'Moshavot Rd > *Server, **Storage and Data Services* > *- > Team Leader* Petach Tiqva, 49527 > *Global Technology Services* Israel > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > *IBM Israel* > > > > > > gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM: > > > From: Damir Krstic > > To: gpfsug main discussion list > > Date: 02/22/2016 03:12 PM > > > > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1) > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > Sorry to revisit this question - AFM seems to be the best way to do > > this. I was wondering if anyone has done AFM migration. I am looking > > at this wiki page for instructions: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/ > > wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating% > > 20Data%20Using%20AFM > > and I am little confused by step 3 "cut over users" <-- does this > > mean, unmount existing filesystem and point users to new filesystem? > > > > The reason we were looking at AFM is to not have downtime - make the > > transition as seamless as possible to the end user. Not sure what, > > then, AFM buys us if we still have to take "downtime" in order to > > cut users over to the new system. > > > > Thanks, > > Damir > > > > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic > wrote: > > Thanks all for great suggestions. We will most likely end up using > > either AFM or some mechanism of file copy (tar/rsync etc.). > > > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > > Along the same vein I've patched rsync to maintain source atimes in > > Linux for large transitions such as this. Along with the stadnard > > "patches" mod for destination atimes it is quite useful. Works in > > 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x > > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > > > Ed Wahl > > OSC > > > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss- > > bounces at spectrumscale.org] on behalf of Orlando Richards [ > > orlando.richards at ed.ac.uk] > > Sent: Monday, February 01, 2016 4:25 AM > > To: gpfsug-discuss at spectrumscale.org > > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1) > > > > For what it's worth - there's a patch for rsync which IBM provided a > > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up > > on the gpfsug github here: > > > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > > > > > On 29/01/16 22:36, Sven Oehme wrote: > > > Doug, > > > > > > This won't really work if you make use of ACL's or use special GPFS > > > extended attributes or set quotas, filesets, etc > > > so unfortunate the answer is you need to use a combination of things > and > > > there is work going on to make some of this simpler (e.g. for ACL's) , > > > but its a longer road to get there. so until then you need to think > > > about multiple aspects . > > > > > > 1. 
you need to get the data across and there are various ways to do > this. > > > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > > extended attributes and alike as it understands the GPFS internals it > > > also is operating in parallel can prefetch data, etc so its a efficient > > > way to do this but as already pointed out doesn't transfer quota or > > > fileset informations. > > > > > > b) you can either use rsync or any other pipe based copy program. the > > > downside is that they are typical single threaded and do a file by file > > > approach, means very metadata intensive on the source as well as target > > > side and cause a lot of ios on both side. > > > > > > c) you can use the policy engine to create a list of files to transfer > > > to at least address the single threaded scan part, then partition the > > > data and run multiple instances of cp or rsync in parallel, still > > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > > > 2. you need to get ACL/EA informations over too. there are several > > > command line options to dump the data and restore it, they kind of > > > suffer the same problem as data transfers , which is why using AFM is > > > the best way of doing this if you rely on ACL/EA informations. > > > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > > but all require some level of scripting to do this. > > > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > > described in the advanced admin book. > > > > > > sven > > > > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > > >> wrote: > > > > > > I have found that a tar pipe is much faster than rsync for this > sort > > > of thing. The fastest of these is ?star? (schily tar). On average > it > > > is about 2x-5x faster than rsync for doing this. After one pass > with > > > this, you can use rsync for a subsequent or last pass synch.____ > > > > > > __ __ > > > > > > e.g.____ > > > > > > $ cd /export/gpfs1/foo____ > > > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > > > __ __ > > > > > > This also will not preserve filesets and quotas, though. You should > > > be able to automate that with a little bit of awk, perl, or > whatnot.____ > > > > > > __ __ > > > > > > __ __ > > > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > > > >] *On Behalf Of > > > *Damir Krstic > > > *Sent:* Friday, January 29, 2016 2:32 PM > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > > appliance (GPFS4.1)____ > > > > > > __ __ > > > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > > of storage. We are in planning stages of implementation. We would > > > like to migrate date from our existing GPFS installation (around > > > 300TB) to new solution. ____ > > > > > > __ __ > > > > > > We were planning of adding ESS to our existing GPFS cluster and > > > adding its disks and then deleting our old disks and having the > data > > > migrated this way. However, our existing block size on our projects > > > filesystem is 1M and in order to extract as much performance out of > > > ESS we would like its filesystem created with larger block size. > > > Besides rsync do you have any suggestions of how to do this without > > > downtime and in fastest way possible? 
____ > > > > > > __ __ > > > > > > I have looked at AFM but it does not seem to migrate quotas and > > > filesets so that may not be an optimal solution. ____ > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > -- > > -- > > Dr Orlando Richards > > Research Services Manager > > Information Services > > IT Infrastructure Division > > Tel: 0131 650 4994 > > skype: orlando.richards > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From Luke.Raimbach at crick.ac.uk Wed Feb 24 14:05:07 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Wed, 24 Feb 2016 14:05:07 +0000 Subject: [gpfsug-discuss] AFM and Placement Policies Message-ID: Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. 
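A minimal sketch of the sort of placement rule being discussed, as it might be installed on the home file system (fsA). The pool, fileset and attribute names here are invented, and setXattr() support in placement rules depends on the Scale release, so validate with -I test before activating:

  cat > /tmp/home-placement.pol <<'EOF'
  /* tag every file created in the 'home' fileset, including files created by AFM */
  RULE 'tagIncoming' SET POOL 'system'
    FOR FILESET ('home')
    ACTION (setXattr('user.origin','afm-home'))
  EOF
  mmchpolicy fsA /tmp/home-placement.pol -I test   # then rerun with -I yes to activate

Whether a file arriving from the cache actually runs through a rule like this at home is exactly the question answered below.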
From dhildeb at us.ibm.com Wed Feb 24 19:16:54 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 24 Feb 2016 11:16:54 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: Message-ID: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From dhildeb at us.ibm.com Wed Feb 24 19:16:54 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 24 Feb 2016 11:16:54 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: Message-ID: <201602241923.u1OJNxMT006419@d01av04.pok.ibm.com> Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? 
I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Gethyn.Longworth at Rolls-Royce.com Thu Feb 25 10:42:39 2016 From: Gethyn.Longworth at Rolls-Royce.com (Longworth, Gethyn) Date: Thu, 25 Feb 2016 10:42:39 +0000 Subject: [gpfsug-discuss] Integration with Active Directory Message-ID: Hi all, I'm new to both GPFS and to this mailing list, so I thought I'd introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale.) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I've configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can't use RFC2307, as our IT department don't understand what this is), but the problem I'm having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate - I can run "id" on that node with a domain account and it provides the correct answer - whereas the other will not and denies any knowledge of the domain or user. >From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected - a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! Cheers, Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6181 bytes Desc: not available URL: -------------- next part -------------- The data contained in, or attached to, this e-mail, may contain confidential information. 
If you have received it in error you should notify the sender immediately by reply e-mail, delete the message from your system and contact +44 (0) 3301235850 (Security Operations Centre) if you need assistance. Please do not copy it for any purpose, or disclose its contents to any other person. An e-mail response to this address may be subject to interception or monitoring for operational reasons or for lawful business practices. (c) 2016 Rolls-Royce plc Registered office: 62 Buckingham Gate, London SW1E 6AT Company number: 1003142. Registered in England. From S.J.Thompson at bham.ac.uk Thu Feb 25 13:19:12 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 25 Feb 2016 13:19:12 +0000 Subject: [gpfsug-discuss] Integration with Active Directory Message-ID: Hi Gethyn, >From what I recall, CTDB used underneath is used to share the secret and only the primary named machine is joined, but CTDB and CES should work this backend part out for you. I do have a question though, do you want to have consistent UIDs across other systems? For example if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as out AD doesn't contain UIDs either). Simon From: > on behalf of "Longworth, Gethyn" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Thursday, 25 February 2016 at 10:42 To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Integration with Active Directory Hi all, I?m new to both GPFS and to this mailing list, so I thought I?d introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale?) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I?ve configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can?t use RFC2307, as our IT department don?t understand what this is), but the problem I?m having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate ? I can run ?id? on that node with a domain account and it provides the correct answer ? whereas the other will not and denies any knowledge of the domain or user. From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected ? a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! Cheers, Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From poppe at us.ibm.com Thu Feb 25 17:01:00 2016 From: poppe at us.ibm.com (Monty Poppe) Date: Thu, 25 Feb 2016 11:01:00 -0600 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: References: Message-ID: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> All CES nodes should operate consistently across the cluster. Here are a few tips on debugging: /usr/lpp/mmfs/bin/wbinfo -p to ensure winbind is running properly /usr/lpp/mmfs/bin/wbinfo -P (capital P), to ensure winbind can communicate with AD server ensure the first nameserver in /etc/resolv.conf points to your AD server (check all nodes) mmuserauth service check --server-reachability for a more thorough validation that all nodes can communicate to the authentication server If you need to look at samba logs (/var/adm/ras/log.smbd & log.wb-) to see what's going on, change samba log levels issue: /usr/lpp/mmfs/bin/net conf setparm global 'log level' 3. Don't forget to set back to 0 or 1 when you are done! If you're willing to go with a later release, AD authentication with LDAP ID mapping has been added as a feature in the 4.2 release. ( https://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_adwithldap.htm?lang=en ) Monty Poppe Spectrum Scale Test poppe at us.ibm.com 512-286-8047 T/L 363-8047 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 02/25/2016 07:19 AM Subject: Re: [gpfsug-discuss] Integration with Active Directory Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Gethyn, From what I recall, CTDB used underneath is used to share the secret and only the primary named machine is joined, but CTDB and CES should work this backend part out for you. I do have a question though, do you want to have consistent UIDs across other systems? For example if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as out AD doesn't contain UIDs either). Simon From: on behalf of "Longworth, Gethyn" Reply-To: "gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Thursday, 25 February 2016 at 10:42 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Integration with Active Directory Hi all, I?m new to both GPFS and to this mailing list, so I thought I?d introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale?) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I?ve configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can?t use RFC2307, as our IT department don?t understand what this is), but the problem I?m having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate ? I can run ?id? on that node with a domain account and it provides the correct answer ? whereas the other will not and denies any knowledge of the domain or user. 
From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected ? a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! Cheers, Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Thu Feb 25 17:46:02 2016 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 25 Feb 2016 17:46:02 +0000 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>, Message-ID: <201602251746.u1PHk8Uw012701@d01av03.pok.ibm.com> An HTML attachment was scrubbed... URL: From Gethyn.Longworth at Rolls-Royce.com Fri Feb 26 09:04:50 2016 From: Gethyn.Longworth at Rolls-Royce.com (Longworth, Gethyn) Date: Fri, 26 Feb 2016 09:04:50 +0000 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> Message-ID: Monty, Simon, Christof, Many thanks for your help. I found that the firewall wasn?t configured correctly ? I made the assumption that the samba ?service? enabled the ctdb port (4379 the next person searching for this) as well ? enabling it manually and restarting the node has resolved it. I need to investigate the issue of consistent uids / gids between my linux machines. Obviously very easy when you have full control over the AD, but as ours is a local AD (which I can control) and most of the user IDs coming over on a trust it is much more tricky. Has anyone done an ldap set up where they are effectively adding extra user info (like uids / gids / samba info) to existing AD users without messing with the original AD? Thanks, Gethyn From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Monty Poppe Sent: 25 February 2016 17:01 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Integration with Active Directory All CES nodes should operate consistently across the cluster. Here are a few tips on debugging: /usr/lpp/mmfs/bin/wbinfo-p to ensure winbind is running properly /usr/lpp/mmfs/bin/wbinfo-P (capital P), to ensure winbind can communicate with AD server ensure the first nameserver in /etc/resolv.conf points to your AD server (check all nodes) mmuserauth service check --server-reachability for a more thorough validation that all nodes can communicate to the authentication server If you need to look at samba logs (/var/adm/ras/log.smbd & log.wb-) to see what's going on, change samba log levels issue: /usr/lpp/mmfs/bin/net conf setparm global 'log level' 3. Don't forget to set back to 0 or 1 when you are done! 
If you're willing to go with a later release, AD authentication with LDAP ID mapping has been added as a feature in the 4.2 release. ( https://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_adwithldap.htm?lang=en) Monty Poppe Spectrum Scale Test poppe at us.ibm.com 512-286-8047 T/L 363-8047 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 02/25/2016 07:19 AM Subject: Re: [gpfsug-discuss] Integration with Active Directory Sent by: gpfsug-discuss-bounces at spectrumscale.org _____ Hi Gethyn, >From what I recall, CTDB used underneath is used to share the secret and only the primary named machine is joined, but CTDB and CES should work this backend part out for you. I do have a question though, do you want to have consistent UIDs across other systems? For example if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as out AD doesn't contain UIDs either). Simon From: < gpfsug-discuss-bounces at spectrumscale.org> on behalf of "Longworth, Gethyn" < Gethyn.Longworth at Rolls-Royce.com> Reply-To: " gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Thursday, 25 February 2016 at 10:42 To: " gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Subject: [gpfsug-discuss] Integration with Active Directory Hi all, I?m new to both GPFS and to this mailing list, so I thought I?d introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale?) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I?ve configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can?t use RFC2307, as our IT department don?t understand what this is), but the problem I?m having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate ? I can run ?id? on that node with a domain account and it provides the correct answer ? whereas the other will not and denies any knowledge of the domain or user. From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected ? a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! 
Cheers, Gethyn Longworth MEng CEng MIET |Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6181 bytes Desc: not available URL: -------------- next part -------------- The data contained in, or attached to, this e-mail, may contain confidential information. If you have received it in error you should notify the sender immediately by reply e-mail, delete the message from your system and contact +44 (0) 3301235850 (Security Operations Centre) if you need assistance. Please do not copy it for any purpose, or disclose its contents to any other person. An e-mail response to this address may be subject to interception or monitoring for operational reasons or for lawful business practices. (c) 2016 Rolls-Royce plc Registered office: 62 Buckingham Gate, London SW1E 6AT Company number: 1003142. Registered in England. From S.J.Thompson at bham.ac.uk Fri Feb 26 10:12:21 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 26 Feb 2016 10:12:21 +0000 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> Message-ID: In theory you can do this with LDS ... My solution though is to run LDAP server (with replication) across the CTDB server nodes. Each node then points to itself and the other CTDB servers for the SMB config. We populate it with users and groups, names copied in from AD. Its a bit of a fudge to make it work, and we found for auxiliary groups that winbind wasn't doing quite what it should, so have to have the SIDs populated in the local LDAP server config. Simon From: > on behalf of "Longworth, Gethyn" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Friday, 26 February 2016 at 09:04 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] Integration with Active Directory Has anyone done an ldap set up where they are effectively adding extra user info (like uids / gids / samba info) to existing AD users without messing with the original AD? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Fri Feb 26 10:52:31 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Fri, 26 Feb 2016 10:52:31 +0000 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> References: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Message-ID: Hi Dean, Thanks for this ? I had hoped this was the case. However what I?m now wondering is, if we operate the cache in independent-writer mode and the new file was pushed back home (conforming to cache, then home placement policies), then is subsequently evicted from the cache; if it needs to be pulled back for local operations in the cache, will the cache cluster see this file as ?new? for the third time? Cheers, Luke. 
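(A quick way to see what eviction actually does to a cached file -- the file system, fileset and path below are invented for illustration:

  echo /fsB/cache/new.file > /tmp/evict.list
  mmafmctl fsB evict -j cache --list-file /tmp/evict.list

  # the inode survives eviction: the reported size is unchanged, but the
  # locally allocated block count drops to 0 until the data is read again
  ls -ls /fsB/cache/new.file

On the next access the data is fetched back into the same inode rather than being recreated, which is the point Dean makes in his reply.)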
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Dean Hildebrand Sent: 24 February 2016 19:17 To: gpfsug main discussion list Cc: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM and Placement Policies Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center [Inactive hide details for Luke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM S]Luke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache wri From: Luke Raimbach > To: gpfsug main discussion list > Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 105 bytes Desc: image001.gif URL: From dhildeb at us.ibm.com Fri Feb 26 18:58:47 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 26 Feb 2016 10:58:47 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Message-ID: <201602261907.u1QJ7FZb019973@d03av03.boulder.ibm.com> Hi Luke, Cache eviction simply frees up space in the cache, but the inode/file is always the same. It does not delete and recreate the file in the cache. This is why you can continue to view files in the cache namespace even if they are evicted. Dean Hildebrand IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/26/2016 02:52 AM Subject: Re: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Dean, Thanks for this ? 
I had hoped this was the case. However what I?m now wondering is, if we operate the cache in independent-writer mode and the new file was pushed back home (conforming to cache, then home placement policies), then is subsequently evicted from the cache; if it needs to be pulled back for local operations in the cache, will the cache cluster see this file as ?new? for the third time? Cheers, Luke. From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Dean Hildebrand Sent: 24 February 2016 19:17 To: gpfsug main discussion list Cc: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM and Placement Policies Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center Inactive hide details for Luke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM SLuke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache wri From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Luke.Raimbach at crick.ac.uk Mon Feb 29 14:31:57 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Mon, 29 Feb 2016 14:31:57 +0000 Subject: [gpfsug-discuss] AFM and Symbolic Links Message-ID: Hi All, Quick one: Does AFM follow symbolic links present at home in the cache fileset? Cheers, Luke. 
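To make the placement-policy side of this exchange concrete, a file-placement rule that tags newly created files in the cache fileset with an extended attribute might be sketched as below. The pool, fileset and attribute names are invented, and whether an ACTION(SetXattr(...)) clause is honoured in a placement rule depends on the Scale release, so validate this against the policy-rule documentation before relying on it:

   cat > placement.pol <<'EOF'
   RULE 'tagNewFiles' SET POOL 'data'
        FOR FILESET ('cache')
        ACTION (SetXattr('user.origin','cache-cluster'))
   RULE 'default' SET POOL 'data'
   EOF
   mmchpolicy fsB placement.pol -I yes

Read together with Dean's two answers: a file pushed home by AFM is created fresh at home, so home's own placement rules fire there, while an evicted file that is later fetched back keeps its inode in the cache and is not treated as newly created a third time.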
Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From dhildeb at us.ibm.com Mon Feb 29 16:59:11 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Mon, 29 Feb 2016 08:59:11 -0800 Subject: [gpfsug-discuss] AFM and Symbolic Links In-Reply-To: References: Message-ID: <201602291701.u1TH1owF031283@d03av05.boulder.ibm.com> Hi Luke, Quick response.... yes :) Dean From: Luke Raimbach To: gpfsug main discussion list Date: 02/29/2016 06:32 AM Subject: [gpfsug-discuss] AFM and Symbolic Links Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, Quick one: Does AFM follow symbolic links present at home in the cache fileset? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:
From Paul.Tomlinson at awe.co.uk Mon Feb 1 10:06:15 2016 From: Paul.Tomlinson at awe.co.uk (Paul.Tomlinson at awe.co.uk) Date: Mon, 1 Feb 2016 10:06:15 +0000 Subject: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602011006.u11A6Mui009286@msw1.awe.co.uk> Hi Simon, We would like to send Mark Roberts (HPC) from AWE if any places are available. If there any places I'm sure will be willing to provide a list of topics that interest us. Best Regards Paul Tomlinson High Performance Computing Direct: 0118 985 8060 or 0118 982 4147 Mobile 07920783365 VPN: 88864 AWE, Aldermaston, Reading, RG7 4PR From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of "Spectrum scale UG Chair (Simon Thompson)"< Sent: 19 January 2016 17:14 To: gpfsug-discuss at spectrumscale.org Subject: EXTERNAL: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Dear All, We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016, 11am-3:30pm. The event will be held in central Oxford. The agenda promises to be hands on and give you the opportunity to speak face to face with the developers of Spectrum Scale. Guideline agenda: * TBC - please provide input on what you'd like to see! Lunch and refreshments will be provided. Please can you let me know by email if you are interested in attending by Wednesday 17th February. Thanks and we hope to see you there. Thanks to Andy at OERC for offering to host. Simon The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Mon Feb 1 10:18:51 2016 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Mon, 1 Feb 2016 10:18:51 +0000 Subject: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016 In-Reply-To: <201602011006.u11A6Mui009286@msw1.awe.co.uk> References: <201602011006.u11A6Mui009286@msw1.awe.co.uk>, <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602011018.u11AIuUt009534@d06av09.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Mon Feb 1 17:29:07 2016 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Mon, 1 Feb 2016 18:29:07 +0100 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Message-ID: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. 
Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Mon Feb 1 18:28:01 2016 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Mon, 1 Feb 2016 10:28:01 -0800 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <20160129170401.0ec9f72e@uphs.upenn.edu> References: <20160129170401.0ec9f72e@uphs.upenn.edu> Message-ID: <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> > What's on a 'dataOnly' GPFS 3.5.x NSD besides data and the NSD disk > header, if anything? That's it. In some cases there may also be a copy of the file system descriptor, but that doesn't really matter in your case. > I'm trying to understand some file corruption, and one potential > explanation would be if a (non-GPFS) server wrote to a LUN used as a > GPFS dataOnly NSD. > > We are not seeing any 'I/O' or filesystem errors, mmfsck (online) doesn't > detect any errors, and all NSDs are usable. However, some files seem to > have changes in content, with no changes in metadata (modify timestamp, > ownership), including files with the GPFS "immutable" ACL set. This is all consistent with the content on a dataOnly disk being overwritten outside of GPFS. > If an NSD was changed outside of GPFS control, would mmfsck detect > filesystem errors, or would the GPFS filesystem be consistent, even > though the content of some of the data blocks was altered? No. mmfsck can detect metadata corruption, but has no way to tell whether a data block has correct content or garbage. > Is there any metadata or checksum information maintained by GPFS, or any > means of doing a consistency check of the contents of files that would > correlate with blocks stored on a particular NSD? GPFS on top of traditional disks/RAID LUNs doesn't checksum data blocks, and thus can't tell whether a data block is good or bad. GPFS Native RAID has very strong on-disk data checksumming, OTOH. yuri -------------- next part -------------- An HTML attachment was scrubbed... 
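Given Yuri's point that nothing inside GPFS can flag silently overwritten data blocks on a conventional dataOnly NSD, any verification has to be done outside the file system. A crude sketch, assuming a known-good copy of the tree is still available (a backup or an unaffected replica mounted at /backup - both paths here are placeholders):

   # checksum the live tree and the reference copy, then compare
   ( cd /gpfs/fs1/projects && find . -type f -print0 | xargs -0 md5sum | sort -k 2 ) > /tmp/live.md5
   ( cd /backup/projects   && find . -type f -print0 | xargs -0 md5sum | sort -k 2 ) > /tmp/ref.md5
   diff /tmp/live.md5 /tmp/ref.md5   # lines that differ are files whose content no longer matches

On a large file system the file list is better produced with the policy engine and the checksumming spread over several nodes, but the comparison logic stays the same; note this only finds damage, it does not say which NSD the bad blocks live on.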
URL: From liuk at us.ibm.com Mon Feb 1 18:26:43 2016 From: liuk at us.ibm.com (Kenneth Liu) Date: Mon, 1 Feb 2016 10:26:43 -0800 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 In-Reply-To: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> References: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> Message-ID: <201602011838.u11Ic39I004064@d03av02.boulder.ibm.com> And ISKLM to manage the encryption keys. Kenneth Liu Software Defined Infrastructure -- Spectrum Storage, Cleversafe & Platform Computing Sales Address: 4000 Executive Parkway San Ramon, CA 94583 Mobile #: (510) 584-7657 Email: liuk at us.ibm.com From: "Frank Kraemer" To: gpfsug-discuss at gpfsug.org Date: 02/01/2016 09:30 AM Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Sent by: gpfsug-discuss-bounces at spectrumscale.org IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:
From ewahl at osc.edu Mon Feb 1 18:39:12 2016 From: ewahl at osc.edu (Wahl, Edward) Date: Mon, 1 Feb 2016 18:39:12 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: <56AF2498.8010503@ed.ac.uk> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> , <56AF2498.8010503@ed.ac.uk> Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Along the same vein I've patched rsync to maintain source atimes in Linux for large transitions such as this. Along with the standard "patches" mod for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff Ed Wahl OSC ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [orlando.richards at ed.ac.uk] Sent: Monday, February 01, 2016 4:25 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) For what it's worth - there's a patch for rsync which IBM provided a while back that will copy NFSv4 ACLs (maybe other stuff?).
I put it up on the gpfsug github here: https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync On 29/01/16 22:36, Sven Oehme wrote: > Doug, > > This won't really work if you make use of ACL's or use special GPFS > extended attributes or set quotas, filesets, etc > so unfortunate the answer is you need to use a combination of things and > there is work going on to make some of this simpler (e.g. for ACL's) , > but its a longer road to get there. so until then you need to think > about multiple aspects . > > 1. you need to get the data across and there are various ways to do this. > > a) AFM is the simplest of all as it not just takes care of ACL's and > extended attributes and alike as it understands the GPFS internals it > also is operating in parallel can prefetch data, etc so its a efficient > way to do this but as already pointed out doesn't transfer quota or > fileset informations. > > b) you can either use rsync or any other pipe based copy program. the > downside is that they are typical single threaded and do a file by file > approach, means very metadata intensive on the source as well as target > side and cause a lot of ios on both side. > > c) you can use the policy engine to create a list of files to transfer > to at least address the single threaded scan part, then partition the > data and run multiple instances of cp or rsync in parallel, still > doesn't fix the ACL / EA issues, but the data gets there faster. > > 2. you need to get ACL/EA informations over too. there are several > command line options to dump the data and restore it, they kind of > suffer the same problem as data transfers , which is why using AFM is > the best way of doing this if you rely on ACL/EA informations. > > 3. transfer quota / fileset infos. there are several ways to do this, > but all require some level of scripting to do this. > > if you have TSM/HSM you could also transfer the data using SOBAR it's > described in the advanced admin book. > > sven > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > wrote: > > I have found that a tar pipe is much faster than rsync for this sort > of thing. The fastest of these is ?star? (schily tar). On average it > is about 2x-5x faster than rsync for doing this. After one pass with > this, you can use rsync for a subsequent or last pass synch.____ > > __ __ > > e.g.____ > > $ cd /export/gpfs1/foo____ > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > __ __ > > This also will not preserve filesets and quotas, though. You should > be able to automate that with a little bit of awk, perl, or whatnot.____ > > __ __ > > __ __ > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > ] *On Behalf Of > *Damir Krstic > *Sent:* Friday, January 29, 2016 2:32 PM > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1)____ > > __ __ > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > of storage. We are in planning stages of implementation. We would > like to migrate date from our existing GPFS installation (around > 300TB) to new solution. ____ > > __ __ > > We were planning of adding ESS to our existing GPFS cluster and > adding its disks and then deleting our old disks and having the data > migrated this way. However, our existing block size on our projects > filesystem is 1M and in order to extract as much performance out of > ESS we would like its filesystem created with larger block size. 
> Besides rsync do you have any suggestions of how to do this without > downtime and in fastest way possible? ____ > > __ __ > > I have looked at AFM but it does not seem to migrate quotas and > filesets so that may not be an optimal solution. ____ > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Research Services Manager Information Services IT Infrastructure Division Tel: 0131 650 4994 skype: orlando.richards The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Mon Feb 1 18:44:50 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 1 Feb 2016 13:44:50 -0500 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> References: <20160129170401.0ec9f72e@uphs.upenn.edu> <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> Message-ID: <201602011844.u11IirBd015334@d03av01.boulder.ibm.com> Just to add... Spectrum Scale is no different than most other file systems in this respect. It assumes the disk system and network systems will detect I/O errors, including data corruption. And it usually will ... but there are, as you've discovered, scenarios where it can not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Mon Feb 1 19:18:22 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 1 Feb 2016 19:18:22 +0000 Subject: [gpfsug-discuss] Question on FPO node - NSD recovery Message-ID: <427E3540-585D-4DD9-9E41-29C222548E03@nuance.com> When a node that?s part of an FPO file system (local disks) and the node is rebooted ? the NSDs come up as ?down? until I manually starts them. GPFS start on the node but the NSDs stay down. Is this the expected behavior or is there a config setting I missed somewhere? Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Tue Feb 2 08:23:43 2016 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Tue, 2 Feb 2016 09:23:43 +0100 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction Message-ID: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> by Nils Haustein, see at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5334 Abstract: This presentation gives a short overview about the IBM Spectrum Family and briefly introduces IBM Spectrum Protect? (Tivoli Storage Manager, TSM) and IBM Spectrum Scale? (General Parallel File System, GPFS) in more detail. Subsequently it presents a solution integrating these two components and outlines its advantages. It further discusses use cases and deployment options. Last but not least this presentation elaborates on the client values running multiple Spectrum Protect instance in a Spectrum Scale cluster and presents performance test results highlighting that this solution scales with the growing data protection demands. 
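On the FPO question above (NSDs staying down after a node reboot): GPFS does not restart disks it has marked down on its own unless auto-recovery is enabled, so the choice is to start them by hand or to turn the FPO recovery behaviour on. A hedged sketch - the restripeOnDiskFailure setting is the usual knob for FPO auto-recovery, but whether it covers the plain-reboot case should be confirmed for the release in use; the file system name fs1 and the NSD names are placeholders:

   mmlsdisk fs1                        # shows which NSDs are in 'down' availability
   mmchdisk fs1 start -a               # restart all down disks
   mmchdisk fs1 start -d "nsd001;nsd002"   # or only specific ones

   # candidate setting for automatic handling on FPO nodes
   mmchconfig restripeOnDiskFailure=yes -i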
Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tomasz.Wolski at ts.fujitsu.com Wed Feb 3 08:10:32 2016 From: Tomasz.Wolski at ts.fujitsu.com (Tomasz.Wolski at ts.fujitsu.com) Date: Wed, 3 Feb 2016 08:10:32 +0000 Subject: [gpfsug-discuss] DMAPI multi-thread safe Message-ID: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> Hi Experts :) Could you please tell me if the DMAPI implementation for GPFS is multi-thread safe? Are there any limitation towards using multiple threads within a single DM application process? For example: DM events are processed by multiple threads, which call dm* functions for manipulating file attributes - will there be any problem when two threads try to access the same file at the same time? Is the libdmapi thread safe? Best regards, Tomasz Wolski -------------- next part -------------- An HTML attachment was scrubbed... URL: From stschmid at de.ibm.com Wed Feb 3 08:41:27 2016 From: stschmid at de.ibm.com (Stefan Schmidt) Date: Wed, 3 Feb 2016 09:41:27 +0100 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction In-Reply-To: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> References: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> Message-ID: <201602030841.u138fY2l007402@d06av06.portsmouth.uk.ibm.com> Hi all, I want to add that IBM Spectrum Scale Raid ( ESS/GNR) is missing in the table I think. I know it's now a HW solution but the GNR package I thought would be named IBM Spectrum Scale Raid. Mit freundlichen Gr??en / Kind regards Stefan Schmidt Scrum Master IBM Spectrum Scale GUI / Senior IT Architect /PMP - Dept. M069 / IBM Spectrum Scale Software Development IBM Systems Group IBM Deutschland Phone: +49-6131-84-3465 IBM Deutschland Mobile: +49-170-6346601 Hechtsheimer Str. 2 E-Mail: stschmid at de.ibm.com 55131 Mainz Germany IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Frank Kraemer/Germany/IBM at IBMDE To: gpfsug-discuss at gpfsug.org Date: 02.02.2016 09:24 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction Sent by: gpfsug-discuss-bounces at spectrumscale.org by Nils Haustein, see at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5334 Abstract: This presentation gives a short overview about the IBM Spectrum Family and briefly introduces IBM Spectrum Protect? (Tivoli Storage Manager, TSM) and IBM Spectrum Scale? (General Parallel File System, GPFS) in more detail. Subsequently it presents a solution integrating these two components and outlines its advantages. It further discusses use cases and deployment options. Last but not least this presentation elaborates on the client values running multiple Spectrum Protect instance in a Spectrum Scale cluster and presents performance test results highlighting that this solution scales with the growing data protection demands. Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 
2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert at strubi.ox.ac.uk Wed Feb 3 16:53:59 2016 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Wed, 3 Feb 2016 16:53:59 +0000 (GMT) Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602031653.060161@mail.strubi.ox.ac.uk> Hi Simon, I'll certainly be interested in wandering into town to attend this... please register me or whatever has to be done. Regards, Robert -- Dr. Robert Esnouf, University Research Lecturer, Head of Research Computing Core, NDM Research Computing Strategy Officer Room 10/028, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Email: robert at strubi.ox.ac.uk / robert at well.ox.ac.uk Tel: (+44) - 1865 - 287783 -------------- next part -------------- An embedded message was scrubbed... From: "Spectrum scale UG Chair (Simon Thompson)" Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Date: Tue, 19 Jan 2016 17:13:42 +0000 Size: 5334 URL: From wsawdon at us.ibm.com Wed Feb 3 18:22:48 2016 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Wed, 3 Feb 2016 10:22:48 -0800 Subject: [gpfsug-discuss] DMAPI multi-thread safe In-Reply-To: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> References: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> Message-ID: <201602031822.u13IMv3c017365@d03av05.boulder.ibm.com> > From: "Tomasz.Wolski at ts.fujitsu.com" > > Could you please tell me if the DMAPI implementation for GPFS is > multi-thread safe? Are there any limitation towards using multiple > threads within a single DM application process? > For example: DM events are processed by multiple threads, which call > dm* functions for manipulating file attributes ? will there be any > problem when two threads try to access the same file at the same time? > > Is the libdmapi thread safe? > With the possible exception of dm_init_service it should be thread safe. Dmapi does offer access rights to allow or prevent concurrent access to a file. If you are not using the access rights, internally Spectrum Scale will serialize the dmapi calls like it would serialize for posix -- some calls will proceed in parallel (e.g. reads, non-overlapping writes) and some will be serialized (e.g. EA updates). -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From damir.krstic at gmail.com Thu Feb 4 21:15:56 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Thu, 04 Feb 2016 21:15:56 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Thanks all for great suggestions. We will most likely end up using either AFM or some mechanism of file copy (tar/rsync etc.). 
On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > Along the same vein I've patched rsync to maintain source atimes in Linux > for large transitions such as this. Along with the stadnard "patches" mod > for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. > I've not yet ported it to 3.1.x > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > Ed Wahl > OSC > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [ > orlando.richards at ed.ac.uk] > Sent: Monday, February 01, 2016 4:25 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance > (GPFS4.1) > > For what it's worth - there's a patch for rsync which IBM provided a > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up > on the gpfsug github here: > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > On 29/01/16 22:36, Sven Oehme wrote: > > Doug, > > > > This won't really work if you make use of ACL's or use special GPFS > > extended attributes or set quotas, filesets, etc > > so unfortunate the answer is you need to use a combination of things and > > there is work going on to make some of this simpler (e.g. for ACL's) , > > but its a longer road to get there. so until then you need to think > > about multiple aspects . > > > > 1. you need to get the data across and there are various ways to do this. > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > extended attributes and alike as it understands the GPFS internals it > > also is operating in parallel can prefetch data, etc so its a efficient > > way to do this but as already pointed out doesn't transfer quota or > > fileset informations. > > > > b) you can either use rsync or any other pipe based copy program. the > > downside is that they are typical single threaded and do a file by file > > approach, means very metadata intensive on the source as well as target > > side and cause a lot of ios on both side. > > > > c) you can use the policy engine to create a list of files to transfer > > to at least address the single threaded scan part, then partition the > > data and run multiple instances of cp or rsync in parallel, still > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > 2. you need to get ACL/EA informations over too. there are several > > command line options to dump the data and restore it, they kind of > > suffer the same problem as data transfers , which is why using AFM is > > the best way of doing this if you rely on ACL/EA informations. > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > but all require some level of scripting to do this. > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > described in the advanced admin book. > > > > sven > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > wrote: > > > > I have found that a tar pipe is much faster than rsync for this sort > > of thing. The fastest of these is ?star? (schily tar). On average it > > is about 2x-5x faster than rsync for doing this. 
After one pass with > > this, you can use rsync for a subsequent or last pass synch.____ > > > > __ __ > > > > e.g.____ > > > > $ cd /export/gpfs1/foo____ > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > __ __ > > > > This also will not preserve filesets and quotas, though. You should > > be able to automate that with a little bit of awk, perl, or > whatnot.____ > > > > __ __ > > > > __ __ > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > ] *On Behalf Of > > *Damir Krstic > > *Sent:* Friday, January 29, 2016 2:32 PM > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1)____ > > > > __ __ > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > of storage. We are in planning stages of implementation. We would > > like to migrate date from our existing GPFS installation (around > > 300TB) to new solution. ____ > > > > __ __ > > > > We were planning of adding ESS to our existing GPFS cluster and > > adding its disks and then deleting our old disks and having the data > > migrated this way. However, our existing block size on our projects > > filesystem is 1M and in order to extract as much performance out of > > ESS we would like its filesystem created with larger block size. > > Besides rsync do you have any suggestions of how to do this without > > downtime and in fastest way possible? ____ > > > > __ __ > > > > I have looked at AFM but it does not seem to migrate quotas and > > filesets so that may not be an optimal solution. ____ > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > -- > Dr Orlando Richards > Research Services Manager > Information Services > IT Infrastructure Division > Tel: 0131 650 4994 > skype: orlando.richards > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Fri Feb 5 11:25:38 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 5 Feb 2016 11:25:38 +0000 Subject: [gpfsug-discuss] BM Spectrum Scale transparent cloud tiering In-Reply-To: <201601291718.u0THIPLr009799@d01av03.pok.ibm.com> References: <8505A552-5410-4F70-AA77-3DE5EF54BE09@nuance.com> <201601291718.u0THIPLr009799@d01av03.pok.ibm.com> Message-ID: Just to note if anyone is interested, the open beta is now "open" for the transparent cloud tiering, see: http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW Simon From: > on behalf of Marc A Kaplan > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Friday, 29 January 2016 at 17:18 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] BM Spectrum Scale transparent cloud tiering Since this official IBM website (pre)announces transparent cloud tiering ... http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW And since Oesterlin mentioned Cluster Export Service (CES), please allow me to (hopefully!) clarify: Transparent Cloud Tiering uses some new interfaces and functions within Spectrum Scale, it is not "just a rehash" of the long existing DMAPI HSM support. Transparent Cloud Tiering allows one to dynamically migrate Spectrum Scale files to and from foreign file and/or object stores. on the other hand ... Cluster Export Service, allows one to access Spectrum Scale files with foreign protocols, such as NFS, SMB, and Object(OpenStack) I suppose one could deploy both, using Spectrum Scale with Cluster Export Service for local, fast, immediate access to "hot" file and objects and some foreign object service, such as Amazon S3 or Cleversafe for long term "cold" storage. Oh, and just to add to the mix, in case you haven't heard yet, Cleversafe is a fairly recent IBM acquisition, http://www-03.ibm.com/press/us/en/pressrelease/47776.wss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Feb 8 10:07:29 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 8 Feb 2016 10:07:29 +0000 Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: Hi All, Just to note that we are NOW FULL for the next meet the devs in Feb. Simon From: > on behalf of Simon Thompson > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 19 January 2016 at 17:13 To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Dear All, We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016, 11am-3:30pm. The event will be held in central Oxford. The agenda promises to be hands on and give you the opportunity to speak face to face with the developers of Spectrum Scale. Guideline agenda: * TBC - please provide input on what you'd like to see! Lunch and refreshments will be provided. Please can you let me know by email if you are interested in attending by Wednesday 17th February. Thanks and we hope to see you there. Thanks to Andy at OERC for offering to host. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Tue Feb 9 14:42:07 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 9 Feb 2016 14:42:07 +0000 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config Message-ID: Any ideas on how to get out of this? [root at gpfs01 ~]# mmlsnodeclass onegig Node Class Name Members --------------------- ----------------------------------------------------------- one gig [root at gpfs01 ~]# mmchconfig maxMBpS=DEFAULT -N onegig mmchconfig: No nodes were found that matched the input specification. mmchconfig: Command failed. Examine previous error messages to determine cause. [root at gpfs01 ~]# mmdelnodeclass onegig mmdelnodeclass: Node class "onegig" still appears in GPFS configuration node override section maxMBpS 120 [onegig] mmdelnodeclass: Command failed. Examine previous error messages to determine cause. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Feb 9 15:04:38 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 9 Feb 2016 10:04:38 -0500 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: References: Message-ID: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> Yeah. Try first changing the configuration so it does not depend on onegig. Then secondly you may want to delete the nodeclass. Any ideas on how to get out of this? [root at gpfs01 ~]# mmlsnodeclass onegig Node Class Name Members --------------------- ----------------------------------------------------------- one gig [root at gpfs01 ~]# mmchconfig maxMBpS=DEFAULT -N onegig mmchconfig: No nodes were found that matched the input specification. mmchconfig: Command failed. Examine previous error messages to determine cause. [root at gpfs01 ~]# mmdelnodeclass onegig mmdelnodeclass: Node class "onegig" still appears in GPFS configuration node override section maxMBpS 120 [onegig] mmdelnodeclass: Command failed. Examine previous error messages to determine cause. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Tue Feb 9 15:07:30 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 9 Feb 2016 15:07:30 +0000 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> References: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> Message-ID: <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> Well, that would have been my guess as well. But I need to associate that value with ?something?? I?ve been trying a sequence of commands, no joy. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Tuesday, February 9, 2016 at 9:04 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Removing empty "nodeclass" from config Yeah. Try first changing the configuration so it does not depend on onegig. Then secondly you may want to delete the nodeclass. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue Feb 9 15:34:17 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 9 Feb 2016 10:34:17 -0500 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> References: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> Message-ID: <201602091534.u19FYPCE020191@d01av02.pok.ibm.com> AH... I see, instead of `maxMBpS=default -N all` try a specific number. And then revert to "default" with a second command. Seems there are some bugs or peculiarities in this code. # mmchconfig maxMBpS=99999 -N all # mmchconfig maxMBpS=default -N all I tried some other stuff. If you're curious play around and do mmlsconfig after each mmchconfig and see how the settings "evolve"!! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From pinto at scinet.utoronto.ca Wed Feb 10 19:26:56 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Wed, 10 Feb 2016 14:26:56 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local node identity. Message-ID: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Dear group I'm trying to deal with this in the most elegant way possible: Once upon the time there were nodeA and nodeB in the cluster, on a 'onDemand manual HA' fashion. * nodeA died, so I migrated the whole OS/software/application stack from backup over to 'nodeB', IP/hostname, etc, hence 'old nodeB' effectively became the new nodeA. * Getting the new nodeA to rejoin the cluster was already a pain, but through a mmdelnode and mmaddnode operation we eventually got it to mount gpfs. Well ... * Old nodeA is now fixed and back on the network, and I'd like to re-purpose it as the new standby nodeB (IP and hostname already applied). As the subject say, I'm now facing node identity issues. From the FSmgr I already tried to del/add nodeB, even nodeA, etc, however GPFS seems to keep some information cached somewhere in the cluster. * At this point I even turned old nodeA into a nodeC with a different IP, etc, but that doesn't help either. I can't even start gpfs on nodeC. Question: what is the appropriate process to clean this mess from the GPFS perspective? I can't touch the new nodeA. It's highly committed in production already. Thanks Jaime ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From pinto at scinet.utoronto.ca Wed Feb 10 20:24:21 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Wed, 10 Feb 2016 15:24:21 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local node identity. In-Reply-To: References: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Message-ID: <20160210152421.63075r24zqb156d1@support.scinet.utoronto.ca> Quoting "Buterbaugh, Kevin L" : > Hi Jaime, > > Have you tried wiping out /var/mmfs/gen/* and /var/mmfs/etc/* on the > old nodeA? > > Kevin That did the trick. Thanks Kevin and all that responded privately. 
Jaime > >> On Feb 10, 2016, at 1:26 PM, Jaime Pinto wrote: >> >> Dear group >> >> I'm trying to deal with this in the most elegant way possible: >> >> Once upon the time there were nodeA and nodeB in the cluster, on a >> 'onDemand manual HA' fashion. >> >> * nodeA died, so I migrated the whole OS/software/application stack >> from backup over to 'nodeB', IP/hostname, etc, hence 'old nodeB' >> effectively became the new nodeA. >> >> * Getting the new nodeA to rejoin the cluster was already a pain, >> but through a mmdelnode and mmaddnode operation we eventually got >> it to mount gpfs. >> >> Well ... >> >> * Old nodeA is now fixed and back on the network, and I'd like to >> re-purpose it as the new standby nodeB (IP and hostname already >> applied). As the subject say, I'm now facing node identity issues. >> From the FSmgr I already tried to del/add nodeB, even nodeA, etc, >> however GPFS seems to keep some information cached somewhere in the >> cluster. >> >> * At this point I even turned old nodeA into a nodeC with a >> different IP, etc, but that doesn't help either. I can't even start >> gpfs on nodeC. >> >> Question: what is the appropriate process to clean this mess from >> the GPFS perspective? >> >> I can't touch the new nodeA. It's highly committed in production already. >> >> Thanks >> Jaime >> >> >> >> >> >> >> ************************************ >> --- >> Jaime Pinto >> SciNet HPC Consortium - Compute/Calcul Canada >> www.scinet.utoronto.ca - www.computecanada.org >> University of Toronto >> 256 McCaul Street, Room 235 >> Toronto, ON, M5T1W5 >> P: 416-978-2755 >> C: 416-505-1477 >> >> ---------------------------------------------------------------- >> This message was sent using IMP at SciNet Consortium, University of Toronto. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From makaplan at us.ibm.com Wed Feb 10 20:34:58 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 10 Feb 2016 15:34:58 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local nodeidentity. In-Reply-To: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> References: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Message-ID: <201602102035.u1AKZ4v9030063@d01av01.pok.ibm.com> For starters, show us the output of mmlscluster mmgetstate -a cat /var/mmfs/gen/mmsdrfs Depending on how those look, this might be simple or not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Feb 11 14:42:40 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 11 Feb 2016 14:42:40 +0000 Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? Message-ID: <3FA3ABD2-0B93-4A26-A841-84AE4A8505CA@nuance.com> I?ll be at IBM Interconnect the week of 2/21. Anyone else going? Is there interest in a meet-up or getting together informally? 
If anyone is interested, drop me a note and I?ll try and pull something together - robert.oesterlin at nuance.com Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Fri Feb 12 14:53:22 2016 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Fri, 12 Feb 2016 15:53:22 +0100 Subject: [gpfsug-discuss] Upcoming Spectrum Scale education events and user group meetings in Europe Message-ID: <201602121453.u1CErUAS012453@d06av07.portsmouth.uk.ibm.com> Here is an overview of upcoming Spectrum Scale education events and user group meetings in Europe. I plan to be at most of the events. Looking forward to meet you there! https://ibm.biz/BdHtBN -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From service at metamodul.com Sun Feb 14 13:59:36 2016 From: service at metamodul.com (MetaService) Date: Sun, 14 Feb 2016 14:59:36 +0100 Subject: [gpfsug-discuss] Migration from SONAS to Spectrum Scale - Limit of 200 TB for ACE migrations Message-ID: <1455458376.4507.92.camel@pluto> Hi, The Playbook: SONAS / Unified Migration to IBM Spectrum Scale - https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/fa32927c-e904-49cc-a4cc-870bcc8e307c/page/2ff0c6d7-a854-4d64-a98c-0dbfc611ffc6/attachment/a57f1d1e-c68e-44b0-bcde-20ce6b0aebd6/media/Migration_Playbook_PoC_SonasToSpectrumScale.pdf - mentioned that only ACE migration for SONAS FS up to 200TB are supported/recommended. Is this a limitation for the whole SONAS FS or for each fileset ? tia Hajo -- MetaModul GmbH Suederstr. 12 DE-25336 Elmshorn Mobil: +49 177 4393994 Geschaeftsfuehrer: Hans-Joachim Ehlers From douglasof at us.ibm.com Mon Feb 15 15:26:08 2016 From: douglasof at us.ibm.com (Douglas O'flaherty) Date: Mon, 15 Feb 2016 10:26:08 -0500 Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? In-Reply-To: References: Message-ID: <201602151530.u1FFU4IG026030@d01av03.pok.ibm.com> Greetings: I like Bob's suggestion of an informal meet-up next week. How does Spectrum Scale beers sound? Tuesday right near the Expo should work. We'll scope out a place this week. We will have several places Scale is covered, including some references in different keynotes. There will be a demonstration of transparent cloud tiering - the Open Beta currently running - at the Interconnect Expo. There is summary of the several events in EU coming up. I'm looking for topics you want covered at the ISC User Group meeting. https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Upcoming_Spectrum_Scale_education_events_and_user_group_meetings_in_Europe?lang=en_us The next US user group is still to be scheduled, so send in your ideas. doug ----- Message from "Oesterlin, Robert" on Thu, 11 Feb 2016 14:42:40 +0000 ----- To: gpfsug main discussion list Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? I?ll be at IBM Interconnect the week of 2/21. Anyone else going? Is there interest in a meet-up or getting together informally? 
If anyone is interested, drop me a note and I?ll try and pull something together - robert.oesterlin at nuance.com Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From damir.krstic at gmail.com Wed Feb 17 21:07:33 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Wed, 17 Feb 2016 21:07:33 +0000 Subject: [gpfsug-discuss] question about remote cluster mounting Message-ID: In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Feb 17 21:40:05 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 17 Feb 2016 21:40:05 +0000 Subject: [gpfsug-discuss] question about remote cluster mounting In-Reply-To: References: Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05FB36F6@CHI-EXCHANGEW1.w2k.jumptrading.com> Yes, you may (and should) reuse the auth key from the compute cluster, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Damir Krstic Sent: Wednesday, February 17, 2016 3:08 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] question about remote cluster mounting In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
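To put Bryan's answer into commands: only the new (third) storage cluster needs a key pair generated; the compute cluster keeps its existing key in /var/mmfs/ssl/id_rsa.pub and simply exchanges it again, so nothing has to be shut down on the compute side. A rough sketch only, with illustrative cluster, node and file system names (see also Yuri's note on the key pair further down):

# on the new storage cluster (a brand new cluster, so taking it down briefly is not an issue)
mmauth genkey new
mmchconfig cipherList=AUTHONLY
# after copying the public keys (/var/mmfs/ssl/id_rsa.pub) between the two clusters:
# on the new storage cluster, authorize the compute cluster's existing key and grant access
mmauth add compute.example.com -k /tmp/compute_id_rsa.pub
mmauth grant compute.example.com -f /dev/essfs
# on the compute cluster, just register the new remote cluster and file system
mmremotecluster add ess.example.com -n essio1,essio2 -k /tmp/ess_id_rsa.pub
mmremotefs add essfs -f essfs -C ess.example.com -T /projects/ess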
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Wed Feb 17 22:54:36 2016 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Wed, 17 Feb 2016 14:54:36 -0800 Subject: [gpfsug-discuss] question about remote cluster mounting In-Reply-To: References: Message-ID: <201602172255.u1HMtIDp000702@d03av05.boulder.ibm.com> The authentication scheme used for GPFS multi-clustering is similar to what other frameworks (e.g. ssh) do for private/public auth: each cluster has a private key and a public key. The key pair only needs to be generated once (unless you want to periodically regenerate it for higher security; this is different from enabling authentication for the very first time and can be done without downtime). The public key can then be exchanged with multiple remote clusters. yuri From: Damir Krstic To: gpfsug main discussion list , Date: 02/17/2016 01:08 PM Subject: [gpfsug-discuss] question about remote cluster mounting Sent by: gpfsug-discuss-bounces at spectrumscale.org In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From damir.krstic at gmail.com Mon Feb 22 13:12:14 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 22 Feb 2016 13:12:14 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Sorry to revisit this question - AFM seems to be the best way to do this. I was wondering if anyone has done AFM migration. 
I am looking at this wiki page for instructions: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating%20Data%20Using%20AFM and I am little confused by step 3 "cut over users" <-- does this mean, unmount existing filesystem and point users to new filesystem? The reason we were looking at AFM is to not have downtime - make the transition as seamless as possible to the end user. Not sure what, then, AFM buys us if we still have to take "downtime" in order to cut users over to the new system. Thanks, Damir On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic wrote: > Thanks all for great suggestions. We will most likely end up using either > AFM or some mechanism of file copy (tar/rsync etc.). > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > >> Along the same vein I've patched rsync to maintain source atimes in Linux >> for large transitions such as this. Along with the stadnard "patches" mod >> for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. >> I've not yet ported it to 3.1.x >> https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff >> >> Ed Wahl >> OSC >> >> ________________________________________ >> From: gpfsug-discuss-bounces at spectrumscale.org [ >> gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [ >> orlando.richards at ed.ac.uk] >> Sent: Monday, February 01, 2016 4:25 AM >> To: gpfsug-discuss at spectrumscale.org >> Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS >> appliance (GPFS4.1) >> >> For what it's worth - there's a patch for rsync which IBM provided a >> while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up >> on the gpfsug github here: >> >> https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync >> >> >> >> On 29/01/16 22:36, Sven Oehme wrote: >> > Doug, >> > >> > This won't really work if you make use of ACL's or use special GPFS >> > extended attributes or set quotas, filesets, etc >> > so unfortunate the answer is you need to use a combination of things and >> > there is work going on to make some of this simpler (e.g. for ACL's) , >> > but its a longer road to get there. so until then you need to think >> > about multiple aspects . >> > >> > 1. you need to get the data across and there are various ways to do >> this. >> > >> > a) AFM is the simplest of all as it not just takes care of ACL's and >> > extended attributes and alike as it understands the GPFS internals it >> > also is operating in parallel can prefetch data, etc so its a efficient >> > way to do this but as already pointed out doesn't transfer quota or >> > fileset informations. >> > >> > b) you can either use rsync or any other pipe based copy program. the >> > downside is that they are typical single threaded and do a file by file >> > approach, means very metadata intensive on the source as well as target >> > side and cause a lot of ios on both side. >> > >> > c) you can use the policy engine to create a list of files to transfer >> > to at least address the single threaded scan part, then partition the >> > data and run multiple instances of cp or rsync in parallel, still >> > doesn't fix the ACL / EA issues, but the data gets there faster. >> > >> > 2. you need to get ACL/EA informations over too. 
there are several >> > command line options to dump the data and restore it, they kind of >> > suffer the same problem as data transfers , which is why using AFM is >> > the best way of doing this if you rely on ACL/EA informations. >> > >> > 3. transfer quota / fileset infos. there are several ways to do this, >> > but all require some level of scripting to do this. >> > >> > if you have TSM/HSM you could also transfer the data using SOBAR it's >> > described in the advanced admin book. >> > >> > sven >> > >> > >> > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug >> > > > > wrote: >> > >> > I have found that a tar pipe is much faster than rsync for this sort >> > of thing. The fastest of these is ?star? (schily tar). On average it >> > is about 2x-5x faster than rsync for doing this. After one pass with >> > this, you can use rsync for a subsequent or last pass synch.____ >> > >> > __ __ >> > >> > e.g.____ >> > >> > $ cd /export/gpfs1/foo____ >> > >> > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ >> > >> > __ __ >> > >> > This also will not preserve filesets and quotas, though. You should >> > be able to automate that with a little bit of awk, perl, or >> whatnot.____ >> > >> > __ __ >> > >> > __ __ >> > >> > *From:*gpfsug-discuss-bounces at spectrumscale.org >> > >> > [mailto:gpfsug-discuss-bounces at spectrumscale.org >> > ] *On Behalf Of >> > *Damir Krstic >> > *Sent:* Friday, January 29, 2016 2:32 PM >> > *To:* gpfsug main discussion list >> > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS >> > appliance (GPFS4.1)____ >> > >> > __ __ >> > >> > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT >> > of storage. We are in planning stages of implementation. We would >> > like to migrate date from our existing GPFS installation (around >> > 300TB) to new solution. ____ >> > >> > __ __ >> > >> > We were planning of adding ESS to our existing GPFS cluster and >> > adding its disks and then deleting our old disks and having the data >> > migrated this way. However, our existing block size on our projects >> > filesystem is 1M and in order to extract as much performance out of >> > ESS we would like its filesystem created with larger block size. >> > Besides rsync do you have any suggestions of how to do this without >> > downtime and in fastest way possible? ____ >> > >> > __ __ >> > >> > I have looked at AFM but it does not seem to migrate quotas and >> > filesets so that may not be an optimal solution. ____ >> > >> > >> > _______________________________________________ >> > gpfsug-discuss mailing list >> > gpfsug-discuss at spectrumscale.org >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > >> > >> > >> > >> > _______________________________________________ >> > gpfsug-discuss mailing list >> > gpfsug-discuss at spectrumscale.org >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > >> >> -- >> -- >> Dr Orlando Richards >> Research Services Manager >> Information Services >> IT Infrastructure Division >> Tel: 0131 650 4994 >> skype: orlando.richards >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. 
>> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Mon Feb 22 13:39:16 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Mon, 22 Feb 2016 15:39:16 +0200 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com><56AF2498.8010503@ed.ac.uk><9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> Hi AFM - Active File Management (AFM) is an asynchronous cross cluster utility It means u create new GPFS cluster - migrate the data without downtime , and when u r ready - u do last sync and cut-over. Hope this help. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM: > From: Damir Krstic > To: gpfsug main discussion list > Date: 02/22/2016 03:12 PM > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1) > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > Sorry to revisit this question - AFM seems to be the best way to do > this. I was wondering if anyone has done AFM migration. I am looking > at this wiki page for instructions: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/ > wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating% > 20Data%20Using%20AFM > and I am little confused by step 3 "cut over users" <-- does this > mean, unmount existing filesystem and point users to new filesystem? > > The reason we were looking at AFM is to not have downtime - make the > transition as seamless as possible to the end user. Not sure what, > then, AFM buys us if we still have to take "downtime" in order to > cut users over to the new system. > > Thanks, > Damir > > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic wrote: > Thanks all for great suggestions. We will most likely end up using > either AFM or some mechanism of file copy (tar/rsync etc.). > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > Along the same vein I've patched rsync to maintain source atimes in > Linux for large transitions such as this. Along with the stadnard > "patches" mod for destination atimes it is quite useful. Works in > 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > Ed Wahl > OSC > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss- > bounces at spectrumscale.org] on behalf of Orlando Richards [ > orlando.richards at ed.ac.uk] > Sent: Monday, February 01, 2016 4:25 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1) > > For what it's worth - there's a patch for rsync which IBM provided a > while back that will copy NFSv4 ACLs (maybe other stuff?). 
I put it up > on the gpfsug github here: > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > On 29/01/16 22:36, Sven Oehme wrote: > > Doug, > > > > This won't really work if you make use of ACL's or use special GPFS > > extended attributes or set quotas, filesets, etc > > so unfortunate the answer is you need to use a combination of things and > > there is work going on to make some of this simpler (e.g. for ACL's) , > > but its a longer road to get there. so until then you need to think > > about multiple aspects . > > > > 1. you need to get the data across and there are various ways to do this. > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > extended attributes and alike as it understands the GPFS internals it > > also is operating in parallel can prefetch data, etc so its a efficient > > way to do this but as already pointed out doesn't transfer quota or > > fileset informations. > > > > b) you can either use rsync or any other pipe based copy program. the > > downside is that they are typical single threaded and do a file by file > > approach, means very metadata intensive on the source as well as target > > side and cause a lot of ios on both side. > > > > c) you can use the policy engine to create a list of files to transfer > > to at least address the single threaded scan part, then partition the > > data and run multiple instances of cp or rsync in parallel, still > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > 2. you need to get ACL/EA informations over too. there are several > > command line options to dump the data and restore it, they kind of > > suffer the same problem as data transfers , which is why using AFM is > > the best way of doing this if you rely on ACL/EA informations. > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > but all require some level of scripting to do this. > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > described in the advanced admin book. > > > > sven > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > wrote: > > > > I have found that a tar pipe is much faster than rsync for this sort > > of thing. The fastest of these is ?star? (schily tar). On average it > > is about 2x-5x faster than rsync for doing this. After one pass with > > this, you can use rsync for a subsequent or last pass synch.____ > > > > __ __ > > > > e.g.____ > > > > $ cd /export/gpfs1/foo____ > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > __ __ > > > > This also will not preserve filesets and quotas, though. You should > > be able to automate that with a little bit of awk, perl, or whatnot.____ > > > > __ __ > > > > __ __ > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > ] *On Behalf Of > > *Damir Krstic > > *Sent:* Friday, January 29, 2016 2:32 PM > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1)____ > > > > __ __ > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > of storage. We are in planning stages of implementation. We would > > like to migrate date from our existing GPFS installation (around > > 300TB) to new solution. ____ > > > > __ __ > > > > We were planning of adding ESS to our existing GPFS cluster and > > adding its disks and then deleting our old disks and having the data > > migrated this way. 
However, our existing block size on our projects > > filesystem is 1M and in order to extract as much performance out of > > ESS we would like its filesystem created with larger block size. > > Besides rsync do you have any suggestions of how to do this without > > downtime and in fastest way possible? ____ > > > > __ __ > > > > I have looked at AFM but it does not seem to migrate quotas and > > filesets so that may not be an optimal solution. ____ > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > -- > Dr Orlando Richards > Research Services Manager > Information Services > IT Infrastructure Division > Tel: 0131 650 4994 > skype: orlando.richards > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From damir.krstic at gmail.com Mon Feb 22 16:11:31 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 22 Feb 2016 16:11:31 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1) In-Reply-To: <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> Message-ID: Thanks for the reply - but that explanation does not mean no downtime without elaborating on "cut over." I can do the sync via rsync or tar today but eventually I will have to cut over to the new system. Is this the case with AFM as well - once everything is synced over - cutting over means users will have to "cut over" by: 1. either mounting new AFM-synced system on all compute nodes with same mount as the old system (which means downtime to unmount the existing filesystem and mounting new filesystem) or 2. end-user training i.e. starting using new filesystem, move your own files you need because eventually we will shutdown the old filesystem. If, then, it's true that AFM requires some sort of cut over (either by disconnecting the old system and mounting new system as the old mount point, or by instruction to users to start using new filesystem at once) I am not sure that AFM gets me anything more than rsync or tar when it comes to taking a downtime (cutting over) for the end user. 
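(In outline, the staging the wiki article describes is: the new cluster serves an AFM cache fileset whose home is the old file system, the bulk of the data is pulled across in the background with prefetch, and the "cut over" is only the short final step where writes are quiesced and clients remount the new file system in place of the old one. A hedged sketch with illustrative names; the exact mmcrfileset and mmafmctl options should be checked against the documentation for your level:

# on the new (ESS) cluster: an AFM fileset whose home is the old file system
mmcrfileset essfs projects -p afmtarget=oldgpfs01:/gpfs/projects,afmmode=iw --inode-space=new
mmlinkfileset essfs projects -J /gpfs/essfs/projects
# pre-populate the cache in the background (list file built with mmapplypolicy or find)
mmafmctl essfs prefetch -j projects --list-file /tmp/projects.list
# repeat prefetch passes until the cache is close to in sync, then, in a short window:
# stop writes on the old file system, run one final prefetch, and remount clients on the new path
)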
Thanks, Damir On Mon, Feb 22, 2016 at 7:39 AM Yaron Daniel wrote: > Hi > > AFM - Active File Management (AFM) is an asynchronous cross cluster > utility > > It means u create new GPFS cluster - migrate the data without downtime , > and when u r ready - u do last sync and cut-over. > > Hope this help. > > > > Regards > > > > ------------------------------ > > > > *Yaron Daniel* 94 Em Ha'Moshavot Rd > *Server, **Storage and Data Services* > *- > Team Leader* Petach Tiqva, 49527 > *Global Technology Services* Israel > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > *IBM Israel* > > > > > > gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM: > > > From: Damir Krstic > > To: gpfsug main discussion list > > Date: 02/22/2016 03:12 PM > > > > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1) > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > Sorry to revisit this question - AFM seems to be the best way to do > > this. I was wondering if anyone has done AFM migration. I am looking > > at this wiki page for instructions: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/ > > wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating% > > 20Data%20Using%20AFM > > and I am little confused by step 3 "cut over users" <-- does this > > mean, unmount existing filesystem and point users to new filesystem? > > > > The reason we were looking at AFM is to not have downtime - make the > > transition as seamless as possible to the end user. Not sure what, > > then, AFM buys us if we still have to take "downtime" in order to > > cut users over to the new system. > > > > Thanks, > > Damir > > > > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic > wrote: > > Thanks all for great suggestions. We will most likely end up using > > either AFM or some mechanism of file copy (tar/rsync etc.). > > > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > > Along the same vein I've patched rsync to maintain source atimes in > > Linux for large transitions such as this. Along with the stadnard > > "patches" mod for destination atimes it is quite useful. Works in > > 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x > > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > > > Ed Wahl > > OSC > > > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss- > > bounces at spectrumscale.org] on behalf of Orlando Richards [ > > orlando.richards at ed.ac.uk] > > Sent: Monday, February 01, 2016 4:25 AM > > To: gpfsug-discuss at spectrumscale.org > > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1) > > > > For what it's worth - there's a patch for rsync which IBM provided a > > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up > > on the gpfsug github here: > > > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > > > > > On 29/01/16 22:36, Sven Oehme wrote: > > > Doug, > > > > > > This won't really work if you make use of ACL's or use special GPFS > > > extended attributes or set quotas, filesets, etc > > > so unfortunate the answer is you need to use a combination of things > and > > > there is work going on to make some of this simpler (e.g. for ACL's) , > > > but its a longer road to get there. so until then you need to think > > > about multiple aspects . > > > > > > 1. 
you need to get the data across and there are various ways to do > this. > > > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > > extended attributes and alike as it understands the GPFS internals it > > > also is operating in parallel can prefetch data, etc so its a efficient > > > way to do this but as already pointed out doesn't transfer quota or > > > fileset informations. > > > > > > b) you can either use rsync or any other pipe based copy program. the > > > downside is that they are typical single threaded and do a file by file > > > approach, means very metadata intensive on the source as well as target > > > side and cause a lot of ios on both side. > > > > > > c) you can use the policy engine to create a list of files to transfer > > > to at least address the single threaded scan part, then partition the > > > data and run multiple instances of cp or rsync in parallel, still > > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > > > 2. you need to get ACL/EA informations over too. there are several > > > command line options to dump the data and restore it, they kind of > > > suffer the same problem as data transfers , which is why using AFM is > > > the best way of doing this if you rely on ACL/EA informations. > > > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > > but all require some level of scripting to do this. > > > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > > described in the advanced admin book. > > > > > > sven > > > > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > > >> wrote: > > > > > > I have found that a tar pipe is much faster than rsync for this > sort > > > of thing. The fastest of these is ?star? (schily tar). On average > it > > > is about 2x-5x faster than rsync for doing this. After one pass > with > > > this, you can use rsync for a subsequent or last pass synch.____ > > > > > > __ __ > > > > > > e.g.____ > > > > > > $ cd /export/gpfs1/foo____ > > > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > > > __ __ > > > > > > This also will not preserve filesets and quotas, though. You should > > > be able to automate that with a little bit of awk, perl, or > whatnot.____ > > > > > > __ __ > > > > > > __ __ > > > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > > > >] *On Behalf Of > > > *Damir Krstic > > > *Sent:* Friday, January 29, 2016 2:32 PM > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > > appliance (GPFS4.1)____ > > > > > > __ __ > > > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > > of storage. We are in planning stages of implementation. We would > > > like to migrate date from our existing GPFS installation (around > > > 300TB) to new solution. ____ > > > > > > __ __ > > > > > > We were planning of adding ESS to our existing GPFS cluster and > > > adding its disks and then deleting our old disks and having the > data > > > migrated this way. However, our existing block size on our projects > > > filesystem is 1M and in order to extract as much performance out of > > > ESS we would like its filesystem created with larger block size. > > > Besides rsync do you have any suggestions of how to do this without > > > downtime and in fastest way possible? 
____ > > > > > > __ __ > > > > > > I have looked at AFM but it does not seem to migrate quotas and > > > filesets so that may not be an optimal solution. ____ > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > -- > > -- > > Dr Orlando Richards > > Research Services Manager > > Information Services > > IT Infrastructure Division > > Tel: 0131 650 4994 > > skype: orlando.richards > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From Luke.Raimbach at crick.ac.uk Wed Feb 24 14:05:07 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Wed, 24 Feb 2016 14:05:07 +0000 Subject: [gpfsug-discuss] AFM and Placement Policies Message-ID: Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. 
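For context on the question above: placement rules are installed with mmchpolicy, and on code levels where placement rules accept an ACTION clause they can tag files with an extended attribute at create time; the home file system can carry its own, different rule. A sketch only - the pool, fileset and attribute names are made up, and the SetXattr/ACTION syntax should be verified against the policy chapter for your release:

# minimal placement policy for the home file system (fsA), installed with mmchpolicy
cat > /tmp/home-placement.pol <<'EOF'
RULE 'tagHomeFiles' SET POOL 'data' ACTION(SetXattr('user.origin','home')) WHERE FILESET_NAME LIKE 'home%'
RULE 'default' SET POOL 'data'
EOF
mmchpolicy fsA /tmp/home-placement.pol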
From dhildeb at us.ibm.com Wed Feb 24 19:16:54 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 24 Feb 2016 11:16:54 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: Message-ID: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss
From Gethyn.Longworth at Rolls-Royce.com Thu Feb 25 10:42:39 2016 From: Gethyn.Longworth at Rolls-Royce.com (Longworth, Gethyn) Date: Thu, 25 Feb 2016 10:42:39 +0000 Subject: [gpfsug-discuss] Integration with Active Directory Message-ID: Hi all, I'm new to both GPFS and to this mailing list, so I thought I'd introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project; part of my remit is to deliver a data system. We selected GPFS (sorry, Spectrum Scale) for this: three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library backup. My current issue is to do with integration into Active Directory. I've configured my three node test cluster with two protocol nodes and a quorum node (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can't use RFC2307, as our IT department don't understand what this is), but the problem I'm having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate - I can run "id" on that node with a domain account and it provides the correct answer - whereas the other will not and denies any knowledge of the domain or user. From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected - a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! Cheers, Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE Please consider the environment before printing this email The data contained in, or attached to, this e-mail, may contain confidential information.
If you have received it in error you should notify the sender immediately by reply e-mail, delete the message from your system and contact +44 (0) 3301235850 (Security Operations Centre) if you need assistance. Please do not copy it for any purpose, or disclose its contents to any other person. An e-mail response to this address may be subject to interception or monitoring for operational reasons or for lawful business practices. (c) 2016 Rolls-Royce plc Registered office: 62 Buckingham Gate, London SW1E 6AT Company number: 1003142. Registered in England. From S.J.Thompson at bham.ac.uk Thu Feb 25 13:19:12 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 25 Feb 2016 13:19:12 +0000 Subject: [gpfsug-discuss] Integration with Active Directory Message-ID: Hi Gethyn, >From what I recall, CTDB used underneath is used to share the secret and only the primary named machine is joined, but CTDB and CES should work this backend part out for you. I do have a question though, do you want to have consistent UIDs across other systems? For example if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as out AD doesn't contain UIDs either). Simon From: > on behalf of "Longworth, Gethyn" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Thursday, 25 February 2016 at 10:42 To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Integration with Active Directory Hi all, I?m new to both GPFS and to this mailing list, so I thought I?d introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale?) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I?ve configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can?t use RFC2307, as our IT department don?t understand what this is), but the problem I?m having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate ? I can run ?id? on that node with a domain account and it provides the correct answer ? whereas the other will not and denies any knowledge of the domain or user. From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected ? a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! Cheers, Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email -------------- next part -------------- An HTML attachment was scrubbed... 
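Following on from Simon's reply: with --data-access-method file the cluster joins the domain once under the CES netbios name, and the clustered SMB layer shares that machine account across the protocol nodes, so there is no per-node join. For reference, a file-protocol AD setup with automatic ID mapping looks roughly like the sketch below (server, admin and netbios names are placeholders); on 4.2 the same command also accepts LDAP-based ID mapping (--ldapmap-domains) if consistent UIDs across other systems are needed.

mmuserauth service create --type ad --data-access-method file \
    --servers ad1.example.com --user-name administrator \
    --netbios-name cescluster --idmap-role master \
    --idmap-range 10000000-299999999 --idmap-range-size 1000000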
URL: From poppe at us.ibm.com Thu Feb 25 17:01:00 2016 From: poppe at us.ibm.com (Monty Poppe) Date: Thu, 25 Feb 2016 11:01:00 -0600 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: References: Message-ID: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> All CES nodes should operate consistently across the cluster. Here are a few tips on debugging: /usr/lpp/mmfs/bin/wbinfo -p to ensure winbind is running properly /usr/lpp/mmfs/bin/wbinfo -P (capital P), to ensure winbind can communicate with AD server ensure the first nameserver in /etc/resolv.conf points to your AD server (check all nodes) mmuserauth service check --server-reachability for a more thorough validation that all nodes can communicate to the authentication server If you need to look at samba logs (/var/adm/ras/log.smbd & log.wb-) to see what's going on, change samba log levels issue: /usr/lpp/mmfs/bin/net conf setparm global 'log level' 3. Don't forget to set back to 0 or 1 when you are done! If you're willing to go with a later release, AD authentication with LDAP ID mapping has been added as a feature in the 4.2 release. ( https://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_adwithldap.htm?lang=en ) Monty Poppe Spectrum Scale Test poppe at us.ibm.com 512-286-8047 T/L 363-8047 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 02/25/2016 07:19 AM Subject: Re: [gpfsug-discuss] Integration with Active Directory Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Gethyn, From what I recall, CTDB used underneath is used to share the secret and only the primary named machine is joined, but CTDB and CES should work this backend part out for you. I do have a question though, do you want to have consistent UIDs across other systems? For example if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as out AD doesn't contain UIDs either). Simon From: on behalf of "Longworth, Gethyn" Reply-To: "gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Thursday, 25 February 2016 at 10:42 To: "gpfsug-discuss at spectrumscale.org" Subject: [gpfsug-discuss] Integration with Active Directory Hi all, I?m new to both GPFS and to this mailing list, so I thought I?d introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale?) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I?ve configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can?t use RFC2307, as our IT department don?t understand what this is), but the problem I?m having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate ? I can run ?id? on that node with a domain account and it provides the correct answer ? whereas the other will not and denies any knowledge of the domain or user. 
From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected ? a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! Cheers, Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From christof.schmitt at us.ibm.com Thu Feb 25 17:46:02 2016 From: christof.schmitt at us.ibm.com (Christof Schmitt) Date: Thu, 25 Feb 2016 17:46:02 +0000 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>, Message-ID: <201602251746.u1PHk8Uw012701@d01av03.pok.ibm.com> An HTML attachment was scrubbed... URL: From Gethyn.Longworth at Rolls-Royce.com Fri Feb 26 09:04:50 2016 From: Gethyn.Longworth at Rolls-Royce.com (Longworth, Gethyn) Date: Fri, 26 Feb 2016 09:04:50 +0000 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> Message-ID: Monty, Simon, Christof, Many thanks for your help. I found that the firewall wasn?t configured correctly ? I made the assumption that the samba ?service? enabled the ctdb port (4379 the next person searching for this) as well ? enabling it manually and restarting the node has resolved it. I need to investigate the issue of consistent uids / gids between my linux machines. Obviously very easy when you have full control over the AD, but as ours is a local AD (which I can control) and most of the user IDs coming over on a trust it is much more tricky. Has anyone done an ldap set up where they are effectively adding extra user info (like uids / gids / samba info) to existing AD users without messing with the original AD? Thanks, Gethyn From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Monty Poppe Sent: 25 February 2016 17:01 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Integration with Active Directory All CES nodes should operate consistently across the cluster. Here are a few tips on debugging: /usr/lpp/mmfs/bin/wbinfo-p to ensure winbind is running properly /usr/lpp/mmfs/bin/wbinfo-P (capital P), to ensure winbind can communicate with AD server ensure the first nameserver in /etc/resolv.conf points to your AD server (check all nodes) mmuserauth service check --server-reachability for a more thorough validation that all nodes can communicate to the authentication server If you need to look at samba logs (/var/adm/ras/log.smbd & log.wb-) to see what's going on, change samba log levels issue: /usr/lpp/mmfs/bin/net conf setparm global 'log level' 3. Don't forget to set back to 0 or 1 when you are done! 
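A consolidated sketch of the fix and the checks discussed in this thread, for a CES protocol node on RHEL 7 with firewalld. The CTDB port number and the wbinfo/mmuserauth/net conf commands are the ones quoted above; the firewall-cmd calls and the ordering are illustrative assumptions, not an official procedure:

    # open the CTDB port that the stock samba firewalld service does not cover
    firewall-cmd --permanent --add-port=4379/tcp
    firewall-cmd --reload

    # verify winbind and AD reachability on every protocol node
    /usr/lpp/mmfs/bin/wbinfo -p            # is winbind responding?
    /usr/lpp/mmfs/bin/wbinfo -P            # can winbind talk to the AD server?
    grep nameserver /etc/resolv.conf       # first entry should be the AD server
    mmuserauth service check --server-reachability

    # temporarily raise samba logging while debugging, then set it back
    /usr/lpp/mmfs/bin/net conf setparm global 'log level' 3
    tail /var/adm/ras/log.smbd /var/adm/ras/log.wb-*
    /usr/lpp/mmfs/bin/net conf setparm global 'log level' 1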
If you're willing to go with a later release, AD authentication with LDAP ID mapping has been added as a feature in the 4.2 release. ( https://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_adwithldap.htm?lang=en) Monty Poppe Spectrum Scale Test poppe at us.ibm.com 512-286-8047 T/L 363-8047 From: "Simon Thompson (Research Computing - IT Services)" To: gpfsug main discussion list Date: 02/25/2016 07:19 AM Subject: Re: [gpfsug-discuss] Integration with Active Directory Sent by: gpfsug-discuss-bounces at spectrumscale.org _____ Hi Gethyn, >From what I recall, CTDB used underneath is used to share the secret and only the primary named machine is joined, but CTDB and CES should work this backend part out for you. I do have a question though, do you want to have consistent UIDs across other systems? For example if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as out AD doesn't contain UIDs either). Simon From: < gpfsug-discuss-bounces at spectrumscale.org> on behalf of "Longworth, Gethyn" < Gethyn.Longworth at Rolls-Royce.com> Reply-To: " gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Date: Thursday, 25 February 2016 at 10:42 To: " gpfsug-discuss at spectrumscale.org" < gpfsug-discuss at spectrumscale.org> Subject: [gpfsug-discuss] Integration with Active Directory Hi all, I?m new to both GPFS and to this mailing list, so I thought I?d introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project, part of my remit is to deliver a data system. We selected GPFS (sorry Spectrum Scale?) for this three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other by a pair of IBM SANs and a tape library back up. My current issue is to do with integration into Active Directory. I?ve configured my three node test cluster with two protocol nodes and a quorum (version 4.2.0.1 on RHEL 7.1) as the master for an automated id mapping system (we can?t use RFC2307, as our IT department don?t understand what this is), but the problem I?m having is to do with domain joins. The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate ? I can run ?id? on that node with a domain account and it provides the correct answer ? whereas the other will not and denies any knowledge of the domain or user. From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected ? a single entry (e.g. the cluster name) can only have one SID. So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth? As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated! 
Cheers, Gethyn Longworth MEng CEng MIET |Consultant Systems Engineer | AEROSPACE P Please consider the environment before printing this email _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 6181 bytes Desc: not available URL: -------------- next part -------------- The data contained in, or attached to, this e-mail, may contain confidential information. If you have received it in error you should notify the sender immediately by reply e-mail, delete the message from your system and contact +44 (0) 3301235850 (Security Operations Centre) if you need assistance. Please do not copy it for any purpose, or disclose its contents to any other person. An e-mail response to this address may be subject to interception or monitoring for operational reasons or for lawful business practices. (c) 2016 Rolls-Royce plc Registered office: 62 Buckingham Gate, London SW1E 6AT Company number: 1003142. Registered in England. From S.J.Thompson at bham.ac.uk Fri Feb 26 10:12:21 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 26 Feb 2016 10:12:21 +0000 Subject: [gpfsug-discuss] Integration with Active Directory In-Reply-To: References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com> Message-ID: In theory you can do this with LDS ... My solution though is to run LDAP server (with replication) across the CTDB server nodes. Each node then points to itself and the other CTDB servers for the SMB config. We populate it with users and groups, names copied in from AD. Its a bit of a fudge to make it work, and we found for auxiliary groups that winbind wasn't doing quite what it should, so have to have the SIDs populated in the local LDAP server config. Simon From: > on behalf of "Longworth, Gethyn" > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Friday, 26 February 2016 at 09:04 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] Integration with Active Directory Has anyone done an ldap set up where they are effectively adding extra user info (like uids / gids / samba info) to existing AD users without messing with the original AD? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Fri Feb 26 10:52:31 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Fri, 26 Feb 2016 10:52:31 +0000 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> References: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Message-ID: Hi Dean, Thanks for this ? I had hoped this was the case. However what I?m now wondering is, if we operate the cache in independent-writer mode and the new file was pushed back home (conforming to cache, then home placement policies), then is subsequently evicted from the cache; if it needs to be pulled back for local operations in the cache, will the cache cluster see this file as ?new? for the third time? Cheers, Luke. 
From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Dean Hildebrand Sent: 24 February 2016 19:17 To: gpfsug main discussion list Cc: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM and Placement Policies Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center [Inactive hide details for Luke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM S]Luke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache wri From: Luke Raimbach > To: gpfsug main discussion list > Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org ________________________________ Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 105 bytes Desc: image001.gif URL: From dhildeb at us.ibm.com Fri Feb 26 18:58:47 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 26 Feb 2016 10:58:47 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Message-ID: <201602261907.u1QJ7FZb019973@d03av03.boulder.ibm.com> Hi Luke, Cache eviction simply frees up space in the cache, but the inode/file is always the same. It does not delete and recreate the file in the cache. This is why you can continue to view files in the cache namespace even if they are evicted. Dean Hildebrand IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/26/2016 02:52 AM Subject: Re: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi Dean, Thanks for this ? 
I had hoped this was the case. However what I?m now wondering is, if we operate the cache in independent-writer mode and the new file was pushed back home (conforming to cache, then home placement policies), then is subsequently evicted from the cache; if it needs to be pulled back for local operations in the cache, will the cache cluster see this file as ?new? for the third time? Cheers, Luke. From: gpfsug-discuss-bounces at spectrumscale.org [ mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Dean Hildebrand Sent: 24 February 2016 19:17 To: gpfsug main discussion list Cc: gpfsug main discussion list Subject: Re: [gpfsug-discuss] AFM and Placement Policies Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center Inactive hide details for Luke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM SLuke Raimbach ---02/24/2016 06:05:43 AM---Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache wri From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From Luke.Raimbach at crick.ac.uk Mon Feb 29 14:31:57 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Mon, 29 Feb 2016 14:31:57 +0000 Subject: [gpfsug-discuss] AFM and Symbolic Links Message-ID: Hi All, Quick one: Does AFM follow symbolic links present at home in the cache fileset? Cheers, Luke. 
Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From dhildeb at us.ibm.com Mon Feb 29 16:59:11 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Mon, 29 Feb 2016 08:59:11 -0800 Subject: [gpfsug-discuss] AFM and Symbolic Links In-Reply-To: References: Message-ID: <201602291701.u1TH1owF031283@d03av05.boulder.ibm.com> Hi Luke, Quick response.... yes :) Dean From: Luke Raimbach To: gpfsug main discussion list Date: 02/29/2016 06:32 AM Subject: [gpfsug-discuss] AFM and Symbolic Links Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, Quick one: Does AFM follow symbolic links present at home in the cache fileset? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: 
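For readers following the AFM and Placement Policies thread above: a minimal sketch of the kind of file-placement rule being discussed there, i.e. one that tags files in the cache fileset with an extended attribute at creation time. The pool, fileset and attribute names below are invented for illustration, and the ACTION/SetXattr form assumes a Spectrum Scale level recent enough to allow setting extended attributes from a placement rule (check the ILM chapter of the Advanced Administration Guide for your release):

    # policy.rules -- install on the cache file system with: mmchpolicy fsB policy.rules
    RULE 'tagNewFiles'
        SET POOL 'data'
        FOR FILESET ('cache')
        ACTION (SetXattr('user.origin','fsB-cache'))

    # check the attribute on a newly created file
    getfattr -d /fsB/cache/new.file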
From Paul.Tomlinson at awe.co.uk Mon Feb 1 10:06:15 2016 From: Paul.Tomlinson at awe.co.uk (Paul.Tomlinson at awe.co.uk) Date: Mon, 1 Feb 2016 10:06:15 +0000 Subject: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602011006.u11A6Mui009286@msw1.awe.co.uk> Hi Simon, We would like to send Mark Roberts (HPC) from AWE if any places are available. If there any places I'm sure will be willing to provide a list of topics that interest us. Best Regards Paul Tomlinson High Performance Computing Direct: 0118 985 8060 or 0118 982 4147 Mobile 07920783365 VPN: 88864 AWE, Aldermaston, Reading, RG7 4PR From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of "Spectrum scale UG Chair (Simon Thompson)"< Sent: 19 January 2016 17:14 To: gpfsug-discuss at spectrumscale.org Subject: EXTERNAL: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Dear All, We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016, 11am-3:30pm. The event will be held in central Oxford. The agenda promises to be hands on and give you the opportunity to speak face to face with the developers of Spectrum Scale. Guideline agenda: * TBC - please provide input on what you'd like to see! Lunch and refreshments will be provided. Please can you let me know by email if you are interested in attending by Wednesday 17th February. Thanks and we hope to see you there. Thanks to Andy at OERC for offering to host. Simon The information in this email and in any attachment(s) is commercial in confidence. If you are not the named addressee(s) or if you receive this email in error then any distribution, copying or use of this communication or the information in it is strictly prohibited. Please notify us immediately by email at admin.internet(at)awe.co.uk, and then delete this message from your computer. While attachments are virus checked, AWE plc does not accept any liability in respect of any virus which is not detected. AWE Plc Registered in England and Wales Registration No 02763902 AWE, Aldermaston, Reading, RG7 4PR -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.kidger at uk.ibm.com Mon Feb 1 10:18:51 2016 From: daniel.kidger at uk.ibm.com (Daniel Kidger) Date: Mon, 1 Feb 2016 10:18:51 +0000 Subject: [gpfsug-discuss] EXTERNAL: Next meet the devs - 24th Feb 2016 In-Reply-To: <201602011006.u11A6Mui009286@msw1.awe.co.uk> References: <201602011006.u11A6Mui009286@msw1.awe.co.uk>, <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602011018.u11AIuUt009534@d06av09.portsmouth.uk.ibm.com> An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Mon Feb 1 17:29:07 2016 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Mon, 1 Feb 2016 18:29:07 +0100 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Message-ID: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. 
Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Mon Feb 1 18:28:01 2016 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Mon, 1 Feb 2016 10:28:01 -0800 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <20160129170401.0ec9f72e@uphs.upenn.edu> References: <20160129170401.0ec9f72e@uphs.upenn.edu> Message-ID: <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> > What's on a 'dataOnly' GPFS 3.5.x NSD besides data and the NSD disk > header, if anything? That's it. In some cases there may also be a copy of the file system descriptor, but that doesn't really matter in your case. > I'm trying to understand some file corruption, and one potential > explanation would be if a (non-GPFS) server wrote to a LUN used as a > GPFS dataOnly NSD. > > We are not seeing any 'I/O' or filesystem errors, mmfsck (online) doesn't > detect any errors, and all NSDs are usable. However, some files seem to > have changes in content, with no changes in metadata (modify timestamp, > ownership), including files with the GPFS "immutable" ACL set. This is all consistent with the content on a dataOnly disk being overwritten outside of GPFS. > If an NSD was changed outside of GPFS control, would mmfsck detect > filesystem errors, or would the GPFS filesystem be consistent, even > though the content of some of the data blocks was altered? No. mmfsck can detect metadata corruption, but has no way to tell whether a data block has correct content or garbage. > Is there any metadata or checksum information maintained by GPFS, or any > means of doing a consistency check of the contents of files that would > correlate with blocks stored on a particular NSD? GPFS on top of traditional disks/RAID LUNs doesn't checksum data blocks, and thus can't tell whether a data block is good or bad. GPFS Native RAID has very strong on-disk data checksumming, OTOH. yuri -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From liuk at us.ibm.com Mon Feb 1 18:26:43 2016 From: liuk at us.ibm.com (Kenneth Liu) Date: Mon, 1 Feb 2016 10:26:43 -0800 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 In-Reply-To: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> References: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> Message-ID: <201602011838.u11Ic39I004064@d03av02.boulder.ibm.com> And ISKLM to manage the encryption keys. Kenneth Liu Software Defined Infrastructure -- Spectrum Storage, Cleversafe & Platform Computing Sales Address: 4000 Executive Parkway San Ramon, CA 94583 Mobile #: (510) 584-7657 Email: liuk at us.ibm.com From: "Frank Kraemer" To: gpfsug-discuss at gpfsug.org Date: 02/01/2016 09:30 AM Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Sent by: gpfsug-discuss-bounces at spectrumscale.org IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From liuk at us.ibm.com Mon Feb 1 18:26:43 2016 From: liuk at us.ibm.com (Kenneth Liu) Date: Mon, 1 Feb 2016 10:26:43 -0800 Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 In-Reply-To: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> References: <201602011729.u11HTXxY020908@d06av04.portsmouth.uk.ibm.com> Message-ID: <201602011838.u11Ic6D2004449@d03av02.boulder.ibm.com> And ISKLM to manage the encryption keys. 
Kenneth Liu Software Defined Infrastructure -- Spectrum Storage, Cleversafe & Platform Computing Sales Address: 4000 Executive Parkway San Ramon, CA 94583 Mobile #: (510) 584-7657 Email: liuk at us.ibm.com From: "Frank Kraemer" To: gpfsug-discuss at gpfsug.org Date: 02/01/2016 09:30 AM Subject: [gpfsug-discuss] Spectrum Scale *News* - IBM Elastic Storage Server (ESS) Advanced Edition v4.0 Sent by: gpfsug-discuss-bounces at spectrumscale.org IBM Elastic Storage Server (ESS) Advanced Edition ver 4.0 solution is composed of various components tested together for compatibility and correctness. Key components are RHEL, IBM Spectrum Scale RAID V4.2.0-1, and Power System Firmware. Update Name: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Fix ID: ESS_ADV_BASEIMAGE-4.0.0-power-Linux Publication Date: 29 January 2016 Summary of changes in ESS ver 4.0 a) ESS core - IBM Spectrum Scale RAID V4.2.0-1 - Updated GUI b) Support of Red Hat Enterprise Linux 7.1 - No changes from 3.0.x or 3.5.x c) Support of MLNX_OFED_LINUX-3.1-1.0.6.1 - Updated from 3.x.y d) Install Toolkit - Updated Install Toolkit e) Updated firmware rpm - IP RAID Adapter FW - Host Adapter FW - Enclosure and drive FW Download: (612 MB) http://www.ibm.com/support/fixcentral/swg/quickorder?product=ibm/StorageSoftware/IBM +Spectrum+Scale +RAID&function=fixid&fixids=ESS_ADV_BASEIMAGE-4.0.0-power-Linux README: http://www-01.ibm.com/support/docview.wss?rs=0&uid=isg400002500 Deployment and Administration Guides are available in IBM Knowledge Center. http://www.ibm.com/support/knowledgecenter/SSYSP8/sts_welcome.html - Elastic Storage Server: Quick Deployment Guide - Deploying the Elastic Storage Server - IBM Spectrum Scale RAID: Administration Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ewahl at osc.edu Mon Feb 1 18:39:12 2016 From: ewahl at osc.edu (Wahl, Edward) Date: Mon, 1 Feb 2016 18:39:12 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: <56AF2498.8010503@ed.ac.uk> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> , <56AF2498.8010503@ed.ac.uk> Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Along the same vein I've patched rsync to maintain source atimes in Linux for large transitions such as this. Along with the stadnard "patches" mod for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff Ed Wahl OSC ________________________________________ From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [orlando.richards at ed.ac.uk] Sent: Monday, February 01, 2016 4:25 AM To: gpfsug-discuss at spectrumscale.org Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) For what it's worth - there's a patch for rsync which IBM provided a while back that will copy NFSv4 ACLs (maybe other stuff?). 
I put it up on the gpfsug github here: https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync On 29/01/16 22:36, Sven Oehme wrote: > Doug, > > This won't really work if you make use of ACL's or use special GPFS > extended attributes or set quotas, filesets, etc > so unfortunate the answer is you need to use a combination of things and > there is work going on to make some of this simpler (e.g. for ACL's) , > but its a longer road to get there. so until then you need to think > about multiple aspects . > > 1. you need to get the data across and there are various ways to do this. > > a) AFM is the simplest of all as it not just takes care of ACL's and > extended attributes and alike as it understands the GPFS internals it > also is operating in parallel can prefetch data, etc so its a efficient > way to do this but as already pointed out doesn't transfer quota or > fileset informations. > > b) you can either use rsync or any other pipe based copy program. the > downside is that they are typical single threaded and do a file by file > approach, means very metadata intensive on the source as well as target > side and cause a lot of ios on both side. > > c) you can use the policy engine to create a list of files to transfer > to at least address the single threaded scan part, then partition the > data and run multiple instances of cp or rsync in parallel, still > doesn't fix the ACL / EA issues, but the data gets there faster. > > 2. you need to get ACL/EA informations over too. there are several > command line options to dump the data and restore it, they kind of > suffer the same problem as data transfers , which is why using AFM is > the best way of doing this if you rely on ACL/EA informations. > > 3. transfer quota / fileset infos. there are several ways to do this, > but all require some level of scripting to do this. > > if you have TSM/HSM you could also transfer the data using SOBAR it's > described in the advanced admin book. > > sven > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > wrote: > > I have found that a tar pipe is much faster than rsync for this sort > of thing. The fastest of these is ?star? (schily tar). On average it > is about 2x-5x faster than rsync for doing this. After one pass with > this, you can use rsync for a subsequent or last pass synch.____ > > __ __ > > e.g.____ > > $ cd /export/gpfs1/foo____ > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > __ __ > > This also will not preserve filesets and quotas, though. You should > be able to automate that with a little bit of awk, perl, or whatnot.____ > > __ __ > > __ __ > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > ] *On Behalf Of > *Damir Krstic > *Sent:* Friday, January 29, 2016 2:32 PM > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1)____ > > __ __ > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > of storage. We are in planning stages of implementation. We would > like to migrate date from our existing GPFS installation (around > 300TB) to new solution. ____ > > __ __ > > We were planning of adding ESS to our existing GPFS cluster and > adding its disks and then deleting our old disks and having the data > migrated this way. However, our existing block size on our projects > filesystem is 1M and in order to extract as much performance out of > ESS we would like its filesystem created with larger block size. 
> Besides rsync do you have any suggestions of how to do this without > downtime and in fastest way possible? ____ > > __ __ > > I have looked at AFM but it does not seem to migrate quotas and > filesets so that may not be an optimal solution. ____ > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Research Services Manager Information Services IT Infrastructure Division Tel: 0131 650 4994 skype: orlando.richards The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From makaplan at us.ibm.com Mon Feb 1 18:44:50 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Mon, 1 Feb 2016 13:44:50 -0500 Subject: [gpfsug-discuss] what's on a 'dataOnly' disk? In-Reply-To: <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> References: <20160129170401.0ec9f72e@uphs.upenn.edu> <201602011828.u11ISGDS029189@d01av04.pok.ibm.com> Message-ID: <201602011844.u11IirBd015334@d03av01.boulder.ibm.com> Just to add... Spectrum Scale is no different than most other file systems in this respect. It assumes the disk system and network systems will detect I/O errors, including data corruption. And it usually will ... but there are, as you've discovered, scenarios where it can not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Mon Feb 1 19:18:22 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 1 Feb 2016 19:18:22 +0000 Subject: [gpfsug-discuss] Question on FPO node - NSD recovery Message-ID: <427E3540-585D-4DD9-9E41-29C222548E03@nuance.com> When a node that?s part of an FPO file system (local disks) and the node is rebooted ? the NSDs come up as ?down? until I manually starts them. GPFS start on the node but the NSDs stay down. Is this the expected behavior or is there a config setting I missed somewhere? Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From kraemerf at de.ibm.com Tue Feb 2 08:23:43 2016 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Tue, 2 Feb 2016 09:23:43 +0100 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction Message-ID: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> by Nils Haustein, see at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5334 Abstract: This presentation gives a short overview about the IBM Spectrum Family and briefly introduces IBM Spectrum Protect? (Tivoli Storage Manager, TSM) and IBM Spectrum Scale? (General Parallel File System, GPFS) in more detail. Subsequently it presents a solution integrating these two components and outlines its advantages. It further discusses use cases and deployment options. Last but not least this presentation elaborates on the client values running multiple Spectrum Protect instance in a Spectrum Scale cluster and presents performance test results highlighting that this solution scales with the growing data protection demands. 
Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tomasz.Wolski at ts.fujitsu.com Wed Feb 3 08:10:32 2016 From: Tomasz.Wolski at ts.fujitsu.com (Tomasz.Wolski at ts.fujitsu.com) Date: Wed, 3 Feb 2016 08:10:32 +0000 Subject: [gpfsug-discuss] DMAPI multi-thread safe Message-ID: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> Hi Experts :) Could you please tell me if the DMAPI implementation for GPFS is multi-thread safe? Are there any limitation towards using multiple threads within a single DM application process? For example: DM events are processed by multiple threads, which call dm* functions for manipulating file attributes - will there be any problem when two threads try to access the same file at the same time? Is the libdmapi thread safe? Best regards, Tomasz Wolski -------------- next part -------------- An HTML attachment was scrubbed... URL: From stschmid at de.ibm.com Wed Feb 3 08:41:27 2016 From: stschmid at de.ibm.com (Stefan Schmidt) Date: Wed, 3 Feb 2016 09:41:27 +0100 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction In-Reply-To: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> References: <201602020823.u128Nv9h015238@d06av05.portsmouth.uk.ibm.com> Message-ID: <201602030841.u138fY2l007402@d06av06.portsmouth.uk.ibm.com> Hi all, I want to add that IBM Spectrum Scale Raid ( ESS/GNR) is missing in the table I think. I know it's now a HW solution but the GNR package I thought would be named IBM Spectrum Scale Raid. Mit freundlichen Gr??en / Kind regards Stefan Schmidt Scrum Master IBM Spectrum Scale GUI / Senior IT Architect /PMP - Dept. M069 / IBM Spectrum Scale Software Development IBM Systems Group IBM Deutschland Phone: +49-6131-84-3465 IBM Deutschland Mobile: +49-170-6346601 Hechtsheimer Str. 2 E-Mail: stschmid at de.ibm.com 55131 Mainz Germany IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 From: Frank Kraemer/Germany/IBM at IBMDE To: gpfsug-discuss at gpfsug.org Date: 02.02.2016 09:24 Subject: [gpfsug-discuss] IBM Spectrum Protect with IBM Spectrum Scale - Introduction Sent by: gpfsug-discuss-bounces at spectrumscale.org by Nils Haustein, see at: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5334 Abstract: This presentation gives a short overview about the IBM Spectrum Family and briefly introduces IBM Spectrum Protect? (Tivoli Storage Manager, TSM) and IBM Spectrum Scale? (General Parallel File System, GPFS) in more detail. Subsequently it presents a solution integrating these two components and outlines its advantages. It further discusses use cases and deployment options. Last but not least this presentation elaborates on the client values running multiple Spectrum Protect instance in a Spectrum Scale cluster and presents performance test results highlighting that this solution scales with the growing data protection demands. Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 
2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From robert at strubi.ox.ac.uk Wed Feb 3 16:53:59 2016 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Wed, 3 Feb 2016 16:53:59 +0000 (GMT) Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: <201602031653.060161@mail.strubi.ox.ac.uk> Hi Simon, I'll certainly be interested in wandering into town to attend this... please register me or whatever has to be done. Regards, Robert -- Dr. Robert Esnouf, University Research Lecturer, Head of Research Computing Core, NDM Research Computing Strategy Officer Room 10/028, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Email: robert at strubi.ox.ac.uk / robert at well.ox.ac.uk Tel: (+44) - 1865 - 287783 -------------- next part -------------- An embedded message was scrubbed... From: "Spectrum scale UG Chair (Simon Thompson)" Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Date: Tue, 19 Jan 2016 17:13:42 +0000 Size: 5334 URL: From wsawdon at us.ibm.com Wed Feb 3 18:22:48 2016 From: wsawdon at us.ibm.com (Wayne Sawdon) Date: Wed, 3 Feb 2016 10:22:48 -0800 Subject: [gpfsug-discuss] DMAPI multi-thread safe In-Reply-To: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> References: <44d08c1749b9482787f5b90c8b7d6dbb@R01UKEXCASM223.r01.fujitsu.local> Message-ID: <201602031822.u13IMv3c017365@d03av05.boulder.ibm.com> > From: "Tomasz.Wolski at ts.fujitsu.com" > > Could you please tell me if the DMAPI implementation for GPFS is > multi-thread safe? Are there any limitation towards using multiple > threads within a single DM application process? > For example: DM events are processed by multiple threads, which call > dm* functions for manipulating file attributes ? will there be any > problem when two threads try to access the same file at the same time? > > Is the libdmapi thread safe? > With the possible exception of dm_init_service it should be thread safe. Dmapi does offer access rights to allow or prevent concurrent access to a file. If you are not using the access rights, internally Spectrum Scale will serialize the dmapi calls like it would serialize for posix -- some calls will proceed in parallel (e.g. reads, non-overlapping writes) and some will be serialized (e.g. EA updates). -Wayne -------------- next part -------------- An HTML attachment was scrubbed... URL: From damir.krstic at gmail.com Thu Feb 4 21:15:56 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Thu, 04 Feb 2016 21:15:56 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Thanks all for great suggestions. We will most likely end up using either AFM or some mechanism of file copy (tar/rsync etc.). 
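A rough sketch of the "policy engine plus parallel rsync" approach (option c in the thread quoted below). File system names, paths and the degree of parallelism are assumptions, and plain rsync still leaves NFSv4 ACLs, GPFS extended attributes, quotas and filesets to be handled separately (or use the IBM-patched rsync from the gpfsug-tools repository mentioned earlier in this thread):

    # 1. build the file list with the policy engine instead of a serial scan
    cat > /tmp/listall.pol <<'EOF'
    RULE EXTERNAL LIST 'tocopy' EXEC ''
    RULE 'all' LIST 'tocopy'
    EOF
    mmapplypolicy /gpfs/old -P /tmp/listall.pol -f /tmp/migr -I defer

    # 2. the deferred list lines look like "inode gen snapid -- /path" (check a
    #    few lines first); keep just the path, relative to the source mount
    awk -F ' -- ' '{ p=$2; sub("^/gpfs/old/","",p); print p }' \
        /tmp/migr.list.tocopy > /tmp/allfiles

    # 3. split the list and run several rsync instances in parallel
    split -n l/8 /tmp/allfiles /tmp/chunk.
    ls /tmp/chunk.* | xargs -P8 -I{} rsync -a --files-from={} /gpfs/old/ /gpfs/new/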
On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > Along the same vein I've patched rsync to maintain source atimes in Linux > for large transitions such as this. Along with the stadnard "patches" mod > for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. > I've not yet ported it to 3.1.x > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > Ed Wahl > OSC > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [ > gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [ > orlando.richards at ed.ac.uk] > Sent: Monday, February 01, 2016 4:25 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance > (GPFS4.1) > > For what it's worth - there's a patch for rsync which IBM provided a > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up > on the gpfsug github here: > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > On 29/01/16 22:36, Sven Oehme wrote: > > Doug, > > > > This won't really work if you make use of ACL's or use special GPFS > > extended attributes or set quotas, filesets, etc > > so unfortunate the answer is you need to use a combination of things and > > there is work going on to make some of this simpler (e.g. for ACL's) , > > but its a longer road to get there. so until then you need to think > > about multiple aspects . > > > > 1. you need to get the data across and there are various ways to do this. > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > extended attributes and alike as it understands the GPFS internals it > > also is operating in parallel can prefetch data, etc so its a efficient > > way to do this but as already pointed out doesn't transfer quota or > > fileset informations. > > > > b) you can either use rsync or any other pipe based copy program. the > > downside is that they are typical single threaded and do a file by file > > approach, means very metadata intensive on the source as well as target > > side and cause a lot of ios on both side. > > > > c) you can use the policy engine to create a list of files to transfer > > to at least address the single threaded scan part, then partition the > > data and run multiple instances of cp or rsync in parallel, still > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > 2. you need to get ACL/EA informations over too. there are several > > command line options to dump the data and restore it, they kind of > > suffer the same problem as data transfers , which is why using AFM is > > the best way of doing this if you rely on ACL/EA informations. > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > but all require some level of scripting to do this. > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > described in the advanced admin book. > > > > sven > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > wrote: > > > > I have found that a tar pipe is much faster than rsync for this sort > > of thing. The fastest of these is ?star? (schily tar). On average it > > is about 2x-5x faster than rsync for doing this. 
After one pass with > > this, you can use rsync for a subsequent or last pass synch.____ > > > > __ __ > > > > e.g.____ > > > > $ cd /export/gpfs1/foo____ > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > __ __ > > > > This also will not preserve filesets and quotas, though. You should > > be able to automate that with a little bit of awk, perl, or > whatnot.____ > > > > __ __ > > > > __ __ > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > ] *On Behalf Of > > *Damir Krstic > > *Sent:* Friday, January 29, 2016 2:32 PM > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1)____ > > > > __ __ > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > of storage. We are in planning stages of implementation. We would > > like to migrate date from our existing GPFS installation (around > > 300TB) to new solution. ____ > > > > __ __ > > > > We were planning of adding ESS to our existing GPFS cluster and > > adding its disks and then deleting our old disks and having the data > > migrated this way. However, our existing block size on our projects > > filesystem is 1M and in order to extract as much performance out of > > ESS we would like its filesystem created with larger block size. > > Besides rsync do you have any suggestions of how to do this without > > downtime and in fastest way possible? ____ > > > > __ __ > > > > I have looked at AFM but it does not seem to migrate quotas and > > filesets so that may not be an optimal solution. ____ > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > -- > Dr Orlando Richards > Research Services Manager > Information Services > IT Infrastructure Division > Tel: 0131 650 4994 > skype: orlando.richards > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From S.J.Thompson at bham.ac.uk Fri Feb 5 11:25:38 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 5 Feb 2016 11:25:38 +0000 Subject: [gpfsug-discuss] BM Spectrum Scale transparent cloud tiering In-Reply-To: <201601291718.u0THIPLr009799@d01av03.pok.ibm.com> References: <8505A552-5410-4F70-AA77-3DE5EF54BE09@nuance.com> <201601291718.u0THIPLr009799@d01av03.pok.ibm.com> Message-ID: Just to note if anyone is interested, the open beta is now "open" for the transparent cloud tiering, see: http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW Simon From: > on behalf of Marc A Kaplan > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Friday, 29 January 2016 at 17:18 To: "gpfsug-discuss at spectrumscale.org" > Subject: Re: [gpfsug-discuss] BM Spectrum Scale transparent cloud tiering Since this official IBM website (pre)announces transparent cloud tiering ... http://www.ibm.com/developerworks/servicemanagement/tc/gpfs/evaluate.html?ce=sm6024&cmp=IBMSocial&ct=M16402YW&cm=h&IIO=BSYS&csr=blog&cr=casyst&ccy=us&s_tact=M16402YW And since Oesterlin mentioned Cluster Export Service (CES), please allow me to (hopefully!) clarify: Transparent Cloud Tiering uses some new interfaces and functions within Spectrum Scale, it is not "just a rehash" of the long existing DMAPI HSM support. Transparent Cloud Tiering allows one to dynamically migrate Spectrum Scale files to and from foreign file and/or object stores. on the other hand ... Cluster Export Service, allows one to access Spectrum Scale files with foreign protocols, such as NFS, SMB, and Object(OpenStack) I suppose one could deploy both, using Spectrum Scale with Cluster Export Service for local, fast, immediate access to "hot" file and objects and some foreign object service, such as Amazon S3 or Cleversafe for long term "cold" storage. Oh, and just to add to the mix, in case you haven't heard yet, Cleversafe is a fairly recent IBM acquisition, http://www-03.ibm.com/press/us/en/pressrelease/47776.wss -------------- next part -------------- An HTML attachment was scrubbed... URL: From S.J.Thompson at bham.ac.uk Mon Feb 8 10:07:29 2016 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Mon, 8 Feb 2016 10:07:29 +0000 Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 In-Reply-To: <20160119171452.C7F963C1EAC@gpfsug.org> References: <20160119171452.C7F963C1EAC@gpfsug.org> Message-ID: Hi All, Just to note that we are NOW FULL for the next meet the devs in Feb. Simon From: > on behalf of Simon Thompson > Reply-To: "gpfsug-discuss at spectrumscale.org" > Date: Tuesday, 19 January 2016 at 17:13 To: "gpfsug-discuss at spectrumscale.org" > Subject: [gpfsug-discuss] Next meet the devs - 24th Feb 2016 Dear All, We are planning the next 'Meet the Devs' event for Wednesday 24th February 2016, 11am-3:30pm. The event will be held in central Oxford. The agenda promises to be hands on and give you the opportunity to speak face to face with the developers of Spectrum Scale. Guideline agenda: * TBC - please provide input on what you'd like to see! Lunch and refreshments will be provided. Please can you let me know by email if you are interested in attending by Wednesday 17th February. Thanks and we hope to see you there. Thanks to Andy at OERC for offering to host. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Tue Feb 9 14:42:07 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 9 Feb 2016 14:42:07 +0000 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config Message-ID: Any ideas on how to get out of this? [root at gpfs01 ~]# mmlsnodeclass onegig Node Class Name Members --------------------- ----------------------------------------------------------- one gig [root at gpfs01 ~]# mmchconfig maxMBpS=DEFAULT -N onegig mmchconfig: No nodes were found that matched the input specification. mmchconfig: Command failed. Examine previous error messages to determine cause. [root at gpfs01 ~]# mmdelnodeclass onegig mmdelnodeclass: Node class "onegig" still appears in GPFS configuration node override section maxMBpS 120 [onegig] mmdelnodeclass: Command failed. Examine previous error messages to determine cause. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From makaplan at us.ibm.com Tue Feb 9 15:04:38 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 9 Feb 2016 10:04:38 -0500 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: References: Message-ID: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> Yeah. Try first changing the configuration so it does not depend on onegig. Then secondly you may want to delete the nodeclass. Any ideas on how to get out of this? [root at gpfs01 ~]# mmlsnodeclass onegig Node Class Name Members --------------------- ----------------------------------------------------------- one gig [root at gpfs01 ~]# mmchconfig maxMBpS=DEFAULT -N onegig mmchconfig: No nodes were found that matched the input specification. mmchconfig: Command failed. Examine previous error messages to determine cause. [root at gpfs01 ~]# mmdelnodeclass onegig mmdelnodeclass: Node class "onegig" still appears in GPFS configuration node override section maxMBpS 120 [onegig] mmdelnodeclass: Command failed. Examine previous error messages to determine cause. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Tue Feb 9 15:07:30 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 9 Feb 2016 15:07:30 +0000 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> References: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> Message-ID: <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> Well, that would have been my guess as well. But I need to associate that value with ?something?? I?ve been trying a sequence of commands, no joy. Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid From: > on behalf of Marc A Kaplan > Reply-To: gpfsug main discussion list > Date: Tuesday, February 9, 2016 at 9:04 AM To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Removing empty "nodeclass" from config Yeah. Try first changing the configuration so it does not depend on onegig. Then secondly you may want to delete the nodeclass. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From makaplan at us.ibm.com Tue Feb 9 15:34:17 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Tue, 9 Feb 2016 10:34:17 -0500 Subject: [gpfsug-discuss] Removing empty "nodeclass" from config In-Reply-To: <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> References: <201602091504.u19F4iCX026511@d01av04.pok.ibm.com> <9EA36B16-AF4D-45AC-86D8-B996059A8D61@nuance.com> Message-ID: <201602091534.u19FYPCE020191@d01av02.pok.ibm.com> AH... I see, instead of `maxMBpS=default -N all` try a specific number. And then revert to "default" with a second command. Seems there are some bugs or peculiarities in this code. # mmchconfig maxMBpS=99999 -N all # mmchconfig maxMBpS=default -N all I tried some other stuff. If you're curious play around and do mmlsconfig after each mmchconfig and see how the settings "evolve"!! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 21994 bytes Desc: not available URL: From pinto at scinet.utoronto.ca Wed Feb 10 19:26:56 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Wed, 10 Feb 2016 14:26:56 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local node identity. Message-ID: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Dear group I'm trying to deal with this in the most elegant way possible: Once upon the time there were nodeA and nodeB in the cluster, on a 'onDemand manual HA' fashion. * nodeA died, so I migrated the whole OS/software/application stack from backup over to 'nodeB', IP/hostname, etc, hence 'old nodeB' effectively became the new nodeA. * Getting the new nodeA to rejoin the cluster was already a pain, but through a mmdelnode and mmaddnode operation we eventually got it to mount gpfs. Well ... * Old nodeA is now fixed and back on the network, and I'd like to re-purpose it as the new standby nodeB (IP and hostname already applied). As the subject say, I'm now facing node identity issues. From the FSmgr I already tried to del/add nodeB, even nodeA, etc, however GPFS seems to keep some information cached somewhere in the cluster. * At this point I even turned old nodeA into a nodeC with a different IP, etc, but that doesn't help either. I can't even start gpfs on nodeC. Question: what is the appropriate process to clean this mess from the GPFS perspective? I can't touch the new nodeA. It's highly committed in production already. Thanks Jaime ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From pinto at scinet.utoronto.ca Wed Feb 10 20:24:21 2016 From: pinto at scinet.utoronto.ca (Jaime Pinto) Date: Wed, 10 Feb 2016 15:24:21 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local node identity. In-Reply-To: References: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Message-ID: <20160210152421.63075r24zqb156d1@support.scinet.utoronto.ca> Quoting "Buterbaugh, Kevin L" : > Hi Jaime, > > Have you tried wiping out /var/mmfs/gen/* and /var/mmfs/etc/* on the > old nodeA? > > Kevin That did the trick. Thanks Kevin and all that responded privately. 
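For the archives, the rough sequence was along these lines (the node name is just a placeholder, and anything kept under /var/mmfs/etc, e.g. local callbacks or user exits, should be saved somewhere first):

# on the repaired node, with mmfsd not running, purge the stale GPFS identity
rm -rf /var/mmfs/gen/* /var/mmfs/etc/*

# then, from a node that is already active in the cluster
mmdelnode -N nodeB     # only needed if the node still shows up in mmlscluster
mmaddnode -N nodeB
mmchlicense client --accept -N nodeB
mmstartup -N nodeB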
Jaime > >> On Feb 10, 2016, at 1:26 PM, Jaime Pinto wrote: >> >> Dear group >> >> I'm trying to deal with this in the most elegant way possible: >> >> Once upon the time there were nodeA and nodeB in the cluster, on a >> 'onDemand manual HA' fashion. >> >> * nodeA died, so I migrated the whole OS/software/application stack >> from backup over to 'nodeB', IP/hostname, etc, hence 'old nodeB' >> effectively became the new nodeA. >> >> * Getting the new nodeA to rejoin the cluster was already a pain, >> but through a mmdelnode and mmaddnode operation we eventually got >> it to mount gpfs. >> >> Well ... >> >> * Old nodeA is now fixed and back on the network, and I'd like to >> re-purpose it as the new standby nodeB (IP and hostname already >> applied). As the subject say, I'm now facing node identity issues. >> From the FSmgr I already tried to del/add nodeB, even nodeA, etc, >> however GPFS seems to keep some information cached somewhere in the >> cluster. >> >> * At this point I even turned old nodeA into a nodeC with a >> different IP, etc, but that doesn't help either. I can't even start >> gpfs on nodeC. >> >> Question: what is the appropriate process to clean this mess from >> the GPFS perspective? >> >> I can't touch the new nodeA. It's highly committed in production already. >> >> Thanks >> Jaime >> >> >> >> >> >> >> ************************************ >> --- >> Jaime Pinto >> SciNet HPC Consortium - Compute/Calcul Canada >> www.scinet.utoronto.ca - www.computecanada.org >> University of Toronto >> 256 McCaul Street, Room 235 >> Toronto, ON, M5T1W5 >> P: 416-978-2755 >> C: 416-505-1477 >> >> ---------------------------------------------------------------- >> This message was sent using IMP at SciNet Consortium, University of Toronto. >> >> >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > ************************************ TELL US ABOUT YOUR SUCCESS STORIES http://www.scinethpc.ca/testimonials ************************************ --- Jaime Pinto SciNet HPC Consortium - Compute/Calcul Canada www.scinet.utoronto.ca - www.computecanada.org University of Toronto 256 McCaul Street, Room 235 Toronto, ON, M5T1W5 P: 416-978-2755 C: 416-505-1477 ---------------------------------------------------------------- This message was sent using IMP at SciNet Consortium, University of Toronto. From makaplan at us.ibm.com Wed Feb 10 20:34:58 2016 From: makaplan at us.ibm.com (Marc A Kaplan) Date: Wed, 10 Feb 2016 15:34:58 -0500 Subject: [gpfsug-discuss] mmlsnode: Unable to determine the local nodeidentity. In-Reply-To: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> References: <20160210142656.538382t54cbn61a8@support.scinet.utoronto.ca> Message-ID: <201602102035.u1AKZ4v9030063@d01av01.pok.ibm.com> For starters, show us the output of mmlscluster mmgetstate -a cat /var/mmfs/gen/mmsdrfs Depending on how those look, this might be simple or not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Thu Feb 11 14:42:40 2016 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Thu, 11 Feb 2016 14:42:40 +0000 Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? Message-ID: <3FA3ABD2-0B93-4A26-A841-84AE4A8505CA@nuance.com> I?ll be at IBM Interconnect the week of 2/21. Anyone else going? Is there interest in a meet-up or getting together informally? 
If anyone is interested, drop me a note and I?ll try and pull something together - robert.oesterlin at nuance.com Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid -------------- next part -------------- An HTML attachment was scrubbed... URL: From TROPPENS at de.ibm.com Fri Feb 12 14:53:22 2016 From: TROPPENS at de.ibm.com (Ulf Troppens) Date: Fri, 12 Feb 2016 15:53:22 +0100 Subject: [gpfsug-discuss] Upcoming Spectrum Scale education events and user group meetings in Europe Message-ID: <201602121453.u1CErUAS012453@d06av07.portsmouth.uk.ibm.com> Here is an overview of upcoming Spectrum Scale education events and user group meetings in Europe. I plan to be at most of the events. Looking forward to meet you there! https://ibm.biz/BdHtBN -- IBM Spectrum Scale Development - Client Engagements & Solutions Delivery Consulting IT Specialist Author "Storage Networks Explained" IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen / Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: From service at metamodul.com Sun Feb 14 13:59:36 2016 From: service at metamodul.com (MetaService) Date: Sun, 14 Feb 2016 14:59:36 +0100 Subject: [gpfsug-discuss] Migration from SONAS to Spectrum Scale - Limit of 200 TB for ACE migrations Message-ID: <1455458376.4507.92.camel@pluto> Hi, The Playbook: SONAS / Unified Migration to IBM Spectrum Scale - https://www.ibm.com/developerworks/community/wikis/form/anonymous/api/wiki/fa32927c-e904-49cc-a4cc-870bcc8e307c/page/2ff0c6d7-a854-4d64-a98c-0dbfc611ffc6/attachment/a57f1d1e-c68e-44b0-bcde-20ce6b0aebd6/media/Migration_Playbook_PoC_SonasToSpectrumScale.pdf - mentioned that only ACE migration for SONAS FS up to 200TB are supported/recommended. Is this a limitation for the whole SONAS FS or for each fileset ? tia Hajo -- MetaModul GmbH Suederstr. 12 DE-25336 Elmshorn Mobil: +49 177 4393994 Geschaeftsfuehrer: Hans-Joachim Ehlers From douglasof at us.ibm.com Mon Feb 15 15:26:08 2016 From: douglasof at us.ibm.com (Douglas O'flaherty) Date: Mon, 15 Feb 2016 10:26:08 -0500 Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? In-Reply-To: References: Message-ID: <201602151530.u1FFU4IG026030@d01av03.pok.ibm.com> Greetings: I like Bob's suggestion of an informal meet-up next week. How does Spectrum Scale beers sound? Tuesday right near the Expo should work. We'll scope out a place this week. We will have several places Scale is covered, including some references in different keynotes. There will be a demonstration of transparent cloud tiering - the Open Beta currently running - at the Interconnect Expo. There is summary of the several events in EU coming up. I'm looking for topics you want covered at the ISC User Group meeting. https://www.ibm.com/developerworks/community/blogs/storageneers/entry/Upcoming_Spectrum_Scale_education_events_and_user_group_meetings_in_Europe?lang=en_us The next US user group is still to be scheduled, so send in your ideas. doug ----- Message from "Oesterlin, Robert" on Thu, 11 Feb 2016 14:42:40 +0000 ----- To: gpfsug main discussion list Subject: [gpfsug-discuss] IBM Interconnect - Any interest in an informal Spectrum Scale UG meetup? I?ll be at IBM Interconnect the week of 2/21. Anyone else going? Is there interest in a meet-up or getting together informally? 
If anyone is interested, drop me a note and I?ll try and pull something together - robert.oesterlin at nuance.com Bob Oesterlin Sr Storage Engineer, Nuance HPC Grid _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From damir.krstic at gmail.com Wed Feb 17 21:07:33 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Wed, 17 Feb 2016 21:07:33 +0000 Subject: [gpfsug-discuss] question about remote cluster mounting Message-ID: In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Wed Feb 17 21:40:05 2016 From: bbanister at jumptrading.com (Bryan Banister) Date: Wed, 17 Feb 2016 21:40:05 +0000 Subject: [gpfsug-discuss] question about remote cluster mounting In-Reply-To: References: Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05FB36F6@CHI-EXCHANGEW1.w2k.jumptrading.com> Yes, you may (and should) reuse the auth key from the compute cluster, -Bryan From: gpfsug-discuss-bounces at spectrumscale.org [mailto:gpfsug-discuss-bounces at spectrumscale.org] On Behalf Of Damir Krstic Sent: Wednesday, February 17, 2016 3:08 PM To: gpfsug main discussion list Subject: [gpfsug-discuss] question about remote cluster mounting In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From volobuev at us.ibm.com Wed Feb 17 22:54:36 2016 From: volobuev at us.ibm.com (Yuri L Volobuev) Date: Wed, 17 Feb 2016 14:54:36 -0800 Subject: [gpfsug-discuss] question about remote cluster mounting In-Reply-To: References: Message-ID: <201602172255.u1HMtIDp000702@d03av05.boulder.ibm.com> The authentication scheme used for GPFS multi-clustering is similar to what other frameworks (e.g. ssh) do for private/public auth: each cluster has a private key and a public key. The key pair only needs to be generated once (unless you want to periodically regenerate it for higher security; this is different from enabling authentication for the very first time and can be done without downtime). The public key can then be exchanged with multiple remote clusters. yuri From: Damir Krstic To: gpfsug main discussion list , Date: 02/17/2016 01:08 PM Subject: [gpfsug-discuss] question about remote cluster mounting Sent by: gpfsug-discuss-bounces at spectrumscale.org In our current environment we have a storage gpfs cluster and a compute gpfs cluster. We use gpfs remote cluster mounting mechanism to mount storage cluster on compute cluster. So far so good. We are about to introduce 3rd storage cluster in our environment and question I have is about gpfs authorization keys. More specifically, when we initially did remote cluster mounting, we had to run mmauth command on both the storage cluster and the compute cluster and then share the keys between the clusters. With the third storage cluster, can we re-use authorization key from compute cluster and share it with the new storage cluster? The reason for this question is I am hoping to minimize downtime on our compute cluster and I remember having to shut gpfs down when issuing mmauth command so I am hoping I can re-use the compute cluster key without shutting gpfs down. Thanks, Damir _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From damir.krstic at gmail.com Mon Feb 22 13:12:14 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 22 Feb 2016 13:12:14 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance (GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: Sorry to revisit this question - AFM seems to be the best way to do this. I was wondering if anyone has done AFM migration. 
I am looking at this wiki page for instructions: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating%20Data%20Using%20AFM and I am little confused by step 3 "cut over users" <-- does this mean, unmount existing filesystem and point users to new filesystem? The reason we were looking at AFM is to not have downtime - make the transition as seamless as possible to the end user. Not sure what, then, AFM buys us if we still have to take "downtime" in order to cut users over to the new system. Thanks, Damir On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic wrote: > Thanks all for great suggestions. We will most likely end up using either > AFM or some mechanism of file copy (tar/rsync etc.). > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > >> Along the same vein I've patched rsync to maintain source atimes in Linux >> for large transitions such as this. Along with the stadnard "patches" mod >> for destination atimes it is quite useful. Works in 3.0.8 and 3.0.9. >> I've not yet ported it to 3.1.x >> https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff >> >> Ed Wahl >> OSC >> >> ________________________________________ >> From: gpfsug-discuss-bounces at spectrumscale.org [ >> gpfsug-discuss-bounces at spectrumscale.org] on behalf of Orlando Richards [ >> orlando.richards at ed.ac.uk] >> Sent: Monday, February 01, 2016 4:25 AM >> To: gpfsug-discuss at spectrumscale.org >> Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS >> appliance (GPFS4.1) >> >> For what it's worth - there's a patch for rsync which IBM provided a >> while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up >> on the gpfsug github here: >> >> https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync >> >> >> >> On 29/01/16 22:36, Sven Oehme wrote: >> > Doug, >> > >> > This won't really work if you make use of ACL's or use special GPFS >> > extended attributes or set quotas, filesets, etc >> > so unfortunate the answer is you need to use a combination of things and >> > there is work going on to make some of this simpler (e.g. for ACL's) , >> > but its a longer road to get there. so until then you need to think >> > about multiple aspects . >> > >> > 1. you need to get the data across and there are various ways to do >> this. >> > >> > a) AFM is the simplest of all as it not just takes care of ACL's and >> > extended attributes and alike as it understands the GPFS internals it >> > also is operating in parallel can prefetch data, etc so its a efficient >> > way to do this but as already pointed out doesn't transfer quota or >> > fileset informations. >> > >> > b) you can either use rsync or any other pipe based copy program. the >> > downside is that they are typical single threaded and do a file by file >> > approach, means very metadata intensive on the source as well as target >> > side and cause a lot of ios on both side. >> > >> > c) you can use the policy engine to create a list of files to transfer >> > to at least address the single threaded scan part, then partition the >> > data and run multiple instances of cp or rsync in parallel, still >> > doesn't fix the ACL / EA issues, but the data gets there faster. >> > >> > 2. you need to get ACL/EA informations over too. 
there are several >> > command line options to dump the data and restore it, they kind of >> > suffer the same problem as data transfers , which is why using AFM is >> > the best way of doing this if you rely on ACL/EA informations. >> > >> > 3. transfer quota / fileset infos. there are several ways to do this, >> > but all require some level of scripting to do this. >> > >> > if you have TSM/HSM you could also transfer the data using SOBAR it's >> > described in the advanced admin book. >> > >> > sven >> > >> > >> > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug >> > > > > wrote: >> > >> > I have found that a tar pipe is much faster than rsync for this sort >> > of thing. The fastest of these is ?star? (schily tar). On average it >> > is about 2x-5x faster than rsync for doing this. After one pass with >> > this, you can use rsync for a subsequent or last pass synch.____ >> > >> > __ __ >> > >> > e.g.____ >> > >> > $ cd /export/gpfs1/foo____ >> > >> > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ >> > >> > __ __ >> > >> > This also will not preserve filesets and quotas, though. You should >> > be able to automate that with a little bit of awk, perl, or >> whatnot.____ >> > >> > __ __ >> > >> > __ __ >> > >> > *From:*gpfsug-discuss-bounces at spectrumscale.org >> > >> > [mailto:gpfsug-discuss-bounces at spectrumscale.org >> > ] *On Behalf Of >> > *Damir Krstic >> > *Sent:* Friday, January 29, 2016 2:32 PM >> > *To:* gpfsug main discussion list >> > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS >> > appliance (GPFS4.1)____ >> > >> > __ __ >> > >> > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT >> > of storage. We are in planning stages of implementation. We would >> > like to migrate date from our existing GPFS installation (around >> > 300TB) to new solution. ____ >> > >> > __ __ >> > >> > We were planning of adding ESS to our existing GPFS cluster and >> > adding its disks and then deleting our old disks and having the data >> > migrated this way. However, our existing block size on our projects >> > filesystem is 1M and in order to extract as much performance out of >> > ESS we would like its filesystem created with larger block size. >> > Besides rsync do you have any suggestions of how to do this without >> > downtime and in fastest way possible? ____ >> > >> > __ __ >> > >> > I have looked at AFM but it does not seem to migrate quotas and >> > filesets so that may not be an optimal solution. ____ >> > >> > >> > _______________________________________________ >> > gpfsug-discuss mailing list >> > gpfsug-discuss at spectrumscale.org >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > >> > >> > >> > >> > _______________________________________________ >> > gpfsug-discuss mailing list >> > gpfsug-discuss at spectrumscale.org >> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > >> >> -- >> -- >> Dr Orlando Richards >> Research Services Manager >> Information Services >> IT Infrastructure Division >> Tel: 0131 650 4994 >> skype: orlando.richards >> >> The University of Edinburgh is a charitable body, registered in >> Scotland, with registration number SC005336. 
>> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at spectrumscale.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From YARD at il.ibm.com Mon Feb 22 13:39:16 2016 From: YARD at il.ibm.com (Yaron Daniel) Date: Mon, 22 Feb 2016 15:39:16 +0200 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1) In-Reply-To: References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com><56AF2498.8010503@ed.ac.uk><9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> Message-ID: <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> Hi AFM - Active File Management (AFM) is an asynchronous cross cluster utility It means u create new GPFS cluster - migrate the data without downtime , and when u r ready - u do last sync and cut-over. Hope this help. Regards Yaron Daniel 94 Em Ha'Moshavot Rd Server, Storage and Data Services - Team Leader Petach Tiqva, 49527 Global Technology Services Israel Phone: +972-3-916-5672 Fax: +972-3-916-5672 Mobile: +972-52-8395593 e-mail: yard at il.ibm.com IBM Israel gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM: > From: Damir Krstic > To: gpfsug main discussion list > Date: 02/22/2016 03:12 PM > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1) > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > Sorry to revisit this question - AFM seems to be the best way to do > this. I was wondering if anyone has done AFM migration. I am looking > at this wiki page for instructions: > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/ > wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating% > 20Data%20Using%20AFM > and I am little confused by step 3 "cut over users" <-- does this > mean, unmount existing filesystem and point users to new filesystem? > > The reason we were looking at AFM is to not have downtime - make the > transition as seamless as possible to the end user. Not sure what, > then, AFM buys us if we still have to take "downtime" in order to > cut users over to the new system. > > Thanks, > Damir > > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic wrote: > Thanks all for great suggestions. We will most likely end up using > either AFM or some mechanism of file copy (tar/rsync etc.). > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > Along the same vein I've patched rsync to maintain source atimes in > Linux for large transitions such as this. Along with the stadnard > "patches" mod for destination atimes it is quite useful. Works in > 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > Ed Wahl > OSC > > ________________________________________ > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss- > bounces at spectrumscale.org] on behalf of Orlando Richards [ > orlando.richards at ed.ac.uk] > Sent: Monday, February 01, 2016 4:25 AM > To: gpfsug-discuss at spectrumscale.org > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > appliance (GPFS4.1) > > For what it's worth - there's a patch for rsync which IBM provided a > while back that will copy NFSv4 ACLs (maybe other stuff?). 
I put it up > on the gpfsug github here: > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > On 29/01/16 22:36, Sven Oehme wrote: > > Doug, > > > > This won't really work if you make use of ACL's or use special GPFS > > extended attributes or set quotas, filesets, etc > > so unfortunate the answer is you need to use a combination of things and > > there is work going on to make some of this simpler (e.g. for ACL's) , > > but its a longer road to get there. so until then you need to think > > about multiple aspects . > > > > 1. you need to get the data across and there are various ways to do this. > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > extended attributes and alike as it understands the GPFS internals it > > also is operating in parallel can prefetch data, etc so its a efficient > > way to do this but as already pointed out doesn't transfer quota or > > fileset informations. > > > > b) you can either use rsync or any other pipe based copy program. the > > downside is that they are typical single threaded and do a file by file > > approach, means very metadata intensive on the source as well as target > > side and cause a lot of ios on both side. > > > > c) you can use the policy engine to create a list of files to transfer > > to at least address the single threaded scan part, then partition the > > data and run multiple instances of cp or rsync in parallel, still > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > 2. you need to get ACL/EA informations over too. there are several > > command line options to dump the data and restore it, they kind of > > suffer the same problem as data transfers , which is why using AFM is > > the best way of doing this if you rely on ACL/EA informations. > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > but all require some level of scripting to do this. > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > described in the advanced admin book. > > > > sven > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > wrote: > > > > I have found that a tar pipe is much faster than rsync for this sort > > of thing. The fastest of these is ?star? (schily tar). On average it > > is about 2x-5x faster than rsync for doing this. After one pass with > > this, you can use rsync for a subsequent or last pass synch.____ > > > > __ __ > > > > e.g.____ > > > > $ cd /export/gpfs1/foo____ > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > __ __ > > > > This also will not preserve filesets and quotas, though. You should > > be able to automate that with a little bit of awk, perl, or whatnot.____ > > > > __ __ > > > > __ __ > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > ] *On Behalf Of > > *Damir Krstic > > *Sent:* Friday, January 29, 2016 2:32 PM > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1)____ > > > > __ __ > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > of storage. We are in planning stages of implementation. We would > > like to migrate date from our existing GPFS installation (around > > 300TB) to new solution. ____ > > > > __ __ > > > > We were planning of adding ESS to our existing GPFS cluster and > > adding its disks and then deleting our old disks and having the data > > migrated this way. 
However, our existing block size on our projects > > filesystem is 1M and in order to extract as much performance out of > > ESS we would like its filesystem created with larger block size. > > Besides rsync do you have any suggestions of how to do this without > > downtime and in fastest way possible? ____ > > > > __ __ > > > > I have looked at AFM but it does not seem to migrate quotas and > > filesets so that may not be an optimal solution. ____ > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > -- > Dr Orlando Richards > Research Services Manager > Information Services > IT Infrastructure Division > Tel: 0131 650 4994 > skype: orlando.richards > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From damir.krstic at gmail.com Mon Feb 22 16:11:31 2016 From: damir.krstic at gmail.com (Damir Krstic) Date: Mon, 22 Feb 2016 16:11:31 +0000 Subject: [gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1) In-Reply-To: <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> References: <43610bd02e384543a68242a9597224bd@mbxtoa1.winmail.deshaw.com> <56AF2498.8010503@ed.ac.uk> <9DA9EC7A281AC7428A9618AFDC49049955BAF463@CIO-TNC-D1MBX10.osuad.osu.edu> <201602221339.u1MDdVfH012286@d06av07.portsmouth.uk.ibm.com> Message-ID: Thanks for the reply - but that explanation does not mean no downtime without elaborating on "cut over." I can do the sync via rsync or tar today but eventually I will have to cut over to the new system. Is this the case with AFM as well - once everything is synced over - cutting over means users will have to "cut over" by: 1. either mounting new AFM-synced system on all compute nodes with same mount as the old system (which means downtime to unmount the existing filesystem and mounting new filesystem) or 2. end-user training i.e. starting using new filesystem, move your own files you need because eventually we will shutdown the old filesystem. If, then, it's true that AFM requires some sort of cut over (either by disconnecting the old system and mounting new system as the old mount point, or by instruction to users to start using new filesystem at once) I am not sure that AFM gets me anything more than rsync or tar when it comes to taking a downtime (cutting over) for the end user. 
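Just so we are comparing like with like, my reading of that wiki page is that the AFM route would look roughly as follows (file system, fileset and target names are placeholders, and the exact option names would need checking against the AFM documentation):

# on the new cluster, with the old file system remote-mounted at /gpfs/old
mmcrfileset ess1 projects -p afmmode=lu,afmtarget=gpfs:///gpfs/old/projects --inode-space new
mmlinkfileset ess1 projects -J /gpfs/ess1/projects

# pre-populate the cache in bulk from a policy-generated file list
mmafmctl ess1 prefetch -j projects --list-file /tmp/files.list

# at cut-over: final prefetch, stop writes on the old side, then detach from the old home
mmchfileset ess1 projects -p afmtarget=disable

So even then there is a point where users have to stop writing to the old file system and start using the new path - which is exactly the "cut over" I am asking about.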
Thanks, Damir On Mon, Feb 22, 2016 at 7:39 AM Yaron Daniel wrote: > Hi > > AFM - Active File Management (AFM) is an asynchronous cross cluster > utility > > It means u create new GPFS cluster - migrate the data without downtime , > and when u r ready - u do last sync and cut-over. > > Hope this help. > > > > Regards > > > > ------------------------------ > > > > *Yaron Daniel* 94 Em Ha'Moshavot Rd > *Server, **Storage and Data Services* > *- > Team Leader* Petach Tiqva, 49527 > *Global Technology Services* Israel > Phone: +972-3-916-5672 > Fax: +972-3-916-5672 > Mobile: +972-52-8395593 > e-mail: yard at il.ibm.com > *IBM Israel* > > > > > > gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM: > > > From: Damir Krstic > > To: gpfsug main discussion list > > Date: 02/22/2016 03:12 PM > > > > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1) > > > Sent by: gpfsug-discuss-bounces at spectrumscale.org > > > > > > Sorry to revisit this question - AFM seems to be the best way to do > > this. I was wondering if anyone has done AFM migration. I am looking > > at this wiki page for instructions: > > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/ > > wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating% > > 20Data%20Using%20AFM > > and I am little confused by step 3 "cut over users" <-- does this > > mean, unmount existing filesystem and point users to new filesystem? > > > > The reason we were looking at AFM is to not have downtime - make the > > transition as seamless as possible to the end user. Not sure what, > > then, AFM buys us if we still have to take "downtime" in order to > > cut users over to the new system. > > > > Thanks, > > Damir > > > > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic > wrote: > > Thanks all for great suggestions. We will most likely end up using > > either AFM or some mechanism of file copy (tar/rsync etc.). > > > > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward wrote: > > Along the same vein I've patched rsync to maintain source atimes in > > Linux for large transitions such as this. Along with the stadnard > > "patches" mod for destination atimes it is quite useful. Works in > > 3.0.8 and 3.0.9. I've not yet ported it to 3.1.x > > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff > > > > Ed Wahl > > OSC > > > > ________________________________________ > > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss- > > bounces at spectrumscale.org] on behalf of Orlando Richards [ > > orlando.richards at ed.ac.uk] > > Sent: Monday, February 01, 2016 4:25 AM > > To: gpfsug-discuss at spectrumscale.org > > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > appliance (GPFS4.1) > > > > For what it's worth - there's a patch for rsync which IBM provided a > > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up > > on the gpfsug github here: > > > > https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync > > > > > > > > On 29/01/16 22:36, Sven Oehme wrote: > > > Doug, > > > > > > This won't really work if you make use of ACL's or use special GPFS > > > extended attributes or set quotas, filesets, etc > > > so unfortunate the answer is you need to use a combination of things > and > > > there is work going on to make some of this simpler (e.g. for ACL's) , > > > but its a longer road to get there. so until then you need to think > > > about multiple aspects . > > > > > > 1. 
you need to get the data across and there are various ways to do > this. > > > > > > a) AFM is the simplest of all as it not just takes care of ACL's and > > > extended attributes and alike as it understands the GPFS internals it > > > also is operating in parallel can prefetch data, etc so its a efficient > > > way to do this but as already pointed out doesn't transfer quota or > > > fileset informations. > > > > > > b) you can either use rsync or any other pipe based copy program. the > > > downside is that they are typical single threaded and do a file by file > > > approach, means very metadata intensive on the source as well as target > > > side and cause a lot of ios on both side. > > > > > > c) you can use the policy engine to create a list of files to transfer > > > to at least address the single threaded scan part, then partition the > > > data and run multiple instances of cp or rsync in parallel, still > > > doesn't fix the ACL / EA issues, but the data gets there faster. > > > > > > 2. you need to get ACL/EA informations over too. there are several > > > command line options to dump the data and restore it, they kind of > > > suffer the same problem as data transfers , which is why using AFM is > > > the best way of doing this if you rely on ACL/EA informations. > > > > > > 3. transfer quota / fileset infos. there are several ways to do this, > > > but all require some level of scripting to do this. > > > > > > if you have TSM/HSM you could also transfer the data using SOBAR it's > > > described in the advanced admin book. > > > > > > sven > > > > > > > > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug > > > > > >> wrote: > > > > > > I have found that a tar pipe is much faster than rsync for this > sort > > > of thing. The fastest of these is ?star? (schily tar). On average > it > > > is about 2x-5x faster than rsync for doing this. After one pass > with > > > this, you can use rsync for a subsequent or last pass synch.____ > > > > > > __ __ > > > > > > e.g.____ > > > > > > $ cd /export/gpfs1/foo____ > > > > > > $ star ?c H=xtar | (cd /export/gpfs2/foo; star ?xp)____ > > > > > > __ __ > > > > > > This also will not preserve filesets and quotas, though. You should > > > be able to automate that with a little bit of awk, perl, or > whatnot.____ > > > > > > __ __ > > > > > > __ __ > > > > > > *From:*gpfsug-discuss-bounces at spectrumscale.org > > > > > > > [mailto:gpfsug-discuss-bounces at spectrumscale.org > > > > >] *On Behalf Of > > > *Damir Krstic > > > *Sent:* Friday, January 29, 2016 2:32 PM > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS > > > appliance (GPFS4.1)____ > > > > > > __ __ > > > > > > We have recently purchased ESS appliance from IBM (GL6) with 1.5PT > > > of storage. We are in planning stages of implementation. We would > > > like to migrate date from our existing GPFS installation (around > > > 300TB) to new solution. ____ > > > > > > __ __ > > > > > > We were planning of adding ESS to our existing GPFS cluster and > > > adding its disks and then deleting our old disks and having the > data > > > migrated this way. However, our existing block size on our projects > > > filesystem is 1M and in order to extract as much performance out of > > > ESS we would like its filesystem created with larger block size. > > > Besides rsync do you have any suggestions of how to do this without > > > downtime and in fastest way possible? 
____ > > > > > > __ __ > > > > > > I have looked at AFM but it does not seem to migrate quotas and > > > filesets so that may not be an optimal solution. ____ > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at spectrumscale.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > > -- > > -- > > Dr Orlando Richards > > Research Services Manager > > Information Services > > IT Infrastructure Division > > Tel: 0131 650 4994 > > skype: orlando.richards > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at spectrumscale.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at spectrumscale.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 1851 bytes Desc: not available URL: From Luke.Raimbach at crick.ac.uk Wed Feb 24 14:05:07 2016 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Wed, 24 Feb 2016 14:05:07 +0000 Subject: [gpfsug-discuss] AFM and Placement Policies Message-ID: Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. 
From dhildeb at us.ibm.com Wed Feb 24 19:16:54 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 24 Feb 2016 11:16:54 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: Message-ID: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com> Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? I guess the deeper question is does each file system in this arrangement see the new.file as "new" in both locations? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From dhildeb at us.ibm.com Wed Feb 24 19:16:54 2016 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Wed, 24 Feb 2016 11:16:54 -0800 Subject: [gpfsug-discuss] AFM and Placement Policies In-Reply-To: References: Message-ID: <201602241923.u1OJNxMT006419@d01av04.pok.ibm.com> Hi Luke, The short answer is yes, when the file is created on the home, it is a 'brand new' creation that will conform to any and all new placement policies that you set on the home site. So if you are using NFS in the relationship, then it is simply created just like any other file is created over NFS. The same goes when using GPFS to the home cluster... Dean IBM Almaden Research Center From: Luke Raimbach To: gpfsug main discussion list Date: 02/24/2016 06:05 AM Subject: [gpfsug-discuss] AFM and Placement Policies Sent by: gpfsug-discuss-bounces at spectrumscale.org Hi All, I have two GPFS file systems (A and B) and an AFM Single Writer relationship: /fsB/cache writes back to /fsA/home I have a placement policy which sets extended attributes on file creation in /fsB/cache. When I create a new file in /fsB/cache/new.file and it is pushed back by AFM to /fsA/home/new.file, can the home fileset apply a different placement policy to add or modify extended attributes? 
From Gethyn.Longworth at Rolls-Royce.com  Thu Feb 25 10:42:39 2016
From: Gethyn.Longworth at Rolls-Royce.com (Longworth, Gethyn)
Date: Thu, 25 Feb 2016 10:42:39 +0000
Subject: [gpfsug-discuss] Integration with Active Directory
Message-ID:

Hi all,

I'm new to both GPFS and to this mailing list, so I thought I'd introduce myself and one of the issues I am having. I am a consultant to Rolls-Royce Aerospace currently working on a large facilities project; part of my remit is to deliver a data system. We selected GPFS (sorry, Spectrum Scale) for this: three clusters, with two of the clusters using storage provided by Spectrum Accelerate, and the other using a pair of IBM SANs with a tape library backup.

My current issue is to do with integration into Active Directory. I've configured my three-node test cluster, with two protocol nodes and a quorum node (version 4.2.0.1 on RHEL 7.1), as the master for an automated ID mapping system (we can't use RFC2307, as our IT department don't understand what this is), but the problem I'm having is to do with domain joins.

The documentation suggests that using the CES cluster hostname to register in the domain will allow all nodes in the cluster to share the identity mapping, but only one of my protocol nodes will authenticate. I can run "id" on that node with a domain account and it provides the correct answer, whereas the other will not and denies any knowledge of the domain or user. From a GPFS point of view, this results in a degraded CES, SMB, NFS and AUTH state. My small amount of AD knowledge says that this is expected: a single entry (e.g. the cluster name) can only have one SID.

So I guess that my question is, what have I missed? Is there something in AD that I need to configure to make this work? Does one of the nodes in the cluster end up as the master and the other a subordinate? How do I configure that within the confines of mmuserauth?

As I said I am a bit new to this, and am essentially learning on the fly, so any pointers that you can provide would be appreciated!

Cheers,
Gethyn Longworth MEng CEng MIET | Consultant Systems Engineer | AEROSPACE
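For anyone hitting the same problem, the AD join Gethyn describes is driven through mmuserauth from a single CES node and applies cluster-wide. A hedged sketch follows; the server, admin user, netbios name and ID-map ranges are placeholders, and the exact option names should be confirmed against the mmuserauth man page for the release in use.

# Configure file-protocol authentication against AD with automatic ID mapping.
# All values below are placeholders for illustration.
mmuserauth service create --type ad --data-access-method file \
    --servers ad1.example.com \
    --user-name administrator \
    --netbios-name cescluster \
    --idmap-role master \
    --idmap-range 10000000-299999999 \
    --idmap-range-size 1000000

# Confirm the configuration and that every protocol node can reach the domain
mmuserauth service list
mmuserauth service check --server-reachability

If the join succeeds, wbinfo and id should resolve domain users on every CES node, not just one.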
From S.J.Thompson at bham.ac.uk  Thu Feb 25 13:19:12 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Thu, 25 Feb 2016 13:19:12 +0000
Subject: [gpfsug-discuss] Integration with Active Directory
Message-ID:

Hi Gethyn,

From what I recall, CTDB is used underneath to share the secret, and only the primary named machine is joined, but CTDB and CES should work this backend part out for you.

I do have a question though: do you want to have consistent UIDs across other systems? For example, if you plan to use NFS to other *nix systems, then you probably want to think about LDAP mapping and using custom auth (we do this as our AD doesn't contain UIDs either).

Simon
From poppe at us.ibm.com  Thu Feb 25 17:01:00 2016
From: poppe at us.ibm.com (Monty Poppe)
Date: Thu, 25 Feb 2016 11:01:00 -0600
Subject: [gpfsug-discuss] Integration with Active Directory
In-Reply-To:
References:
Message-ID: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>

All CES nodes should operate consistently across the cluster. Here are a few tips on debugging:

- /usr/lpp/mmfs/bin/wbinfo -p  to ensure winbind is running properly
- /usr/lpp/mmfs/bin/wbinfo -P  (capital P) to ensure winbind can communicate with the AD server
- ensure the first nameserver in /etc/resolv.conf points to your AD server (check all nodes)
- mmuserauth service check --server-reachability  for a more thorough validation that all nodes can communicate with the authentication server

If you need to look at the Samba logs (/var/adm/ras/log.smbd & log.wb-) to see what's going on, change the Samba log level by issuing: /usr/lpp/mmfs/bin/net conf setparm global 'log level' 3. Don't forget to set it back to 0 or 1 when you are done!

If you're willing to go with a later release, AD authentication with LDAP ID mapping has been added as a feature in the 4.2 release.
(https://www-01.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_adwithldap.htm?lang=en)

Monty Poppe
Spectrum Scale Test
poppe at us.ibm.com
512-286-8047 T/L 363-8047
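Collected into one snippet that can be run on each protocol node, the checks above look roughly like the following (the log-level value is only an example; remember to restore it afterwards):

# Per-node health checks for CES SMB/AD integration, based on the steps above.
/usr/lpp/mmfs/bin/wbinfo -p                       # is winbind itself responding?
/usr/lpp/mmfs/bin/wbinfo -P                       # can winbind reach the AD server?
grep '^nameserver' /etc/resolv.conf               # first entry should be the AD DNS server
mmuserauth service check --server-reachability    # can all nodes reach the auth server?

# Raise the Samba log level while reproducing the problem, then put it back.
/usr/lpp/mmfs/bin/net conf setparm global 'log level' 3
#   ... reproduce the failure, inspect /var/adm/ras/log.smbd and log.wb-* ...
/usr/lpp/mmfs/bin/net conf setparm global 'log level' 1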
From christof.schmitt at us.ibm.com  Thu Feb 25 17:46:02 2016
From: christof.schmitt at us.ibm.com (Christof Schmitt)
Date: Thu, 25 Feb 2016 17:46:02 +0000
Subject: [gpfsug-discuss] Integration with Active Directory
In-Reply-To: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>
References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>
Message-ID: <201602251746.u1PHk8Uw012701@d01av03.pok.ibm.com>

An HTML attachment was scrubbed...

From Gethyn.Longworth at Rolls-Royce.com  Fri Feb 26 09:04:50 2016
From: Gethyn.Longworth at Rolls-Royce.com (Longworth, Gethyn)
Date: Fri, 26 Feb 2016 09:04:50 +0000
Subject: [gpfsug-discuss] Integration with Active Directory
In-Reply-To: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>
References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>
Message-ID:

Monty, Simon, Christof,

Many thanks for your help. I found that the firewall wasn't configured correctly: I had assumed that the samba "service" enabled the CTDB port (4379, for the next person searching for this) as well. Enabling it manually and restarting the node has resolved it.

I need to investigate the issue of consistent uids / gids between my Linux machines. Obviously this is very easy when you have full control over the AD, but as ours is a local AD (which I can control) with most of the user IDs coming over on a trust, it is much more tricky. Has anyone done an LDAP set up where they are effectively adding extra user info (like uids / gids / samba info) to existing AD users without messing with the original AD?

Thanks,
Gethyn
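On RHEL 7 with firewalld, the fix Gethyn describes comes down to opening the CTDB port on every protocol node; assuming firewalld rather than hand-rolled iptables rules, that is roughly:

# CTDB communicates between CES nodes on TCP port 4379; open it permanently
# on each protocol node and reload the firewall rules.
firewall-cmd --permanent --add-port=4379/tcp
firewall-cmd --reload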
From S.J.Thompson at bham.ac.uk  Fri Feb 26 10:12:21 2016
From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services))
Date: Fri, 26 Feb 2016 10:12:21 +0000
Subject: Re: [gpfsug-discuss] Integration with Active Directory
In-Reply-To:
References: <201602251701.u1PH19Aw014610@d03av02.boulder.ibm.com>
Message-ID:

In theory you can do this with LDS ...

My solution, though, is to run an LDAP server (with replication) across the CTDB server nodes. Each node then points to itself and the other CTDB servers for the SMB config. We populate it with users and groups, with names copied in from AD.

It's a bit of a fudge to make it work, and we found for auxiliary groups that winbind wasn't doing quite what it should, so we have to have the SIDs populated in the local LDAP server config.

Simon

> Has anyone done an LDAP set up where they are effectively adding extra user info (like uids / gids / samba info) to existing AD users without messing with the original AD?
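As an illustration of the kind of record Simon describes, a hedged sketch of a local LDAP entry carrying both the Unix identity and the AD SID for a user might look like the following. The DN, object classes and SID are invented for the example, and the schema actually required depends on how winbind/SMB is configured to look identities up.

# Hypothetical entry for a user whose name was copied in from AD.
# uidNumber/gidNumber supply the Unix identity; sambaSID carries the AD SID
# so that (auxiliary) group membership resolves consistently.
ldapadd -x -D "cn=admin,dc=example,dc=com" -W <<'EOF'
dn: uid=jbloggs,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: sambaSamAccount
cn: Joe Bloggs
sn: Bloggs
uid: jbloggs
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jbloggs
sambaSID: S-1-5-21-1111111111-2222222222-3333333333-1105
EOF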
From Luke.Raimbach at crick.ac.uk  Fri Feb 26 10:52:31 2016
From: Luke.Raimbach at crick.ac.uk (Luke Raimbach)
Date: Fri, 26 Feb 2016 10:52:31 +0000
Subject: Re: [gpfsug-discuss] AFM and Placement Policies
In-Reply-To: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com>
References: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com>
Message-ID:

Hi Dean,

Thanks for this; I had hoped this was the case.

However, what I'm now wondering is: if we operate the cache in independent-writer mode and the new file was pushed back home (conforming to cache, then home, placement policies), and is subsequently evicted from the cache, then if it needs to be pulled back for local operations in the cache, will the cache cluster see this file as "new" for the third time?

Cheers,
Luke.

From dhildeb at us.ibm.com  Fri Feb 26 18:58:47 2016
From: dhildeb at us.ibm.com (Dean Hildebrand)
Date: Fri, 26 Feb 2016 10:58:47 -0800
Subject: Re: [gpfsug-discuss] AFM and Placement Policies
In-Reply-To:
References: <201602241923.u1OJNwuY006395@d01av04.pok.ibm.com>
Message-ID: <201602261907.u1QJ7FZb019973@d03av03.boulder.ibm.com>

Hi Luke,

Cache eviction simply frees up space in the cache, but the inode/file is always the same. It does not delete and recreate the file in the cache. This is why you can continue to view files in the cache namespace even if they are evicted.

Dean Hildebrand
IBM Almaden Research Center
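For context on the eviction step in Luke's scenario, manual eviction on the cache side is normally driven through mmafmctl. A hedged example using the names from this thread follows; the option set is from memory, so verify it against the mmafmctl documentation for the release in use.

# Evict cached data for the 'cache' fileset of the cache file system fsB.
# Only data blocks are freed; the inode stays, which is why evicted files
# remain visible in the cache namespace.
mmafmctl fsB evict -j cache

# Check the AFM fileset state afterwards
mmafmctl fsB getstate -j cache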
From Luke.Raimbach at crick.ac.uk  Mon Feb 29 14:31:57 2016
From: Luke.Raimbach at crick.ac.uk (Luke Raimbach)
Date: Mon, 29 Feb 2016 14:31:57 +0000
Subject: [gpfsug-discuss] AFM and Symbolic Links
Message-ID:

Hi All,

Quick one: Does AFM follow symbolic links present at home in the cache fileset?

Cheers,
Luke.
Luke Raimbach
Senior HPC Data and Storage Systems Engineer,
The Francis Crick Institute,
Gibbs Building, 215 Euston Road, London NW1 2BE.
E: luke.raimbach at crick.ac.uk
W: www.crick.ac.uk

From dhildeb at us.ibm.com  Mon Feb 29 16:59:11 2016
From: dhildeb at us.ibm.com (Dean Hildebrand)
Date: Mon, 29 Feb 2016 08:59:11 -0800
Subject: [gpfsug-discuss] AFM and Symbolic Links
In-Reply-To:
References:
Message-ID: <201602291701.u1TH1owF031283@d03av05.boulder.ibm.com>

Hi Luke,

Quick response.... yes :)

Dean