From pete at realisestudio.com Tue Jul 9 12:58:44 2013 From: pete at realisestudio.com (Pete Smith) Date: Tue, 9 Jul 2013 12:58:44 +0100 Subject: [gpfsug-discuss] software RAID? Message-ID: Hi all Slightly nuts question, I know ... but is anyone using software RAID? Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, obviously. TIA -- Pete Smith DevOp/System Administrator Realise Studio 12/13 Poland Street, London W1F 8QB T. +44 (0)20 7165 9644 realisestudio.com From pete at realisestudio.com Tue Jul 9 13:00:59 2013 From: pete at realisestudio.com (Pete Smith) Date: Tue, 9 Jul 2013 13:00:59 +0100 Subject: [gpfsug-discuss] green drives Message-ID: Even more mental ... anyone using green drives in their lowest HD tier? I've used them in a Nexsan with MAID capability, for nearline, and they were fine for this purpose, but I wouldn't expect them to sit happily in GPFS. Happy to be confirmed wrong in my suspicions. -- Pete Smith DevOp/System Administrator Realise Studio 12/13 Poland Street, London W1F 8QB T. +44 (0)20 7165 9644 realisestudio.com From orlando.richards at ed.ac.uk Tue Jul 9 15:49:05 2013 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Tue, 9 Jul 2013 15:49:05 +0100 (BST) Subject: [gpfsug-discuss] green drives In-Reply-To: References: Message-ID: On Tue, 9 Jul 2013, Pete Smith wrote: > Even more mental ... anyone using green drives in their lowest HD tier? > > I've used them in a Nexsan with MAID capability, for nearline, and > they were fine for this purpose, but I wouldn't expect them to sit > happily in GPFS. > > Happy to be confirmed wrong in my suspicions. > By "green" - do you mean the 5400rpm drives? Or something else (spin-down?)? If 5400rpm - I can't think of a reason they wouldn't perform to expectations in GPFS. Naturally, you'd want to keep your metadata off them - and use them for sequential activity if possible (put large files on them). > -- > Pete Smith > DevOp/System Administrator > Realise Studio > 12/13 Poland Street, London W1F 8QB > T. +44 (0)20 7165 9644 > > realisestudio.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From orlando.richards at ed.ac.uk Tue Jul 9 15:54:50 2013 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Tue, 9 Jul 2013 15:54:50 +0100 (BST) Subject: [gpfsug-discuss] software RAID? In-Reply-To: References: Message-ID: On Tue, 9 Jul 2013, Pete Smith wrote: > Hi all > > Slightly nuts question, I know ... but is anyone using software RAID? > > Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, > obviously. > Hmm - for shared storage, or for a single-node disk server? If it's shared storage, I can imagine challenges with ensuring consistency across multiple servers - there'd presumably be no mirroring of in-flight or cached information between servers using the shared storage. If it's just one server connected to the disks you'd dodge that - though you'd want to be sure about consistency of data on disk in the event of a sudden server failure (power cut, etc). If you give it a go, I'd be interested to see how you get on with it. > TIA > > -- > Pete Smith > DevOp/System Administrator > Realise Studio > 12/13 Poland Street, London W1F 8QB > T. 
+44 (0)20 7165 9644 > > realisestudio.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From APPLEBY at uk.ibm.com Tue Jul 9 16:11:50 2013 From: APPLEBY at uk.ibm.com (Richard Appleby) Date: Tue, 9 Jul 2013 16:11:50 +0100 Subject: [gpfsug-discuss] AUTO: Richard Appleby/UK/IBM is out of the office until 26/07/99. (returning 28/10/2013) Message-ID: I am out of the office until 28/10/2013. Please direct enquiries to either: My manager, John Palfreyman (x246542) My deputies, Chris Gibson (x246192) and Jonathan Waddilove (x248250) Note: This is an automated response to your message "[gpfsug-discuss] software RAID?" sent on 09/07/2013 12:58:44. This is the only notification you will receive while this person is away. From jonathan at buzzard.me.uk Tue Jul 9 16:38:11 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 09 Jul 2013 16:38:11 +0100 Subject: [gpfsug-discuss] green drives In-Reply-To: References: Message-ID: <1373384291.8644.32.camel@buzzard.phy.strath.ac.uk> On Tue, 2013-07-09 at 15:49 +0100, orlando.richards at ed.ac.uk wrote: > On Tue, 9 Jul 2013, Pete Smith wrote: > > > Even more mental ... anyone using green drives in their lowest HD tier? > > > > I've used them in a Nexsan with MAID capability, for nearline, and > > they were fine for this purpose, but I wouldn't expect them to sit > > happily in GPFS. > > > > Happy to be confirmed wrong in my suspicions. > > > > By "green" - do you mean the 5400rpm drives? Or something else > (spin-down?)? > > If 5400rpm - I can't think of a reason they wouldn't perform to > expectations in GPFS. Naturally, you'd want to keep your metadata off them > - and use them for sequential activity if possible (put large files on > them). > You also, I think, need to make sure you are using "enterprise" versions of such drives. However, I don't believe there are "enterprise" versions of the 5400rpm drive variants, so using them would be, in my personal experience, as dumb as hell. Another point to bear in mind is that you will save a lot less power than you might imagine. For example, a Seagate Desktop HDD.15 4TB drive is 7.5W read/write, 5W idle, and the name gives it away. A Seagate Constellation ES.3 4TB drive, by contrast, is 11.3W read/write and 6.7W idle, and is enterprise rated. To make those numbers more meaningful, for ~90TB of usable disk space doing three RAID6's of 8D+2P you will save ~120W (30 drives saving roughly 4W apiece under load). Is that really worth it? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jonathan at buzzard.me.uk Tue Jul 9 16:52:40 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 09 Jul 2013 16:52:40 +0100 Subject: [gpfsug-discuss] software RAID? In-Reply-To: References: Message-ID: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> On Tue, 2013-07-09 at 12:58 +0100, Pete Smith wrote: > Hi all > > Slightly nuts question, I know ... but is anyone using software RAID? > > Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, > obviously. > I presume you are talking about Linux software RAID on an external JBOD array? My personal experience is that it sucks really, really badly. Put another way, what were fairly low-level operator tasks, such as replacing a failed hard disk, now become the domain of guru-level Linux admins.
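(To give a flavour of it, and this is only a rough sketch with made-up device names rather than anything from a real system, building and then servicing an mdadm RAID6 looks something like:

# create a 10-disk RAID6 (8 data + 2 parity) out of sdb..sdk
mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]
# a disk dies: mark it failed and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdf --remove /dev/sdf
# physically swap the drive, then add the replacement and watch the rebuild
mdadm --manage /dev/md0 --add /dev/sdf
cat /proc/mdstat

and that is before you have worked out which physical slot /dev/sdf actually lives in.)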
Then there are all the issues with having large numbers of drives hanging off the back of a Linux box. A Dell PowerVault MD3200/MD3260 with expansion enclosures as required is not a lot more expensive and a *LOT* less of a headache. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at us.ibm.com Tue Jul 9 17:18:42 2013 From: oehmes at us.ibm.com (Sven Oehme) Date: Tue, 9 Jul 2013 09:18:42 -0700 Subject: [gpfsug-discuss] software RAID? In-Reply-To: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> References: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> Message-ID: Hi, in case you are not aware of it, GPFS itself provides declustered, distributed software RAID capabilities with end-to-end checksums and many other features. It ships in the form of a pre-canned solution; take a look at http://www-03.ibm.com/systems/x/hardware/largescale/gpfsstorage/ Sven From: Jonathan Buzzard To: gpfsug main discussion list Date: 07/09/2013 09:00 AM Subject: Re: [gpfsug-discuss] software RAID? Sent by: gpfsug-discuss-bounces at gpfsug.org On Tue, 2013-07-09 at 12:58 +0100, Pete Smith wrote: > Hi all > > Slightly nuts question, I know ... but is anyone using software RAID? > > Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, > obviously. > I presume you are talking about Linux software RAID on an external JBOD array? My personal experience is that it sucks really, really badly. Put another way, what were fairly low-level operator tasks, such as replacing a failed hard disk, now become the domain of guru-level Linux admins. Then there are all the issues with having large numbers of drives hanging off the back of a Linux box. A Dell PowerVault MD3200/MD3260 with expansion enclosures as required is not a lot more expensive and a *LOT* less of a headache. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.bergman at uphs.upenn.edu Tue Jul 9 17:32:51 2013 From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu) Date: Tue, 09 Jul 2013 12:32:51 -0400 Subject: [gpfsug-discuss] green drives In-Reply-To: Your message of "Tue, 09 Jul 2013 16:38:11 BST." <1373384291.8644.32.camel@buzzard.phy.strath.ac.uk> References: <1373384291.8644.32.camel@buzzard.phy.strath.ac.uk> Message-ID: <32173.1373387571@localhost> In the message dated: Tue, 09 Jul 2013 13:00:59 +0100, The pithy ruminations from Pete Smith on <[gpfsug-discuss] green drives> were: => Even more mental ... anyone using green drives in their lowest HD tier? => => I've used them in a Nexsan with MAID capability, for nearline, and => they were fine for this purpose, but I wouldn't expect them to sit => happily in GPFS. Why not? We use an older Nexsan SATAboy, with MAID capability, as the slowest tier in our GPFS environment. GPFS doesn't know (or care) that the Nexsan hardware shuts down and spins up the disks on request--that's all hidden from the filesystem layer. Apart from a longer latency on some IO requests if the platters aren't spinning, there's nothing visible as far as GPFS is concerned. Mark => => Happy to be confirmed wrong in my suspicions. => => -- => Pete Smith => DevOp/System Administrator => Realise Studio => 12/13 Poland Street, London W1F 8QB => T.
+44 (0)20 7165 9644 => => realisestudio.com From sfadden at us.ibm.com Tue Jul 9 17:13:53 2013 From: sfadden at us.ibm.com (Scott Fadden) Date: Tue, 9 Jul 2013 10:13:53 -0600 Subject: [gpfsug-discuss] AUTO: I am on vacation until Jan 03 - 2012 (returning 07/29/2013) Message-ID: I am out of the office until 07/29/2013. Talk to you next year. Note: This is an automated response to your message "[gpfsug-discuss] software RAID?" sent on 07/09/2013 5:58:44. This is the only notification you will receive while this person is away. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Jul 10 10:46:02 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 10 Jul 2013 10:46:02 +0100 Subject: [gpfsug-discuss] software RAID? In-Reply-To: References: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> Message-ID: <1373449562.8644.67.camel@buzzard.phy.strath.ac.uk> On Tue, 2013-07-09 at 09:18 -0700, Sven Oehme wrote: > Hi, > > in case you are not aware of it, GPFS itself provides declustered, > distributed software RAID capabilities with end-to-end checksums and > many other features. > It ships in the form of a pre-canned solution; take a look at > http://www-03.ibm.com/systems/x/hardware/largescale/gpfsstorage/ > There is a world of difference between a tightly integrated system like that, where every component down to the rack is controlled by a single vendor, and a random JBOD expansion enclosure with a random x86 server, a random interconnect and a random version of Linux. Note, of course, that where I work anything that comes in a vendor-specified rack is a big problem, due to the fact that we use our own racks with water cooling. JAB. -- Jonathan A.
Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Sandra.McLaughlin at astrazeneca.com Thu Jul 11 12:24:26 2013 From: Sandra.McLaughlin at astrazeneca.com (McLaughlin, Sandra M) Date: Thu, 11 Jul 2013 12:24:26 +0100 Subject: [gpfsug-discuss] gpfs/nfs/autofs Message-ID: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> Hi, I would just like some opinions on the best way to serve a gpfs file system to server/workstations which are not directly connected to the storage. Background: We are in the process of moving from old storage (approx 20TB); lots of filesystems - JFS2 on AIX with HACMP. served out with NFS to a linux cluster and about 150 linux workstations and random other servers; to new storage (approx 250TB); 2 gpfs filesystems, Linux NSDs, using ctdb for NFS and Samba. We have also installed a server for TSM, which is SAN connected to the gpfs, and have some new compute servers which are also on the SAN, and therefore have pretty good performance. Should I still use the automounter ? Different maps or symbolic links to emulate the automounter names for the servers that are directly SAN-connected gpfs clients ? /home/username or whatever has to work on all systems. I found a bit in the gpfs problem determination guide suggesting that there is a way to use an automounter program map for gpfs (/usr/lpp/mmfs/bin/mmdynamicmap) but I can't find any other documentation about it. I would really like to hear how other people with a similar setup are doing this. Thanks, Sandra. Sandra McLaughlin Scientific Computing Specialist ___________________________________________________ AstraZeneca R&D | R&D Information 30F49, Mereside, Alderley Park, GB-Macclesfield, SK10 4TG Tel +44 1625 517307 sandra.mclaughlin at astrazeneca.com -------------------------------------------------------------------------- AstraZeneca UK Limited is a company incorporated in England and Wales with registered number: 03674842 and a registered office at 2 Kingdom Street, London, W2 6BD. Confidentiality Notice: This message is private and may contain confidential, proprietary and legally privileged information. If you have received this message in error, please notify us and remove it from your system and note that you must not copy, distribute or take any action in reliance on it. Any unauthorised use or disclosure of the contents of this message is not permitted and may be unlawful. Disclaimer: Email messages may be subject to delays, interception, non-delivery and unauthorised alterations. Therefore, information expressed in this message is not given or endorsed by AstraZeneca UK Limited unless otherwise notified by an authorised representative independent of this message. No contractual relationship is created by this message by any person unless specifically indicated by agreement in writing other than email. Monitoring: AstraZeneca UK Limited may monitor email traffic data and content for the purposes of the prevention and detection of crime, ensuring the security of our computer systems and checking Compliance with our Code of Conduct and Policies. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chair at gpfsug.org Fri Jul 12 14:59:51 2013 From: chair at gpfsug.org (Jez Tucker (GPFS UG Chair)) Date: Fri, 12 Jul 2013 14:59:51 +0100 Subject: [gpfsug-discuss] gpfs/nfs/autofs In-Reply-To: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> References: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> Message-ID: <51E00BD7.9010905@gpfsug.org> Hey Sandra, The mmdynamicmap is used when auto-mounting GPFS on a node the GPFS software installed (see also /var/mmfs/gen/mmIndirectMap when gpfs -A is set to 'automount'.) For NFS clients, I like autofs a lot. There are two types of map, hence an example for each: Direct maps /etc/auto.master, add the line: /- /etc/auto.gpfsnfs /etc/auto.gpfsnfs, add the line: /path/to/mountpoint -fstype=nfs,nfsvers=3 ctdbclustername:/path/to/nfsexport Indirect Maps For home directories, you can mount them using an indirect map so as to only mount the logged in user's home directory. (or mount them all, using a direct map for their containing folder) /etc/auto.master, add the line: /path/to/homedirsmount /etc/auto.homedirs /etc/auto.homedirs, add the line: * homeserver:/path/to/homedirs/& Test in a sandpit. I would imagine you might need to make sure that your NFS mount point reflects the same path as on a GPFS client/server. Once you're happy this works, you can push out the maps from your ldap/puppet/other service. I'm sure other folks also have their methods, chime in. Regards, Jez --- GPFS UG Chair On 11/07/13 12:24, McLaughlin, Sandra M wrote: > > Hi, > > I would just like some opinions on the best way to serve a gpfs file > system to server/workstations which are not directly connected to the > storage. > > Background: We are in the process of moving from old storage (approx > 20TB); lots of filesystems -- JFS2 on AIX with HACMP. served out with > NFS to a linux cluster and about 150 linux workstations and random > other servers; to new storage (approx 250TB); 2 gpfs filesystems, > Linux NSDs, using ctdb for NFS and Samba. We have also installed a > server for TSM, which is SAN connected to the gpfs, and have some new > compute servers which are also on the SAN, and therefore have pretty > good performance. > > Should I still use the automounter ? Different maps or symbolic links > to emulate the automounter names for the servers that are directly > SAN-connected gpfs clients ? /home//username/ or whatever has to work > on all systems. > > I found a bit in the gpfs problem determination guide suggesting that there is a way to use an automounter program map for gpfs (/usr/lpp/mmfs/bin/mmdynamicmap)but I can't find any other documentation about it. > > I would really like to hear how other people with a similar setup are doing this. > > Thanks, Sandra. > > *Sandra McLaughlin* > > Scientific Computing Specialist > > ___________________________________________________ > > *AstraZeneca* > > *R&D*| R&D Information > > 30F49, Mereside, Alderley Park, GB-Macclesfield, SK10 4TG > > Tel +44 1625 517307 > > sandra.mclaughlin at astrazeneca.com > > > ------------------------------------------------------------------------ > > AstraZeneca UK Limited is a company incorporated in England and Wales > with registered number: 03674842 and a registered office at 2 Kingdom > Street, London, W2 6BD. > > *Confidentiality Notice: *This message is private and may contain > confidential, proprietary and legally privileged information. 
If you > have received this message in error, please notify us and remove it > from your system and note that you must not copy, distribute or take > any action in reliance on it. Any unauthorised use or disclosure of > the contents of this message is not permitted and may be unlawful. > > *Disclaimer:* Email messages may be subject to delays, interception, > non-delivery and unauthorised alterations. Therefore, information > expressed in this message is not given or endorsed by AstraZeneca UK > Limited unless otherwise notified by an authorised representative > independent of this message. No contractual relationship is created by > this message by any person unless specifically indicated by agreement > in writing other than email. > > *Monitoring: *AstraZeneca UK Limited may monitor email traffic data > and content for the purposes of the prevention and detection of crime, > ensuring the security of our computer systems and checking compliance > with our Code of Conduct and policies. > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From pete at realisestudio.com Fri Jul 12 17:21:36 2013 From: pete at realisestudio.com (Pete Smith) Date: Fri, 12 Jul 2013 17:21:36 +0100 Subject: [gpfsug-discuss] gpfs/nfs/autofs In-Reply-To: <51E00BD7.9010905@gpfsug.org> References: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> <51E00BD7.9010905@gpfsug.org> Message-ID: Push from ldap works great. On 12 July 2013 14:59, Jez Tucker (GPFS UG Chair) wrote: > Hey Sandra, > > The mmdynamicmap is used when auto-mounting GPFS on a node the GPFS > software installed (see also /var/mmfs/gen/mmIndirectMap when gpfs -A is set > to 'automount'.) > > > For NFS clients, I like autofs a lot. > There are two types of map, hence an example for each: > > > Direct maps > > /etc/auto.master, add the line: > /- /etc/auto.gpfsnfs > > /etc/auto.gpfsnfs, add the line: > /path/to/mountpoint -fstype=nfs,nfsvers=3 > ctdbclustername:/path/to/nfsexport > > > Indirect Maps > > For home directories, you can mount them using an indirect map so as to only > mount the logged in user's home directory. > (or mount them all, using a direct map for their containing folder) > > /etc/auto.master, add the line: > /path/to/homedirsmount /etc/auto.homedirs > > /etc/auto.homedirs, add the line: > * homeserver:/path/to/homedirs/& > > > > Test in a sandpit. > > I would imagine you might need to make sure that your NFS mount point > reflects the same path as on a GPFS client/server. > > Once you're happy this works, you can push out the maps from your > ldap/puppet/other service. > > I'm sure other folks also have their methods, chime in. > > Regards, > > Jez > --- > GPFS UG Chair > > > > On 11/07/13 12:24, McLaughlin, Sandra M wrote: > > Hi, > > > > I would just like some opinions on the best way to serve a gpfs file system > to server/workstations which are not directly connected to the storage. > > > > Background: We are in the process of moving from old storage (approx 20TB); > lots of filesystems ? JFS2 on AIX with HACMP. served out with NFS to a linux > cluster and about 150 linux workstations and random other servers; to new > storage (approx 250TB); 2 gpfs filesystems, Linux NSDs, using ctdb for NFS > and Samba. 
We have also installed a server for TSM, which is SAN connected > to the gpfs, and have some new compute servers which are also on the SAN, > and therefore have pretty good performance. > > > > Should I still use the automounter ? Different maps or symbolic links to > emulate the automounter names for the servers that are directly > SAN-connected gpfs clients ? /home/username or whatever has to work on all > systems. > > I found a bit in the gpfs problem determination guide suggesting that there > is a way to use an automounter program map for gpfs > (/usr/lpp/mmfs/bin/mmdynamicmap) but I can't find any other documentation > about it. > > > > I would really like to hear how other people with a similar setup are doing > this. > > > > Thanks, Sandra. > > > > Sandra McLaughlin > > Scientific Computing Specialist > > ___________________________________________________ > > AstraZeneca > > R&D | R&D Information > > 30F49, Mereside, Alderley Park, GB-Macclesfield, SK10 4TG > > Tel +44 1625 517307 > > sandra.mclaughlin at astrazeneca.com > > > > > > ________________________________ > > AstraZeneca UK Limited is a company incorporated in England and Wales with > registered number: 03674842 and a registered office at 2 Kingdom Street, > London, W2 6BD. > > Confidentiality Notice: This message is private and may contain > confidential, proprietary and legally privileged information. If you have > received this message in error, please notify us and remove it from your > system and note that you must not copy, distribute or take any action in > reliance on it. Any unauthorised use or disclosure of the contents of this > message is not permitted and may be unlawful. > > Disclaimer: Email messages may be subject to delays, interception, > non-delivery and unauthorised alterations. Therefore, information expressed > in this message is not given or endorsed by AstraZeneca UK Limited unless > otherwise notified by an authorised representative independent of this > message. No contractual relationship is created by this message by any > person unless specifically indicated by agreement in writing other than > email. > > Monitoring: AstraZeneca UK Limited may monitor email traffic data and > content for the purposes of the prevention and detection of crime, ensuring > the security of our computer systems and checking compliance with our Code > of Conduct and policies. > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Pete Smith DevOp/System Administrator Realise Studio 12/13 Poland Street, London W1F 8QB T. +44 (0)20 7165 9644 realisestudio.com
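(For reference, a minimal sketch of what "pushing the maps from LDAP" can look like with the common autofs automount schema; the map name, base DN and server below are only placeholders, and some directories use the older nisMap/nisObject classes instead:

/etc/auto.master entry:
/path/to/homedirsmount ldap:automountMapName=auto.home,dc=example,dc=com

LDAP wildcard entry for home directories:
dn: automountKey=*,automountMapName=auto.home,dc=example,dc=com
objectClass: automount
automountKey: *
automountInformation: -fstype=nfs,nfsvers=3 homeserver:/path/to/homedirs/&

autofs then resolves the indirect map from the directory instead of from a local /etc/auto.homedirs file, so the same map reaches every workstation without pushing files around.)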
+44 (0)20 7165 9644 realisestudio.com From pete at realisestudio.com Tue Jul 9 13:00:59 2013 From: pete at realisestudio.com (Pete Smith) Date: Tue, 9 Jul 2013 13:00:59 +0100 Subject: [gpfsug-discuss] green drives Message-ID: Even more mental ... anyone using green drives in their lowest HD tier? I've used them in a Nexsan with MAID capability, for nearline, and they were fine for this purpose, but I wouldn't expect them to sit happily in GPFS. Happy to be confirmed wrong in my suspicions. -- Pete Smith DevOp/System Administrator Realise Studio 12/13 Poland Street, London W1F 8QB T. +44 (0)20 7165 9644 realisestudio.com From orlando.richards at ed.ac.uk Tue Jul 9 15:49:05 2013 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Tue, 9 Jul 2013 15:49:05 +0100 (BST) Subject: [gpfsug-discuss] green drives In-Reply-To: References: Message-ID: On Tue, 9 Jul 2013, Pete Smith wrote: > Even more mental ... anyone using green drives in their lowest HD tier? > > I've used them in a Nexsan with MAID capability, for nearline, and > they were fine for this purpose, but I wouldn't expect them to sit > happily in GPFS. > > Happy to be confirmed wrong in my suspicions. > By "green" - do you mean the 5400rpm drives? Or something else (spin-down?)? If 5400rpm - I can't think of a reason they wouldn't perform to expectations in GPFS. Naturally, you'd want to keep your metadata off them - and use them for sequential activity if possible (put large files on them). > -- > Pete Smith > DevOp/System Administrator > Realise Studio > 12/13 Poland Street, London W1F 8QB > T. +44 (0)20 7165 9644 > > realisestudio.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From orlando.richards at ed.ac.uk Tue Jul 9 15:54:50 2013 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Tue, 9 Jul 2013 15:54:50 +0100 (BST) Subject: [gpfsug-discuss] software RAID? In-Reply-To: References: Message-ID: On Tue, 9 Jul 2013, Pete Smith wrote: > Hi all > > Slightly nuts question, I know ... but is anyone using software RAID? > > Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, > obviously. > Hmm - for shared storage, or for a single-node disk server? If it's shared storage, I can imagine challenges with ensuring consistency across multiple servers - there'd presumably be no mirroring of in-flight or cached information between servers using the shared storage. If it's just one server connected to the disks you'd dodge that - though you'd want to be sure about consistency of data on disk in the event of a sudden server failure (power cut, etc). If you give it a go, I'd be interested to see how you get on with it. > TIA > > -- > Pete Smith > DevOp/System Administrator > Realise Studio > 12/13 Poland Street, London W1F 8QB > T. +44 (0)20 7165 9644 > > realisestudio.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
From APPLEBY at uk.ibm.com Tue Jul 9 16:11:50 2013 From: APPLEBY at uk.ibm.com (Richard Appleby) Date: Tue, 9 Jul 2013 16:11:50 +0100 Subject: [gpfsug-discuss] AUTO: Richard Appleby/UK/IBM is out of the office until 26/07/99. (returning 28/10/2013) Message-ID: I am out of the office until 28/10/2013. Please direct enquires to either: My manager, John Palfreyman (x246542) My deputies, Chris Gibson (x246192) and Jonathan Waddilove (x248250) Note: This is an automated response to your message "[gpfsug-discuss] software RAID?" sent on 09/07/2013 12:58:44. This is the only notification you will receive while this person is away. From jonathan at buzzard.me.uk Tue Jul 9 16:38:11 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 09 Jul 2013 16:38:11 +0100 Subject: [gpfsug-discuss] green drives In-Reply-To: References: Message-ID: <1373384291.8644.32.camel@buzzard.phy.strath.ac.uk> On Tue, 2013-07-09 at 15:49 +0100, orlando.richards at ed.ac.uk wrote: > On Tue, 9 Jul 2013, Pete Smith wrote: > > > Even more mental ... anyone using green drives in their lowest HD tier? > > > > I've used them in a Nexsan with MAID capability, for nearline, and > > they were fine for this purpose, but I wouldn't expect them to sit > > happily in GPFS. > > > > Happy to be confirmed wrong in my suspicions. > > > > By "green" - do you mean the 5400rpm drives? Or something else > (spin-down?)? > > If 5400rpm - I can't think of a reason they wouldn't perform to > expectations in GPFS. Naturally, you'd want to keep your metadata off them > - and use them for sequential activity if possible (put large files on > them). > You also I think need to make sure you are using "enterprise" versions of such drives. However I don't believe there are "enterprise" versions of the 5400rpm drive variants, therefore using them would be in my personal experience as dum as hell. Another point to bear in mind is you will save a lot less power than you might imagine. For example a Seagate Desktop HDD.15 4TB drive is 7.5W read/write, 5W idle and the name gives it away. While a Seagate Constellation ES.3 4TB drive is 11.3W read/write and 6.7W idle and enterprise rated. To make those numbers more meaningful for ~90TB or usable disk space doing three RAID6's of 8D+2P you will save ~120W. Is that really worth it? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From jonathan at buzzard.me.uk Tue Jul 9 16:52:40 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 09 Jul 2013 16:52:40 +0100 Subject: [gpfsug-discuss] software RAID? In-Reply-To: References: Message-ID: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> On Tue, 2013-07-09 at 12:58 +0100, Pete Smith wrote: > Hi all > > Slightly nuts question, I know ... but is anyone using software RAID? > > Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, > obviously. > I presume you are talking about Linux software RAID on external JOBD array? My personal experience is that it sucks really really badly. Put another way what where fairly low lever operator tasks such as replacing a failed hard disk, now become the domain of guru level Linux admins. Then there are all the issues with having large numbers of drives hanging of the back of a Linux box. A Dell PowerVault MD3200/MD3260 with expansion enclosures as required is not a lot more expensive and a *LOT* less of a headache. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
From oehmes at us.ibm.com Tue Jul 9 17:18:42 2013 From: oehmes at us.ibm.com (Sven Oehme) Date: Tue, 9 Jul 2013 09:18:42 -0700 Subject: [gpfsug-discuss] software RAID? In-Reply-To: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> References: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> Message-ID: Hi, in case you are not aware of it, GPFS itself provides declustered distributed Software Raid capabilities with end-to-end checksum and many other features. it ships in form of a pre-canned Solution, take a look at http://www-03.ibm.com/systems/x/hardware/largescale/gpfsstorage/ Sven From: Jonathan Buzzard To: gpfsug main discussion list Date: 07/09/2013 09:00 AM Subject: Re: [gpfsug-discuss] software RAID? Sent by: gpfsug-discuss-bounces at gpfsug.org On Tue, 2013-07-09 at 12:58 +0100, Pete Smith wrote: > Hi all > > Slightly nuts question, I know ... but is anyone using software RAID? > > Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, > obviously. > I presume you are talking about Linux software RAID on external JOBD array? My personal experience is that it sucks really really badly. Put another way what where fairly low lever operator tasks such as replacing a failed hard disk, now become the domain of guru level Linux admins. Then there are all the issues with having large numbers of drives hanging of the back of a Linux box. A Dell PowerVault MD3200/MD3260 with expansion enclosures as required is not a lot more expensive and a *LOT* less of a headache. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.bergman at uphs.upenn.edu Tue Jul 9 17:32:51 2013 From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu) Date: Tue, 09 Jul 2013 12:32:51 -0400 Subject: [gpfsug-discuss] green drives In-Reply-To: Your message of "Tue, 09 Jul 2013 16:38:11 BST." <1373384291.8644.32.camel@buzzard.phy.strath.ac.uk> References: <1373384291.8644.32.camel@buzzard.phy.strath.ac.uk> Message-ID: <32173.1373387571@localhost> In the message dated: Tue, 09 Jul 2013 13:00:59 +0100, The pithy ruminations from Pete Smith on <[gpfsug-discuss] green drives> were: => Even more mental ... anyone using green drives in their lowest HD tier? => => I've used them in a Nexsan with MAID capability, for nearline, and => they were fine for this purpose, but I wouldn't expect them to sit => happily in GPFS. Why not? We use an older Nexsan SATAboy, with MAID capability, as the slowest tier in our GPFS environment. GPFS doesn't know (or care) that the Nexsan hardware shuts down and spins up the disks on request--that's all hidden from the filesystem layer, except for a longer latency on some IO requests if the platters aren't spinning, there's nothing visible as far as GPFS is concerned. Mark => => Happy to be confirmed wrong in my suspicions. => => -- => Pete Smith => DevOp/System Administrator => Realise Studio => 12/13 Poland Street, London W1F 8QB => T. +44 (0)20 7165 9644 => => realisestudio.com From sfadden at us.ibm.com Tue Jul 9 17:13:53 2013 From: sfadden at us.ibm.com (Scott Fadden) Date: Tue, 9 Jul 2013 10:13:53 -0600 Subject: [gpfsug-discuss] AUTO: I am on vacation until Jan 03 - 2012 (returning 07/29/2013) Message-ID: I am out of the office until 07/29/2013. Talk to you next year. 
Note: This is an automated response to your message "[gpfsug-discuss] software RAID?" sent on 07/09/2013 5:58:44. This is the only notification you will receive while this person is away. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Wed Jul 10 10:46:02 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Wed, 10 Jul 2013 10:46:02 +0100 Subject: [gpfsug-discuss] software RAID? In-Reply-To: References: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> Message-ID: <1373449562.8644.67.camel@buzzard.phy.strath.ac.uk> On Tue, 2013-07-09 at 09:18 -0700, Sven Oehme wrote: > Hi, > > in case you are not aware of it, GPFS itself provides declustered > distributed Software Raid capabilities with end-to-end checksum and > many other features. > it ships in form of a pre-canned Solution, take a look at > http://www-03.ibm.com/systems/x/hardware/largescale/gpfsstorage/ > There is a world of difference between a tightly integrated system like that where every component down to the rack is controlled by a single vendor, and random JBOD expansion enclosure with random x86 server, random interconnect and random version of Linux. Noting of course where I work anything that comes in a vendor specified rack is a big problem due to the fact we use our own racks with water cooling. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From oehmes at us.ibm.com Wed Jul 10 14:28:37 2013 From: oehmes at us.ibm.com (Sven Oehme) Date: Wed, 10 Jul 2013 06:28:37 -0700 Subject: [gpfsug-discuss] software RAID? In-Reply-To: <1373449562.8644.67.camel@buzzard.phy.strath.ac.uk> References: <1373385160.8644.44.camel@buzzard.phy.strath.ac.uk> <1373449562.8644.67.camel@buzzard.phy.strath.ac.uk> Message-ID: i am not sure what the exact % is , but multiple GSS customers use their own racks. GSS supports a variety of Interconnects and the clients can run a large number of Linux distros, AIX and Windows that are supported, even in intermix within one cluster. we also have quite a number of customers using IBM equipment as the storage resource, but their own servers for the clients, which is usually the majority of the nodes in a cluster. Sven From: Jonathan Buzzard To: gpfsug main discussion list Date: 07/10/2013 02:46 AM Subject: Re: [gpfsug-discuss] software RAID? Sent by: gpfsug-discuss-bounces at gpfsug.org On Tue, 2013-07-09 at 09:18 -0700, Sven Oehme wrote: > Hi, > > in case you are not aware of it, GPFS itself provides declustered > distributed Software Raid capabilities with end-to-end checksum and > many other features. > it ships in form of a pre-canned Solution, take a look at > http://www-03.ibm.com/systems/x/hardware/largescale/gpfsstorage/ > There is a world of difference between a tightly integrated system like that where every component down to the rack is controlled by a single vendor, and random JBOD expansion enclosure with random x86 server, random interconnect and random version of Linux. Noting of course where I work anything that comes in a vendor specified rack is a big problem due to the fact we use our own racks with water cooling. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Sandra.McLaughlin at astrazeneca.com Thu Jul 11 12:24:26 2013 From: Sandra.McLaughlin at astrazeneca.com (McLaughlin, Sandra M) Date: Thu, 11 Jul 2013 12:24:26 +0100 Subject: [gpfsug-discuss] gpfs/nfs/autofs Message-ID: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> Hi, I would just like some opinions on the best way to serve a gpfs file system to server/workstations which are not directly connected to the storage. Background: We are in the process of moving from old storage (approx 20TB); lots of filesystems - JFS2 on AIX with HACMP. served out with NFS to a linux cluster and about 150 linux workstations and random other servers; to new storage (approx 250TB); 2 gpfs filesystems, Linux NSDs, using ctdb for NFS and Samba. We have also installed a server for TSM, which is SAN connected to the gpfs, and have some new compute servers which are also on the SAN, and therefore have pretty good performance. Should I still use the automounter ? Different maps or symbolic links to emulate the automounter names for the servers that are directly SAN-connected gpfs clients ? /home/username or whatever has to work on all systems. I found a bit in the gpfs problem determination guide suggesting that there is a way to use an automounter program map for gpfs (/usr/lpp/mmfs/bin/mmdynamicmap) but I can't find any other documentation about it. I would really like to hear how other people with a similar setup are doing this. Thanks, Sandra. Sandra McLaughlin Scientific Computing Specialist ___________________________________________________ AstraZeneca R&D | R&D Information 30F49, Mereside, Alderley Park, GB-Macclesfield, SK10 4TG Tel +44 1625 517307 sandra.mclaughlin at astrazeneca.com -------------------------------------------------------------------------- AstraZeneca UK Limited is a company incorporated in England and Wales with registered number: 03674842 and a registered office at 2 Kingdom Street, London, W2 6BD. Confidentiality Notice: This message is private and may contain confidential, proprietary and legally privileged information. If you have received this message in error, please notify us and remove it from your system and note that you must not copy, distribute or take any action in reliance on it. Any unauthorised use or disclosure of the contents of this message is not permitted and may be unlawful. Disclaimer: Email messages may be subject to delays, interception, non-delivery and unauthorised alterations. Therefore, information expressed in this message is not given or endorsed by AstraZeneca UK Limited unless otherwise notified by an authorised representative independent of this message. No contractual relationship is created by this message by any person unless specifically indicated by agreement in writing other than email. Monitoring: AstraZeneca UK Limited may monitor email traffic data and content for the purposes of the prevention and detection of crime, ensuring the security of our computer systems and checking Compliance with our Code of Conduct and Policies. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chair at gpfsug.org Fri Jul 12 14:59:51 2013 From: chair at gpfsug.org (Jez Tucker (GPFS UG Chair)) Date: Fri, 12 Jul 2013 14:59:51 +0100 Subject: [gpfsug-discuss] gpfs/nfs/autofs In-Reply-To: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> References: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> Message-ID: <51E00BD7.9010905@gpfsug.org> Hey Sandra, The mmdynamicmap is used when auto-mounting GPFS on a node the GPFS software installed (see also /var/mmfs/gen/mmIndirectMap when gpfs -A is set to 'automount'.) For NFS clients, I like autofs a lot. There are two types of map, hence an example for each: Direct maps /etc/auto.master, add the line: /- /etc/auto.gpfsnfs /etc/auto.gpfsnfs, add the line: /path/to/mountpoint -fstype=nfs,nfsvers=3 ctdbclustername:/path/to/nfsexport Indirect Maps For home directories, you can mount them using an indirect map so as to only mount the logged in user's home directory. (or mount them all, using a direct map for their containing folder) /etc/auto.master, add the line: /path/to/homedirsmount /etc/auto.homedirs /etc/auto.homedirs, add the line: * homeserver:/path/to/homedirs/& Test in a sandpit. I would imagine you might need to make sure that your NFS mount point reflects the same path as on a GPFS client/server. Once you're happy this works, you can push out the maps from your ldap/puppet/other service. I'm sure other folks also have their methods, chime in. Regards, Jez --- GPFS UG Chair On 11/07/13 12:24, McLaughlin, Sandra M wrote: > > Hi, > > I would just like some opinions on the best way to serve a gpfs file > system to server/workstations which are not directly connected to the > storage. > > Background: We are in the process of moving from old storage (approx > 20TB); lots of filesystems -- JFS2 on AIX with HACMP. served out with > NFS to a linux cluster and about 150 linux workstations and random > other servers; to new storage (approx 250TB); 2 gpfs filesystems, > Linux NSDs, using ctdb for NFS and Samba. We have also installed a > server for TSM, which is SAN connected to the gpfs, and have some new > compute servers which are also on the SAN, and therefore have pretty > good performance. > > Should I still use the automounter ? Different maps or symbolic links > to emulate the automounter names for the servers that are directly > SAN-connected gpfs clients ? /home//username/ or whatever has to work > on all systems. > > I found a bit in the gpfs problem determination guide suggesting that there is a way to use an automounter program map for gpfs (/usr/lpp/mmfs/bin/mmdynamicmap)but I can't find any other documentation about it. > > I would really like to hear how other people with a similar setup are doing this. > > Thanks, Sandra. > > *Sandra McLaughlin* > > Scientific Computing Specialist > > ___________________________________________________ > > *AstraZeneca* > > *R&D*| R&D Information > > 30F49, Mereside, Alderley Park, GB-Macclesfield, SK10 4TG > > Tel +44 1625 517307 > > sandra.mclaughlin at astrazeneca.com > > > ------------------------------------------------------------------------ > > AstraZeneca UK Limited is a company incorporated in England and Wales > with registered number: 03674842 and a registered office at 2 Kingdom > Street, London, W2 6BD. > > *Confidentiality Notice: *This message is private and may contain > confidential, proprietary and legally privileged information. 
If you > have received this message in error, please notify us and remove it > from your system and note that you must not copy, distribute or take > any action in reliance on it. Any unauthorised use or disclosure of > the contents of this message is not permitted and may be unlawful. > > *Disclaimer:* Email messages may be subject to delays, interception, > non-delivery and unauthorised alterations. Therefore, information > expressed in this message is not given or endorsed by AstraZeneca UK > Limited unless otherwise notified by an authorised representative > independent of this message. No contractual relationship is created by > this message by any person unless specifically indicated by agreement > in writing other than email. > > *Monitoring: *AstraZeneca UK Limited may monitor email traffic data > and content for the purposes of the prevention and detection of crime, > ensuring the security of our computer systems and checking compliance > with our Code of Conduct and policies. > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From pete at realisestudio.com Fri Jul 12 17:21:36 2013 From: pete at realisestudio.com (Pete Smith) Date: Fri, 12 Jul 2013 17:21:36 +0100 Subject: [gpfsug-discuss] gpfs/nfs/autofs In-Reply-To: <51E00BD7.9010905@gpfsug.org> References: <1332E4641C13494B98BB1E3DB32CC8150E5CFE03@ukaprdembx02.rd.astrazeneca.net> <51E00BD7.9010905@gpfsug.org> Message-ID: Push from ldap works great. On 12 July 2013 14:59, Jez Tucker (GPFS UG Chair) wrote: > Hey Sandra, > > The mmdynamicmap is used when auto-mounting GPFS on a node the GPFS > software installed (see also /var/mmfs/gen/mmIndirectMap when gpfs -A is set > to 'automount'.) > > > For NFS clients, I like autofs a lot. > There are two types of map, hence an example for each: > > > Direct maps > > /etc/auto.master, add the line: > /- /etc/auto.gpfsnfs > > /etc/auto.gpfsnfs, add the line: > /path/to/mountpoint -fstype=nfs,nfsvers=3 > ctdbclustername:/path/to/nfsexport > > > Indirect Maps > > For home directories, you can mount them using an indirect map so as to only > mount the logged in user's home directory. > (or mount them all, using a direct map for their containing folder) > > /etc/auto.master, add the line: > /path/to/homedirsmount /etc/auto.homedirs > > /etc/auto.homedirs, add the line: > * homeserver:/path/to/homedirs/& > > > > Test in a sandpit. > > I would imagine you might need to make sure that your NFS mount point > reflects the same path as on a GPFS client/server. > > Once you're happy this works, you can push out the maps from your > ldap/puppet/other service. > > I'm sure other folks also have their methods, chime in. > > Regards, > > Jez > --- > GPFS UG Chair > > > > On 11/07/13 12:24, McLaughlin, Sandra M wrote: > > Hi, > > > > I would just like some opinions on the best way to serve a gpfs file system > to server/workstations which are not directly connected to the storage. > > > > Background: We are in the process of moving from old storage (approx 20TB); > lots of filesystems ? JFS2 on AIX with HACMP. served out with NFS to a linux > cluster and about 150 linux workstations and random other servers; to new > storage (approx 250TB); 2 gpfs filesystems, Linux NSDs, using ctdb for NFS > and Samba. 
We have also installed a server for TSM, which is SAN connected > to the gpfs, and have some new compute servers which are also on the SAN, > and therefore have pretty good performance. > > > > Should I still use the automounter ? Different maps or symbolic links to > emulate the automounter names for the servers that are directly > SAN-connected gpfs clients ? /home/username or whatever has to work on all > systems. > > I found a bit in the gpfs problem determination guide suggesting that there > is a way to use an automounter program map for gpfs > (/usr/lpp/mmfs/bin/mmdynamicmap) but I can?t find any other documentation > about it. > > > > I would really like to hear how other people with a similar setup are doing > this. > > > > Thanks, Sandra. > > > > Sandra McLaughlin > > Scientific Computing Specialist > > ___________________________________________________ > > AstraZeneca > > R&D | R&D Information > > 30F49, Mereside, Alderley Park, GB-Macclesfield, SK10 4TG > > Tel +44 1625 517307 > > sandra.mclaughlin at astrazeneca.com > > > > > > ________________________________ > > AstraZeneca UK Limited is a company incorporated in England and Wales with > registered number: 03674842 and a registered office at 2 Kingdom Street, > London, W2 6BD. > > Confidentiality Notice: This message is private and may contain > confidential, proprietary and legally privileged information. If you have > received this message in error, please notify us and remove it from your > system and note that you must not copy, distribute or take any action in > reliance on it. Any unauthorised use or disclosure of the contents of this > message is not permitted and may be unlawful. > > Disclaimer: Email messages may be subject to delays, interception, > non-delivery and unauthorised alterations. Therefore, information expressed > in this message is not given or endorsed by AstraZeneca UK Limited unless > otherwise notified by an authorised representative independent of this > message. No contractual relationship is created by this message by any > person unless specifically indicated by agreement in writing other than > email. > > Monitoring: AstraZeneca UK Limited may monitor email traffic data and > content for the purposes of the prevention and detection of crime, ensuring > the security of our computer systems and checking compliance with our Code > of Conduct and policies. > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Pete Smith DevOp/System Administrator Realise Studio 12/13 Poland Street, London W1F 8QB T. +44 (0)20 7165 9644 realisestudio.com From pete at realisestudio.com Tue Jul 9 12:58:44 2013 From: pete at realisestudio.com (Pete Smith) Date: Tue, 9 Jul 2013 12:58:44 +0100 Subject: [gpfsug-discuss] software RAID? Message-ID: Hi all Slightly nuts question, I know ... but is anyone using software RAID? Our test rig has only 0, 1 or 10 as an option. And we'd like to use 6, obviously. TIA -- Pete Smith DevOp/System Administrator Realise Studio 12/13 Poland Street, London W1F 8QB T. 