From Jez.Tucker at rushes.co.uk Thu Aug 2 10:37:00 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 2 Aug 2012 09:37:00 +0000 Subject: [gpfsug-discuss] Storagepool and threshold migrations Message-ID: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> Allo Wondering if anyone uses multiple stgpools under a namespace where all stgpools have thresholds attached. Assume two storage pools, A and B, under namespace N. If storage pool A hits a threshold then mmapplypolicy will be called to migrate stgpool A. However, whilst that is occurring, if stgpool B hits its threshold then the same mmapplypolicy will be called. Obviously, this can't work if you're using --single-instance or a file locking method to stop the policy being applied every 2 mins due to the lowDiskSpace event. How do you handle this? Move any migration logic out of the main 'running policy' and change the callback to --parms "%eventName %fsName %storagePool" with locking on a per-stgpool basis? --- Jez Tucker Senior Sysadmin Rushes DDI: +44 (0) 207 851 6276 http://www.rushes.co.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Aug 6 18:23:36 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 6 Aug 2012 17:23:36 +0000 Subject: [gpfsug-discuss] GPFS & VMware Workstation Message-ID: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed...
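A sketch of the per-stgpool locking idea from the threshold-migration question above. Everything here is illustrative (paths, callback name, policy file); mkdir is atomic, so it doubles as a test-and-set lock per storage pool:

```shell
# Hypothetical lowDiskSpace callback with per-storage-pool locking.
# It would be registered with something like:
#   mmaddcallback poolMigrate --command /usr/local/sbin/pool_migrate.sh \
#     --event lowDiskSpace --parms "%eventName %fsName %storagePool"
migrate_pool() {
    eventName="$1"; fsName="$2"; storagePool="$3"
    lockdir="${LOCKDIR_BASE:-/var/run}/migrate.${fsName}.${storagePool}.lock"

    # mkdir either creates the lock or fails atomically if it exists,
    # so a second event for the same pool is skipped...
    if ! mkdir "$lockdir" 2>/dev/null; then
        echo "skip ${storagePool}: migration already running"
        return 0
    fi
    # ...while an event for a different pool proceeds independently.
    # The real migration would run here, e.g.:
    #   mmapplypolicy "$fsName" -P /etc/gpfs/migrate.pol ...
    echo "migrate ${storagePool}"
    rmdir "$lockdir"
}
```

With one lock directory per pool, a lowDiskSpace event for pool B is no longer serialised behind a running migration of pool A, and --single-instance is not needed.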
URL: From sfadden at us.ibm.com Mon Aug 6 18:30:49 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Mon, 6 Aug 2012 10:30:49 -0700 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: Two things to look for: 1. Make sure the virtual LUNs do not do any server caching of data. 2. Use nsddevices file (did you try this?) Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/06/2012 10:25 AM Subject: [gpfsug-discuss] GPFS & VMware Workstation Sent by: gpfsug-discuss-bounces at gpfsug.org Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Mon Aug 6 18:31:40 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 6 Aug 2012 18:31:40 +0100 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: I do this on virtualbox a lot. I use an OpenFiler VM to provide iSCSI targets to all of the VMs. Works great as long as you don't actually put much data on it. Not enough IOPS to go round. It would run much better if I had an SSD.
Regards, Vic On 6 Aug 2012, at 18:23, Jez Tucker wrote: > Has anyone managed to set this up? (Completely unsupported) > > What sort of vmware disks did you use? > > I created lsilogic vmdks and could actually do mmcrnsd. > That said, mmcrfs fails as it can't find the disks. > > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Aug 6 18:53:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 6 Aug 2012 17:53:35 +0000 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B41BB@WARVWEXC1.uk.deluxe-eu.com> It seems vmware needs disk locking switched off. Funny that ;-) To create a disk: vmware-vdiskmanager -a lsi-logic -c -s 10GB -t 2 clusterdisk1.vmdk Add the disk to your nsd server #1. Save config. Edit .vmx file for server. Add the line: disk.locking = "false" Boot server. Do this for your other quorum-managers. Disk type in nsddevices is "generic". Badda bing. One virtual test cluster. Thanks all. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 06 August 2012 18:31 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] GPFS & VMware Workstation Two things to look for: 1. Make sure the virtual LUNs do not do any server caching of data. 2. Use nsddevices file (did you try this?)
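Scott's nsddevices suggestion and Jez's "generic" disk type could combine into a user exit roughly like this (a sketch only: the device names sdb/sdc and the directory parameter are assumptions, and the real exit's exit-status convention is documented in the sample GPFS ships under /usr/lpp/mmfs/samples):

```shell
# Hypothetical sketch of the /var/mmfs/etc/nsddevices user exit for a
# VMware test cluster. GPFS runs this during device discovery; each
# output line names a candidate device and its type. "generic" is the
# type reported working in this thread for vmdk-backed disks.
nsd_candidates() {
    devdir="${1:-/dev}"
    for dev in sdb sdc; do
        # A production exit would test for a block device with [ -b ... ]
        if [ -e "$devdir/$dev" ]; then
            echo "$dev generic"
        fi
    done
}
```

The real exit would simply call nsd_candidates and then exit with the status that tells mmdevdiscover whether to run its built-in discovery as well.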
Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/06/2012 10:25 AM Subject: [gpfsug-discuss] GPFS & VMware Workstation Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattw at vpac.org Tue Aug 7 05:32:11 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 14:32:11 +1000 (EST) Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <1697484944.65711.1344313730219.JavaMail.root@mail> Message-ID: <967227407.65713.1344313931532.JavaMail.root@mail> I know this question is a month or so old, but I figure this is a ping to see if the list is still alive or not :-) >Curiosity... > > How many of you run Windows, Linux and OS X as clients > (GPFS/NFS/CIFS), in any configuration? > >Jez We have 2 small clusters of 42 nodes each, one of them is all Linux, the other a mixture of Linux and Windows Server 2008R2 clients, and to make it more fun, we dual boot the client nodes. 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running CentOS 5 and Windows Server. 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote mounting the FS from the above cluster. 1 single host remote Windows Server remote mounting one of the FS for streaming data capture from a camera. 1 occasional headache for me. Matt.
-- Matthew Wallis, Systems Administrator Victorian Partnership for Advanced Computing. Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 From Jez.Tucker at rushes.co.uk Tue Aug 7 08:36:47 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 07:36:47 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <967227407.65713.1344313931532.JavaMail.root@mail> References: <1697484944.65711.1344313730219.JavaMail.root@mail> <967227407.65713.1344313931532.JavaMail.root@mail> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B432F@WARVWEXC1.uk.deluxe-eu.com> Indeed it is. Nice to know what our members are running. I should really make a histogram or suchlike. Any python monkeys out there? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Matthew Wallis > Sent: 07 August 2012 05:32 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Your GPFS O/S support? > > > I know this question is a month or so old, but I figure this is a ping to see if > the list is still alive or not :-) > > >Curiosity... > > > > How many of you run Windows, Linux and OS X as clients > > (GPFS/NFS/CIFS), in any configuration? > > > >Jez > > We have 2 small clusters of 42 nodes each, one of them is all Linux, the > other a mixture of Linux and Windows Server 2008R2 clients, and to make it > more fun, we dual boot the client nodes. > > 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running > CentOS 5 and Windows Server. > > 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote > mounting the FS from the above cluster. > > 1 single host remote Windows Server remote mounting one of the FS for > streaming data capture from a camera. > > 1 occasional headache for me. > > Matt. > > -- > Matthew Wallis, Systems Administrator > Victorian Partnership for Advanced Computing. 
> Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From robert at strubi.ox.ac.uk Tue Aug 7 12:09:31 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 7 Aug 2012 12:09:31 +0100 (BST) Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <201208071109.062254@mail.strubi.ox.ac.uk> Dear GPFS users, Please excuse what is possibly a naive question from a not-yet GPFS admin. We are seriously considering GPFS to provide storage for our compute clusters. We are probably looking at about 600-900TB served into 2000+ Linux cores over InfiniBand. DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is high I/O rates from multiple readers (100-1000) all accessing parts of the same set of 1000-5000 large files (typically 30GB BAM files, for those in the know). We could easily sustain read rates of 5-10GB/s or more if the system would cope. My question is how should we go about configuring the number and specifications of the NSDs? Are there any good rules of thumb? And are there any folk out there using GPFS for high I/O rates like this in a similar setup who would be happy to have their brains/experiences picked? Thanks in advance and best wishes, Robert Esnouf -- Dr. 
Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 From Jez.Tucker at rushes.co.uk Tue Aug 7 12:32:55 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 11:32:55 +0000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B44DC@WARVWEXC1.uk.deluxe-eu.com> The HPC folks should probably step in here. Not having such a large system, I'll point you at : https://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_complan.htm > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Robert Esnouf > Sent: 07 August 2012 12:10 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] A GPFS newbie > > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet GPFS admin. > We are seriously considering GPFS to provide storage for our compute > clusters. We are probably looking at about 600-900TB served into 2000+ > Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is > high I/O rates from multiple readers (100-1000) all accessing parts of the > same set of 1000-5000 large files (typically 30GB BAM files, for those in the > know). We could easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number and > specifications of the NSDs? Are there any good rules of thumb? 
And are > there any folk out there using GPFS for high I/O rates like this in a similar > setup who would be happy to have their brains/experiences picked? > > Thanks in advance and best wishes, > Robert Esnouf > > -- > > Dr. Robert Esnouf, > University Research Lecturer > and Head of Research Computing, > Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt > Drive, Oxford OX3 7BN, UK > > Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 > and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From mattw at vpac.org Tue Aug 7 12:43:00 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 21:43:00 +1000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <419502BD-5AAC-43E3-8116-4A96DDBC64C5@vpac.org> Hi Robert, On 07/08/2012, at 9:09 PM, Robert Esnouf wrote: > > Dear GPFS users, > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? From IBM, an x3650 M3 should be able to provide around 2.4GB/sec over QDR IB. That's with 12GB of RAM and dual quad-core X5667s. They believe with the M4 you should be able to sustain somewhere near double that, but we'll say 4GB/sec for safety. So with 4 of those you should be pushing somewhere north of 16GB/sec. With FDR IB and PCIe 3.0, I can certainly believe it's possible. I think they've doubled the minimum RAM in the recent proposal we had from them.
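Matt's aggregate-bandwidth estimate turns into per-node figures with some quick shell arithmetic (illustrative only, integer MB/sec):

```shell
# Back-of-envelope division of aggregate NSD server bandwidth across
# client nodes: servers x GB/sec-per-server, spread over N clients.
per_node_mb() {
    servers="$1"; gb_per_server="$2"; client_nodes="$3"
    echo $(( servers * gb_per_server * 1024 / client_nodes ))
}

per_node_mb 4 4 32    # 16GB/sec aggregate over 32 fat nodes -> 512
per_node_mb 4 4 125   # same aggregate over 125 16-core nodes -> 131
```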
In our benchmarks we certainly found the M3s capable of it; for daily use our workloads are too mixed, and we don't have anyone doing sustained reads or writes on those types of files. Might have to be a bit more expansive on your node configuration though. I can get 2000 cores in 32 nodes these days, so that spec would give you 512MB/sec per node if everyone is reading and writing at once. If you're only doing 16 cores per node, then that's 125 nodes, and only 131MB/sec per node. Matt. From j.buzzard at dundee.ac.uk Tue Aug 7 12:56:14 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Tue, 7 Aug 2012 12:56:14 +0100 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <5021025E.6090102@dundee.ac.uk> On 07/08/12 12:09, Robert Esnouf wrote: > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet > GPFS admin. We are seriously considering GPFS to provide > storage for our compute clusters. We are probably looking at > about 600-900TB served into 2000+ Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific > need is high I/O rates from multiple readers (100-1000) all > accessing parts of the same set of 1000-5000 large files > (typically 30GB BAM files, for those in the know). We could > easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? > I would guess the biggest question is how sequential is the workload? Also how many cores per box, aka how many cores per storage interface card?
The next question would be how much of your data is "old cruft", that is, files which have not been used in a long time, but are not going to be deleted because they might be useful? If this is a reasonably high number then tiering/ILM is a worthwhile strategy to follow. Of course if you can afford to buy all your data disks in 600GB 3.5" 15kRPM disks then that is the way to go. Using SSDs for your metadata disks is, I would say, a must. How much depends on how many files you have. More detailed answers would require more information. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH The University of Dundee is a registered Scottish Charity, No: SC015096 From s.watkins at nhm.ac.uk Wed Aug 8 11:24:42 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 10:24:42 +0000 Subject: [gpfsug-discuss] Upgrade path Message-ID: Hello, I'm currently looking after a GPFS setup with six nodes and about 80TB disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it. The (vague) plan is to do a rolling upgrade of the various nodes, working through them one at a time, leaving the cluster manager node until last, then doing a failover of that role to another node and then upgrading the last host. Is there a standard upgrade methodology for GPFS systems or any tricks, tips or traps to know about before I go ahead with this? Also is it 'safe' to assume that I could upgrade straight from 3.4.0 to 3.5.x or are there any intermediary steps that need to be performed as well? Any help or advice appreciated, Steff Watkins ----- Steff Watkins Natural History Museum, Cromwell Road, London,SW7 5BD Systems programmer Email: s.watkins at nhm.ac.uk Systems Team Phone: +44 (0)20 7942 6000 opt 2 ======== "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place.
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From viccornell at gmail.com Wed Aug 8 12:32:19 2012 From: viccornell at gmail.com (Vic Cornell) Date: Wed, 8 Aug 2012 12:32:19 +0100 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: Message-ID: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> As with all of these things the Wiki is your friend. In this case it will point you at the documentation. The bits you want are here. http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm and http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm You can have both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it unless I had to. Regards, Vic On 8 Aug 2012, at 11:24, Steff Watkins wrote: > Hello, > > I'm currently looking after a GPFS setup with six nodes and about 80TB disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it. The (vague) plan is to do a rolling upgrade of the various nodes working through them one at a time leaving the cluster manager node until last then doing a failover of that role to another node and then upgrading the last host. > > Is there a standard upgrade methodology for GPFS systems or any tricks, tips or traps to know about before I go ahead with this? > > Also is it 'safe' to assume that I could upgrade straight from 3.4.0 to 3.5.x or are there any intermediary steps that need to be performed as well? > > Any help or advice appreciated, > Steff Watkins > > ----- > Steff Watkins Natural History Museum, Cromwell Road, London,SW7 5BD > Systems programmer Email: s.watkins at nhm.ac.uk > Systems Team Phone: +44 (0)20 7942 6000 opt 2 > ======== > "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place.
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From s.watkins at nhm.ac.uk Wed Aug 8 13:59:04 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 12:59:04 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Vic Cornell > Sent: Wednesday, August 08, 2012 12:32 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > As with all of these things the Wiki is your friend. > > In this case it will point you at the documentation. > > The bits you want are here. > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > and > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > You can have both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it > unless I had to. > > Regards, > > Vic As I'm relatively new to the list (been here about two months) I've missed/not been aware of the wiki. Very big thanks to you for putting me onto this. It looks like it's got pretty much everything I'll need for the moment to get the upgrades done.
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From Jez.Tucker at rushes.co.uk Wed Aug 8 14:06:27 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 8 Aug 2012 13:06:27 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B4D52@WARVWEXC1.uk.deluxe-eu.com> I should mention - though the website is, well, dire atm, if there's useful links I'm more than happy to put them up there. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Steff Watkins > Sent: 08 August 2012 13:59 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Vic Cornell > > Sent: Wednesday, August 08, 2012 12:32 PM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Upgrade path > > > > As with all of these things the Wiki is your friend. > > > > In this case it will point you at the documentation. > > > > The bits you want are here. > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > > > and > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > > > You can have both 3.4 and 3.5 nodes in a cluster - but I personally > > wouldn't do it unless I had to. > > > > Regards, > > > > Vic > > As I'm relatively new to the list (been here about two months) I've > missed/not been aware of the wiki. > > Very big thanks to you for putting me onto this. It looks like it's got pretty > much everything I'll need for the moment to get the upgrades done.
> > Regards, > Steff Watkins > > ----- > Steff Watkins Natural History Museum, Cromwell Road, > London,SW75BD > Systems programmer Email: s.watkins at nhm.ac.uk > Systems Team Phone: +44 (0)20 7942 6000 opt 2 > ======== > "Many were increasingly of the opinion that they'd all made a big mistake in > coming down from the trees in the first place. And some said that even the > trees had been a bad move, and that no one should ever have left the > oceans." - HHGTTG _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From crobson at ocf.co.uk Wed Aug 8 15:29:55 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 8 Aug 2012 15:29:55 +0100 Subject: [gpfsug-discuss] Agenda for September meeting Message-ID: Dear All, The time is nearly here for our next group meeting. We have organised another fantastic day of speakers for you and really hope you continue to support as well as you have done previously. Please see below the agenda for the next user group meeting: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 pNFS and GPFS Dean Hildebrand, Research Staff Member - Storage Systems IBM Almaden Research Center 12:30 Lunch (Buffet provided) 13:30 SAN Volume Controller/V7000, Easy Tier and Real Time Compression 14:30 WOS: Web Object Scalar Vic Cornell, DDN 14:50 GPFS Metadata + SSDs Andrew Dean, OCF 15:20 User experience of GPFS 15:50 Stupid GPFS Tricks 2012 16:00 Group discussion: Challenges, experiences and questions Led by Jez Tucker, Group Chairperson 16:20 Close The meeting will take place on 20th September at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Please register with me if you will be attending the day no later than 6th September. Places are limited and available on a first come first served basis. 
I look forward to seeing as many of you there as possible! Best wishes Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GPFSUGAgendaSeptember2012.pdf Type: application/pdf Size: 65989 bytes Desc: GPFSUGAgendaSeptember2012.pdf URL: From robert at strubi.ox.ac.uk Tue Aug 14 17:02:47 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 14 Aug 2012 17:02:47 +0100 (BST) Subject: [gpfsug-discuss] Agenda for September meeting In-Reply-To: References: Message-ID: <201208141602.062512@mail.strubi.ox.ac.uk> Dear Claire, I would be interested in attending the GPFS User Group Meeting on 20th September. I am not a GPFS user yet, although we are seriously looking at it and may have an evaluation system by then. If it is still OK for me to attend then please let me know. Best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 ---- Original message ---- >Date: Wed, 8 Aug 2012 15:29:55 +0100 >From: gpfsug-discuss-bounces at gpfsug.org (on behalf of Claire Robson ) >Subject: [gpfsug-discuss] Agenda for September meeting >To: "gpfsug-discuss at gpfsug.org" > > Dear All, > > > > The time is nearly here for our next group meeting. 
> We have organised another fantastic day of speakers > for you and really hope you continue to support as > well as you have done previously. Please see below > the agenda for the next user group meeting: > > > > 10:30 Arrivals and refreshments > > 11:00 Introductions and committee updates > > Jez Tucker, Group Chair & Claire Robson, Group > Secretary > > 11:05 pNFS and GPFS > > Dean Hildebrand, Research Staff Member - Storage > Systems > > IBM Almaden Research Center > > 12:30 Lunch (Buffet provided) > > 13:30 SAN Volume Controller/V7000, Easy Tier and > Real Time Compression > > 14:30 WOS: Web Object Scalar > > Vic Cornell, DDN > > 14:50 GPFS Metadata + SSDs > > Andrew Dean, OCF > > 15:20 User experience of GPFS > > 15:50 Stupid GPFS Tricks 2012 > > 16:00 Group discussion: Challenges, experiences > and questions > > Led by Jez Tucker, Group Chairperson > > 16:20 Close > > > > The meeting will take place on 20th September at > Bishopswood Golf Club, Bishopswood, Bishopswood > Lane, Tadley, Hampshire, RG26 4AT. > > Please register with me if you will be attending the > day no later than 6th September. Places are limited > and available on a first come first served basis. > > > > I look forward to seeing as many of you there as > possible! > > > > Best wishes > > > > Claire Robson > > GPFS User Group Secretary > > > > Tel: 0114 257 2200 > > Mob: 07508 033896 > > > > > > > > OCF plc is a company registered in England and > Wales. Registered number 4132533, VAT number GB 780 > 6803 14. Registered office address: OCF plc, 5 > Rotunda Business Centre, Thorncliffe Park, > Chapeltown, Sheffield, S35 2PG > > > > This message is private and confidential. If you > have received this message in error, please notify > us immediately and remove it from your system.
> > >________________ >GPFSUGAgendaSeptember2012.pdf (89k bytes) >________________ >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Jez.Tucker at rushes.co.uk Tue Aug 14 19:24:30 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 14 Aug 2012 18:24:30 +0000 Subject: [gpfsug-discuss] Per stgpool locking gpfs->tsm hsm script updated Message-ID: <39571EA9316BE44899D59C7A640C13F5305B94A7@WARVWEXC1.uk.deluxe-eu.com> Hello all Just pushed the latest version of my script to the git repo. - Works on multiple storage pools - Directory lockfiles (atomic) - Use N tape drives - PID stored for easy use of kill -s SIGTERM `cat /path/to/pidfile` - More informative logging in /var/adm/ras/mmfs.log.latest See: https://github.com/gpfsug/gpfsug-tools/tree/master/scripts/hsm Obv. Use at own risk and test first on non-critical data. Bugfixes / stupidity pointed out are appreciated. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Wed Aug 15 16:03:18 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 15 Aug 2012 16:03:18 +0100 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 28/08/2012) Message-ID: I am out of the office until 28/08/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 8, Issue 5" sent on 15/8/2012 12:00:01. This is the only notification you will receive while this person is away.
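The pidfile convention from the HSM script announcement above might look like this (an illustrative sketch, not the actual script's code):

```shell
# Sketch of the pidfile-plus-SIGTERM pattern the HSM script describes.
# Storing the PID lets an admin stop a long migration cleanly with:
#   kill -s SIGTERM "$(cat /path/to/pidfile)"
start_migration() {
    pidfile="$1"
    echo $$ > "$pidfile"
    # On SIGTERM, remove the pidfile and exit with 128+15
    trap 'rm -f "$pidfile"; exit 143' TERM
    # Also remove the pidfile on a normal exit
    trap 'rm -f "$pidfile"' EXIT
}
```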
From bevans at canditmedia.co.uk Wed Aug 15 19:39:44 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Wed, 15 Aug 2012 19:39:44 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba Message-ID: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Hello all, Anyone had success with windows extended attributes actually passing through samba over to GPFS? On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS attributes failed: -1 This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. I've not tried bumping up the loglevel yet, this may reveal something more interesting. Many Thanks, Barry Evans Technical Director CandIT Media UK Ltd +44 7500 667 671 bevans at canditmedia.co.uk From orlando.richards at ed.ac.uk Wed Aug 15 22:20:17 2012 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Wed, 15 Aug 2012 22:20:17 +0100 (BST) Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: I had in my head that you'd need to be running samba 3.6 for that to work - although that was a while ago, and they may have backported it. On Wed, 15 Aug 2012, Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) > gpfs_get_xattr: Get GPFS attributes failed: -1 > > This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. 
I've not tried bumping up the loglevel yet, this may reveal something more interesting. > > Many Thanks, > Barry Evans > Technical Director > CandIT Media UK Ltd > +44 7500 667 671 > bevans at canditmedia.co.uk > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From mail at arif-ali.co.uk Wed Aug 15 22:24:33 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 15 Aug 2012 22:24:33 +0100 Subject: [gpfsug-discuss] open-source and gpfs Message-ID: All, I was hoping to use GPFS in an open-source project, which has little to nil funding (We have enough for infrastructure). How would I approach to get the ability to use GPFS for a non-profit open-source project. Would I need to somehow buy a license, as I know there aren't any license agreements that gpfs comes with, and that it is all about trust in terms of licensing. Any feedback on this would be great. -- Arif Ali catch me on freenode IRC, username: arif-ali From j.buzzard at dundee.ac.uk Wed Aug 15 23:07:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 15 Aug 2012 23:07:34 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: <502C1DA6.7090002@dundee.ac.uk> Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing > through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read > only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] > modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS > attributes failed: -1 > > This is with gpfs:winattr set to yes. 
I also tried enabling 'store > dos attributes' for a laugh but the result was no different. I've not > tried bumping up the loglevel yet, this may reveal something more > interesting. Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using samba3x packages that come with CentOS 5.6 in the past and CentOS 5.8 currently. Note I have to rebuild the Samba packages to get the vfs_gpfs module which you need to load. The relevant bits of the smb.conf are # general options vfs objects = shadow_copy2 fileid gpfs # the GPFS stuff fileid : algorithm = fsname gpfs : sharemodes = yes gpfs : winattr = yes force unknown acl user = yes nfs4 : mode = special nfs4 : chown = no nfs4 : acedup = merge # store DOS attributes in extended attributes (vfs_gpfs then stores them in the file system) ea support = yes store dos attributes = yes map readonly = no map archive = no map system = no map hidden = no Though I would note that working out all the configuration options required to make this (and other stuff) work took some considerable amount of time. I guess there is a reason why IBM charge $$$ for the SONAS and Storwize Unified products. Note that if you are going for that full "make my Samba/GPFS file server look as close as possible to a pukka MS Windows server", you might want to consider setting the following GPFS options cifsBypassShareLocksOnRename cifsBypassTraversalChecking allowWriteWithDeleteChild All fairly self explanatory, and make GPFS follow Windows semantics more closely, though they are "undocumented". There is also an undocumented option for ACL's on mmchfs (I am working on 3.4.0-15) so that you can do mmchfs test -k samba Even shows up in the output of mmlsfs. Not entirely sure what samba ACL's are mind you... JAB. -- Jonathan A. 
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH The University of Dundee is a registered Scottish Charity, No: SC015096 From Jez.Tucker at rushes.co.uk Thu Aug 16 15:27:19 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 14:27:19 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From sfadden at us.ibm.com Thu Aug 16 16:00:04 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 16 Aug 2012 08:00:04 -0700 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: I am not aware of any special pricing for open source projects. For more details contact your IBM representative or business partner. 
If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Thu Aug 16 16:26:08 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 15:26:08 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA1C8@WARVWEXC1.uk.deluxe-eu.com> I'll sort this out with mine and report back to the list. Questions wrt O/S projects: 1) Cost 2) License terms 3) What can be distributed from the portability layer etc. Any others? From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 16 August 2012 16:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] open-source and gpfs I am not aware of any special pricing for open source projects. For more details contact you IBM representative or business partner. If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list >, Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. 
> > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at canditmedia.co.uk Thu Aug 16 16:40:03 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Thu, 16 Aug 2012 16:40:03 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <502C1DA6.7090002@dundee.ac.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> <502C1DA6.7090002@dundee.ac.uk> Message-ID: Yep, that works a treat, thanks Jonathan! I was missing ea support and the map = no options Cheers, Barry On 15 Aug 2012, at 23:07, Jonathan Buzzard wrote: > Barry Evans wrote: >> Hello all, >> >> Anyone had success with windows extended attributes actually passing >> through samba over to GPFS? >> >> On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read >> only through Win 7 explorer and attrib I get: >> >> [2012/08/15 18:13:32.023966, 1] >> modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS >> attributes failed: -1 >> >> This is with gpfs:winattr set to yes. I also tried enabling 'store >> dos attributes' for a laugh but the result was no different. I've not >> tried bumping up the loglevel yet, this may reveal something more >> interesting. > > Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using samba3x > packages that comes with CentOS 5.6 in the past and CentOS 5.8 > currently. 
Note I have to rebuild the Samba packages to get the vfs_gpfs > module which you need to load. The relevant bits of the smb.conf are > > # general options > vfs objects = shadow_copy2 fileid gpfs > > # the GPFS stuff > fileid : algorithm = fsname > gpfs : sharemodes = yes > gpfs : winattr = yes > force unknown acl user = yes > nfs4 : mode = special > nfs4 : chown = no > nfs4 : acedup = merge > > # store DOS attributes in extended attributes (vfs_gpfs then stores them > in the file system) > ea support = yes > store dos attributes = yes > map readonly = no > map archive = no > map system = no > map hidden = no > > > Though I would note that working out what all the configuration options > required to make this (and other stuff) work where took some > considerable amount of time. I guess there is a reason why IBM charge > $$$ for the SONAS and StoreWise Unified products. > > Note that if you are going for that full make my Samba/GPFS file server > look as close as possible to a pucker MS Windows server, you might want > to consider setting the following GPFS options > > cifsBypassShareLocksOnRename > cifsBypassTraversalChecking > allowWriteWithDeleteChild > > All fairly self explanatory, and make GPFS follow Windows schematics > more closely, though they are "undocumented". > > There is also there is an undocumented option for ACL's on mmchfs (I am > working on 3.4.0-15) so that you can do > > mmchfs test -k samba > > Even shows up in the output of mmlsfs. Not entirely sure what samba > ACL's are mind you... > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > > The University of Dundee is a registered Scottish Charity, No: SC015096 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Fri Aug 24 12:42:51 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 24 Aug 2012 11:42:51 +0000 Subject: [gpfsug-discuss] mmbackup Message-ID: <39571EA9316BE44899D59C7A640C13F5305BE156@WARVWEXC1.uk.deluxe-eu.com> Does anyone have to hand a copy of both policies which mmbackup uses for full and incremental? --- Jez Tucker Senior Sysadmin Rushes DDI: +44 (0) 207 851 6276 http://www.rushes.co.uk -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Mon Aug 6 18:53:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 6 Aug 2012 17:53:35 +0000 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B41BB@WARVWEXC1.uk.deluxe-eu.com> It seems vmware needs disk locking switched off. Funny that ;-) To create a disk: vmware-vdiskmanager -a lsi-logic -c -s 10GB -t 2 clusterdisk1.vmdk Add the disk to your nsd server #1. Save config. Edit .vmx file for server. Add the line: disk.locking = "false" Boot server. Do this for your other quorum-managers. Disk type in nsddevices is "generic". Badda bing. One virtual test cluster. Thanks all. 
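Scott's nsddevices hint and the "generic" disk type above fit together roughly as below — a sketch of a /var/mmfs/etc/nsddevices user exit written from memory, not copied from the shipped sample (/usr/lpp/mmfs/samples/nsddevices.sample), so check that sample before trusting it. The device names sdb/sdc/sdd are examples only; match them to your VM's disks.

```shell
#!/bin/sh
# Sketch of a /var/mmfs/etc/nsddevices user exit for the VM recipe above.
# It emits "device devicetype" pairs for the vmdk-backed disks; the real
# user exit should end with "exit 0" so that GPFS uses this list instead
# of its built-in device discovery. Device names here are examples.

list_vm_disks() {
    for dev in sdb sdc sdd; do
        [ -b "/dev/$dev" ] && echo "$dev generic"
    done
    return 0
}

list_vm_disks
```

With this in place, mmcrnsd/mmcrfs should be able to find the vmdk-backed disks that the default discovery skips.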
From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 06 August 2012 18:31 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] GPFS & VMware Workstation Two things to looks for: 1. Make sure the virtual LUNS do not do any server caching of data. 2. Use nsddevices file (did you try this?) Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/06/2012 10:25 AM Subject: [gpfsug-discuss] GPFS & VMware Workstation Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually create do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattw at vpac.org Tue Aug 7 05:32:11 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 14:32:11 +1000 (EST) Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <1697484944.65711.1344313730219.JavaMail.root@mail> Message-ID: <967227407.65713.1344313931532.JavaMail.root@mail> I know this question is a month or so old, but I figure this is a ping to see if the list is still alive or not :-) >Curiosity... > > How many of you run Windows, Linux and OS X as clients > (GPFS/NFS/CIFS), in any configuration? 
> >Jez We have 2 small clusters of 42 nodes each, one of them is all Linux, the other a mixture of Linux and Windows Server 2008R2 clients, and to make it more fun, we dual boot the client nodes. 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running CentOS 5 and Windows Server. 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote mounting the FS from the above cluster. 1 single host remote Windows Server remote mounting one of the FS for streaming data capture from a camera. 1 occasional headache for me. Matt. -- Matthew Wallis, Systems Administrator Victorian Partnership for Advanced Computing. Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 From Jez.Tucker at rushes.co.uk Tue Aug 7 08:36:47 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 07:36:47 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <967227407.65713.1344313931532.JavaMail.root@mail> References: <1697484944.65711.1344313730219.JavaMail.root@mail> <967227407.65713.1344313931532.JavaMail.root@mail> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B432F@WARVWEXC1.uk.deluxe-eu.com> Indeed it is. Nice to know what our members are running. I should really make a histogram or suchlike. Any python monkeys out there? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Matthew Wallis > Sent: 07 August 2012 05:32 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Your GPFS O/S support? > > > I know this question is a month or so old, but I figure this is a ping to see if > the list is still alive or not :-) > > >Curiosity... > > > > How many of you run Windows, Linux and OS X as clients > > (GPFS/NFS/CIFS), in any configuration? > > > >Jez > > We have 2 small clusters of 42 nodes each, one of them is all Linux, the > other a mixture of Linux and Windows Server 2008R2 clients, and to make it > more fun, we dual boot the client nodes. 
> > 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running > CentOS 5 and Windows Server. > > 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote > mounting the FS from the above cluster. > > 1 single host remote Windows Server remote mounting one of the FS for > streaming data capture from a camera. > > 1 occasional headache for me. > > Matt. > > -- > Matthew Wallis, Systems Administrator > Victorian Partnership for Advanced Computing. > Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From robert at strubi.ox.ac.uk Tue Aug 7 12:09:31 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 7 Aug 2012 12:09:31 +0100 (BST) Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <201208071109.062254@mail.strubi.ox.ac.uk> Dear GPFS users, Please excuse what is possibly a naive question from a not-yet GPFS admin. We are seriously considering GPFS to provide storage for our compute clusters. We are probably looking at about 600-900TB served into 2000+ Linux cores over InfiniBand. DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is high I/O rates from multiple readers (100-1000) all accessing parts of the same set of 1000-5000 large files (typically 30GB BAM files, for those in the know). We could easily sustain read rates of 5-10GB/s or more if the system would cope. My question is how should we go about configuring the number and specifications of the NSDs? Are there any good rules of thumb? And are there any folk out there using GPFS for high I/O rates like this in a similar setup who would be happy to have their brains/experiences picked? 
Thanks in advance and best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 From Jez.Tucker at rushes.co.uk Tue Aug 7 12:32:55 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 11:32:55 +0000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B44DC@WARVWEXC1.uk.deluxe-eu.com> The HPC folks should probably step in here. Not having such a large system, I'll point you at : https://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_complan.htm > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Robert Esnouf > Sent: 07 August 2012 12:10 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] A GPFS newbie > > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet GPFS admin. > We are seriously considering GPFS to provide storage for our compute > clusters. We are probably looking at about 600-900TB served into 2000+ > Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is > high I/O rates from multiple readers (100-1000) all accessing parts of the > same set of 1000-5000 large files (typically 30GB BAM files, for those in the > know). We could easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number and > specifications of the NSDs? Are there any good rules of thumb? 
And are > there any folk out there using GPFS for high I/O rates like this in a similar > setup who would be happy to have their brains/experiences picked? > > Thanks in advance and best wishes, > Robert Esnouf > > -- > > Dr. Robert Esnouf, > University Research Lecturer > and Head of Research Computing, > Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt > Drive, Oxford OX3 7BN, UK > > Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 > and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From mattw at vpac.org Tue Aug 7 12:43:00 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 21:43:00 +1000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <419502BD-5AAC-43E3-8116-4A96DDBC64C5@vpac.org> Hi Robert, On 07/08/2012, at 9:09 PM, Robert Esnouf wrote: > > Dear GPFS users, > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? From IBM, an x3650 M3 should be able to provide around 2.4GB/sec over QDR IB. That's with 12GB of RAM and dual quad core X5667s. They believe with the M4 you should be able to sustain somewhere near double that, but we'll say 4GB/sec for safety. So with 4 of those you should be pushing somewhere north of 16GB/sec. With FDR IB and PCIe 3.0, I can certainly believe it's possible, I think they've doubled the minimum RAM in the recent proposal we had from them. 
In our benchmarks we certainly found the M3's capable of it, for daily use, our workloads are too mixed, we don't have anyone doing sustained reads or writes on those types of files. Might have to be a bit more expansive on your node configuration though, I can get 2000 cores in 32 nodes these days, so that spec would give you 512MB/sec per node if everyone is reading and writing at once. If you're only doing 16 cores per node, then that's 125 nodes, and only 131MB/sec per node. Matt. From j.buzzard at dundee.ac.uk Tue Aug 7 12:56:14 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Tue, 7 Aug 2012 12:56:14 +0100 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <5021025E.6090102@dundee.ac.uk> On 07/08/12 12:09, Robert Esnouf wrote: > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet > GPFS admin. We are seriously considering GPFS to provide > storage for our compute clusters. We are probably looking at > about 600-900TB served into 2000+ Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific > need is high I/O rates from multiple readers (100-1000) all > accessing parts of the same set of 1000-5000 large files > (typically 30GB BAM files, for those in the know). We could > easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? > I would guess the biggest question is how sequential is the work load? Also how many cores per box, aka how many cores per storage interface card? 
The next question would be how much of your data is "old cruft", that
is, files which have not been used in a long time but are not going to
be deleted because they might be useful? If this is a reasonably high
number then tiering/ILM is a worthwhile strategy to follow. Of course
if you can afford to buy all your data disks in 600GB 3.5" 15kRPM disks
then that is the way to go.

Using SSDs for your metadata disks is, I would say, a must. How much
depends on how many files you have.

More detailed answers would require more information.

JAB.

--
Jonathan A. Buzzard                 Tel: +441382-386998
Storage Administrator, College of Life Sciences
University of Dundee, DD1 5EH

The University of Dundee is a registered Scottish Charity, No: SC015096

From s.watkins at nhm.ac.uk Wed Aug 8 11:24:42 2012
From: s.watkins at nhm.ac.uk (Steff Watkins)
Date: Wed, 8 Aug 2012 10:24:42 +0000
Subject: [gpfsug-discuss] Upgrade path
Message-ID:

Hello,

I'm currently looking after a GPFS setup with six nodes and about 80TB
of disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it.
The (vague) plan is to do a rolling upgrade of the various nodes,
working through them one at a time, leaving the cluster manager node
until last, then doing a failover of that role to another node and
upgrading the last host.

Is there a standard upgrade methodology for GPFS systems, or any
tricks, tips or traps to know about before I go ahead with this?

Also, is it 'safe' to assume that I could upgrade straight from 3.4.0
to 3.5.x or are there any intermediary steps that need to be performed
as well?

Any help or advice appreciated,
Steff Watkins

-----
Steff Watkins       Natural History Museum, Cromwell Road, London, SW7 5BD
Systems programmer  Email: s.watkins at nhm.ac.uk
Systems Team        Phone: +44 (0)20 7942 6000 opt 2
========
"Many were increasingly of the opinion that they'd all made a big
mistake in coming down from the trees in the first place.
And some said that even the trees had been a bad move, and that no one
should ever have left the oceans." - HHGTTG

From viccornell at gmail.com Wed Aug 8 12:32:19 2012
From: viccornell at gmail.com (Vic Cornell)
Date: Wed, 8 Aug 2012 12:32:19 +0100
Subject: [gpfsug-discuss] Upgrade path
In-Reply-To:
References:
Message-ID: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com>

As with all of these things the Wiki is your friend. In this case it
will point you at the documentation. The bits you want are here:

http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm

and

http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm

You can have both 3.4 and 3.5 nodes in a cluster - but I personally
wouldn't do it unless I had to.

Regards,

Vic

On 8 Aug 2012, at 11:24, Steff Watkins wrote:

> Hello,
>
> I'm currently looking after a GPFS setup with six nodes and about 80TB
> disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it.
> The (vague) plan is to do a rolling upgrade of the various nodes
> working through them one at a time leaving the cluster manager node
> until last then doing a failover of that role to another node and then
> upgrading the last host.
>
> Is there a standard upgrade methodology for GPFS systems or any
> tricks, tips or traps to know about before I go ahead with this?
>
> Also is it 'safe' assume that I could upgrade straight from 3.4.0 to
> 3.5.x or are there any intermediary steps that need to be performed as
> well?
>
> Any help or advice appreciated,
> Steff Watkins
>
> -----
> Steff Watkins       Natural History Museum, Cromwell Road, London, SW7 5BD
> Systems programmer  Email: s.watkins at nhm.ac.uk
> Systems Team        Phone: +44 (0)20 7942 6000 opt 2
> ========
> "Many were increasingly of the opinion that they'd all made a big
> mistake in coming down from the trees in the first place.
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From s.watkins at nhm.ac.uk Wed Aug 8 13:59:04 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 12:59:04 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Vic Cornell > Sent: Wednesday, August 08, 2012 12:32 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > As with all of these things the Wiki is your friend. > > In this case it will point you at the documentation. > > The bits you want are here. > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > and > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > You can both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it > unless I had to. > > Regards, > > Vic As I'm relatively new to the list (been here about two months) I've missed/not been aware of the wiki. Very big thanks to you for putting me onto this. It looks like it's got pretty much everything I'll need for the moment to get the upgrades done. Regards, Steff Watkins ----- Steff Watkins Natural History Museum, Cromwell Road, London,SW75BD Systems programmer Email: s.watkins at nhm.ac.uk Systems Team Phone: +44 (0)20 7942 6000 opt 2 ======== "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. 
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From Jez.Tucker at rushes.co.uk Wed Aug 8 14:06:27 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 8 Aug 2012 13:06:27 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B4D52@WARVWEXC1.uk.deluxe-eu.com> I should mention - though the website is, well, dire atm, if there's useful links I'm more than happy to put them up there. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Steff Watkins > Sent: 08 August 2012 13:59 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Vic Cornell > > Sent: Wednesday, August 08, 2012 12:32 PM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Upgrade path > > > > As with all of these things the Wiki is your friend. > > > > In this case it will point you at the documentation. > > > > The bits you want are here. > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > > > and > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > > > You can both 3.4 and 3.5 nodes in a cluster - but I personally > > wouldn't do it unless I had to. > > > > Regards, > > > > Vic > > As I'm relatively new to the list (been here about two months) I've > missed/not been aware of the wiki. > > Very big thanks to you for putting me onto this. It looks like it's got pretty > much everything I'll need for the moment to get the upgrades done. 
>
> Regards,
> Steff Watkins
>
> -----
> Steff Watkins       Natural History Museum, Cromwell Road, London, SW7 5BD
> Systems programmer  Email: s.watkins at nhm.ac.uk
> Systems Team        Phone: +44 (0)20 7942 6000 opt 2
> ========
> "Many were increasingly of the opinion that they'd all made a big
> mistake in coming down from the trees in the first place. And some
> said that even the trees had been a bad move, and that no one should
> ever have left the oceans." - HHGTTG

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From crobson at ocf.co.uk Wed Aug 8 15:29:55 2012
From: crobson at ocf.co.uk (Claire Robson)
Date: Wed, 8 Aug 2012 15:29:55 +0100
Subject: [gpfsug-discuss] Agenda for September meeting
Message-ID:

Dear All,

The time is nearly here for our next group meeting. We have organised
another fantastic day of speakers for you and really hope you continue
to support us as well as you have done previously. Please see below the
agenda for the next user group meeting:

10:30  Arrivals and refreshments
11:00  Introductions and committee updates
       Jez Tucker, Group Chair & Claire Robson, Group Secretary
11:05  pNFS and GPFS
       Dean Hildebrand, Research Staff Member - Storage Systems,
       IBM Almaden Research Center
12:30  Lunch (Buffet provided)
13:30  SAN Volume Controller/V7000, Easy Tier and Real Time Compression
14:30  WOS: Web Object Scalar
       Vic Cornell, DDN
14:50  GPFS Metadata + SSDs
       Andrew Dean, OCF
15:20  User experience of GPFS
15:50  Stupid GPFS Tricks 2012
16:00  Group discussion: Challenges, experiences and questions
       Led by Jez Tucker, Group Chairperson
16:20  Close

The meeting will take place on 20th September at Bishopswood Golf Club,
Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT.

Please register with me if you will be attending the day no later than
6th September. Places are limited and available on a first come first
served basis.
I look forward to seeing as many of you there as possible! Best wishes Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GPFSUGAgendaSeptember2012.pdf Type: application/pdf Size: 65989 bytes Desc: GPFSUGAgendaSeptember2012.pdf URL: From robert at strubi.ox.ac.uk Tue Aug 14 17:02:47 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 14 Aug 2012 17:02:47 +0100 (BST) Subject: [gpfsug-discuss] Agenda for September meeting In-Reply-To: References: Message-ID: <201208141602.062512@mail.strubi.ox.ac.uk> Dear Claire, I would be interested in attending the GPFS User Group Meeting on 20th September. I am not a GPFS user yet, although we are seriously looking at it and may have an evaluation system by then. If it is still OK for me to attend then please let me know. Best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 ---- Original message ---- >Date: Wed, 8 Aug 2012 15:29:55 +0100 >From: gpfsug-discuss-bounces at gpfsug.org (on behalf of Claire Robson ) >Subject: [gpfsug-discuss] Agenda for September meeting >To: "gpfsug-discuss at gpfsug.org" > > Dear All, > > > > The time is nearly here for our next group meeting. 
> We have organised another fantastic day of speakers
> for you and really hope you continue to support as
> well as you have done previously. Please see below
> the agenda for the next user group meeting:
>
> 10:30  Arrivals and refreshments
> 11:00  Introductions and committee updates
>        Jez Tucker, Group Chair & Claire Robson, Group Secretary
> 11:05  pNFS and GPFS
>        Dean Hildebrand, Research Staff Member - Storage Systems,
>        IBM Almaden Research Center
> 12:30  Lunch (Buffet provided)
> 13:30  SAN Volume Controller/V7000, Easy Tier and Real Time Compression
> 14:30  WOS: Web Object Scalar
>        Vic Cornell, DDN
> 14:50  GPFS Metadata + SSDs
>        Andrew Dean, OCF
> 15:20  User experience of GPFS
> 15:50  Stupid GPFS Tricks 2012
> 16:00  Group discussion: Challenges, experiences and questions
>        Led by Jez Tucker, Group Chairperson
> 16:20  Close
>
> The meeting will take place on 20th September at
> Bishopswood Golf Club, Bishopswood, Bishopswood
> Lane, Tadley, Hampshire, RG26 4AT.
>
> Please register with me if you will be attending the
> day no later than 6th September. Places are limited
> and available on a first come first served basis.
>
> I look forward to seeing as many of you there as
> possible!
>
> Best wishes
>
> Claire Robson
> GPFS User Group Secretary
>
> Tel: 0114 257 2200
> Mob: 07508 033896
>
> OCF plc is a company registered in England and
> Wales. Registered number 4132533, VAT number GB 780
> 6803 14. Registered office address: OCF plc, 5
> Rotunda Business Centre, Thorncliffe Park,
> Chapeltown, Sheffield, S35 2PG
>
> This message is private and confidential. If you
> have received this message in error, please notify
> us immediately and remove it from your system.
> >________________
> >GPFSUGAgendaSeptember2012.pdf (89k bytes)
> >________________
> >_______________________________________________
> >gpfsug-discuss mailing list
> >gpfsug-discuss at gpfsug.org
> >http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From Jez.Tucker at rushes.co.uk Tue Aug 14 19:24:30 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Tue, 14 Aug 2012 18:24:30 +0000
Subject: [gpfsug-discuss] Per stgpool locking gpfs->tsm hsm script updated
Message-ID: <39571EA9316BE44899D59C7A640C13F5305B94A7@WARVWEXC1.uk.deluxe-eu.com>

Hello all

Just pushed the latest version of my script to the git repo.

- Works on multiple storage pools
- Directory lockfiles (atomic)
- Use N tape drives
- PID stored for easy use of kill -s SIGTERM `cat /path/to/pidfile`
- More informative logging into /var/adm/ras/mmfs.log.latest

See: https://github.com/gpfsug/gpfsug-tools/tree/master/scripts/hsm

Obv. use at own risk and test first on non-critical data. Bugfixes /
stupidity pointed out is appreciated.

---
Jez Tucker
Senior Sysadmin
Rushes

GPFSUG Chairman (chair at gpfsug.org)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ANDREWD at uk.ibm.com Wed Aug 15 16:03:18 2012
From: ANDREWD at uk.ibm.com (Andrew Downes1)
Date: Wed, 15 Aug 2012 16:03:18 +0100
Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 28/08/2012)
Message-ID:

I am out of the office until 28/08/2012.

In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com
07710-981527

In case of urgency, please contact our manager Andy Jenkins
mailto:JENKINSA at uk.ibm.com 07921-108940

Note: This is an automated response to your message "gpfsug-discuss
Digest, Vol 8, Issue 5" sent on 15/8/2012 12:00:01. This is the only
notification you will receive while this person is away.
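For anyone wondering how the "directory lockfiles (atomic)" and stored-PID behaviour in Jez's HSM script announcement above can work, here is a minimal, hypothetical sketch. The pool name, lock path and messages are made-up placeholders, and the real mmapplypolicy invocation is stubbed out with an echo so the sketch stands alone:

```shell
#!/bin/sh
# Hypothetical per-storage-pool mutex in the style described above.
# mkdir is atomic: only one caller can create the directory, so repeated
# lowDiskSpace callbacks for the same pool cannot start a second migration.
POOL="${1:-pool_A}"                       # e.g. %storagePool from the callback
LOCKDIR="/tmp/hsm-migrate-${POOL}.lock"   # placeholder lock location

if mkdir "$LOCKDIR" 2>/dev/null; then
    echo "$$" > "$LOCKDIR/pid"            # PID on record for a later SIGTERM
    trap 'rm -rf "$LOCKDIR"' EXIT         # release the lock however we exit
    # a real script would call mmapplypolicy for this pool's policy here
    echo "migrating storage pool $POOL"
else
    echo "migration already running for $POOL, skipping" >&2
fi
```

Because each pool gets its own lock directory, a threshold event on pool B is not blocked by a migration already running on pool A.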
From bevans at canditmedia.co.uk Wed Aug 15 19:39:44 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Wed, 15 Aug 2012 19:39:44 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba Message-ID: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Hello all, Anyone had success with windows extended attributes actually passing through samba over to GPFS? On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS attributes failed: -1 This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. I've not tried bumping up the loglevel yet, this may reveal something more interesting. Many Thanks, Barry Evans Technical Director CandIT Media UK Ltd +44 7500 667 671 bevans at canditmedia.co.uk From orlando.richards at ed.ac.uk Wed Aug 15 22:20:17 2012 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Wed, 15 Aug 2012 22:20:17 +0100 (BST) Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: I had in my head that you'd need to be running samba 3.6 for that to work - although that was a while ago, and they may have backported it. On Wed, 15 Aug 2012, Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) > gpfs_get_xattr: Get GPFS attributes failed: -1 > > This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. 
I've not tried bumping up the loglevel yet, this may reveal something more interesting. > > Many Thanks, > Barry Evans > Technical Director > CandIT Media UK Ltd > +44 7500 667 671 > bevans at canditmedia.co.uk > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From mail at arif-ali.co.uk Wed Aug 15 22:24:33 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 15 Aug 2012 22:24:33 +0100 Subject: [gpfsug-discuss] open-source and gpfs Message-ID: All, I was hoping to use GPFS in an open-source project, which has little to nil funding (We have enough for infrastructure). How would I approach to get the ability to use GPFS for a non-profit open-source project. Would I need to somehow buy a license, as I know there aren't any license agreements that gpfs comes with, and that it is all about trust in terms of licensing. Any feedback on this would be great. -- Arif Ali catch me on freenode IRC, username: arif-ali From j.buzzard at dundee.ac.uk Wed Aug 15 23:07:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 15 Aug 2012 23:07:34 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: <502C1DA6.7090002@dundee.ac.uk> Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing > through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read > only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] > modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS > attributes failed: -1 > > This is with gpfs:winattr set to yes. 
> I also tried enabling 'store dos attributes' for a laugh but the
> result was no different. I've not tried bumping up the loglevel yet,
> this may reveal something more interesting.

Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using the
samba3x packages that come with CentOS 5.6 in the past and CentOS 5.8
currently. Note I have to rebuild the Samba packages to get the
vfs_gpfs module, which you need to load. The relevant bits of the
smb.conf are:

# general options
vfs objects = shadow_copy2 fileid gpfs

# the GPFS stuff
fileid : algorithm = fsname
gpfs : sharemodes = yes
gpfs : winattr = yes
force unknown acl user = yes
nfs4 : mode = special
nfs4 : chown = no
nfs4 : acedup = merge

# store DOS attributes in extended attributes (vfs_gpfs then stores
# them in the file system)
ea support = yes
store dos attributes = yes
map readonly = no
map archive = no
map system = no
map hidden = no

Though I would note that working out all the configuration options
required to make this (and other stuff) work took a considerable
amount of time. I guess there is a reason why IBM charge $$$ for the
SONAS and Storwize Unified products.

Note that if you are going for the full "make my Samba/GPFS file
server look as close as possible to a pukka MS Windows server" setup,
you might want to consider setting the following GPFS options:

cifsBypassShareLocksOnRename
cifsBypassTraversalChecking
allowWriteWithDeleteChild

All fairly self explanatory; they make GPFS follow Windows semantics
more closely, though they are "undocumented".

There is also an undocumented option for ACLs on mmchfs (I am working
on 3.4.0-15) so that you can do:

mmchfs test -k samba

It even shows up in the output of mmlsfs. Not entirely sure what samba
ACLs are mind you...

JAB.

--
Jonathan A.
Buzzard                 Tel: +441382-386998
Storage Administrator, College of Life Sciences
University of Dundee, DD1 5EH

The University of Dundee is a registered Scottish Charity, No: SC015096

From Jez.Tucker at rushes.co.uk Thu Aug 16 15:27:19 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Thu, 16 Aug 2012 14:27:19 +0000
Subject: [gpfsug-discuss] open-source and gpfs
In-Reply-To:
References:
Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com>

TBH. I'm not too sure about this myself. Scott, can you comment with
the official IBM line?

> -----Original Message-----
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-
> bounces at gpfsug.org] On Behalf Of Arif Ali
> Sent: 15 August 2012 22:25
> To: gpfsug-discuss
> Subject: [gpfsug-discuss] open-source and gpfs
>
> All,
>
> I was hoping to use GPFS in an open-source project, which has little
> to nil funding (We have enough for infrastructure). How would I
> approach to get the ability to use GPFS for a non-profit open-source
> project.
>
> Would I need to somehow buy a license, as I know there aren't any
> license agreements that gpfs comes with, and that it is all about
> trust in terms of licensing.
>
> Any feedback on this would be great.
>
> --
> Arif Ali
>
> catch me on freenode IRC, username: arif-ali
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From sfadden at us.ibm.com Thu Aug 16 16:00:04 2012
From: sfadden at us.ibm.com (Scott Fadden)
Date: Thu, 16 Aug 2012 08:00:04 -0700
Subject: [gpfsug-discuss] open-source and gpfs
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

I am not aware of any special pricing for open source projects. For
more details contact your IBM representative or business partner.
If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Thu Aug 16 16:26:08 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 15:26:08 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA1C8@WARVWEXC1.uk.deluxe-eu.com> I'll sort this out with mine and report back to the list. Questions wrt O/S projects: 1) Cost 2) License terms 3) What can be distributed from the portability layer etc. Any others? From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 16 August 2012 16:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] open-source and gpfs I am not aware of any special pricing for open source projects. For more details contact you IBM representative or business partner. If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list >, Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. 
> > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at canditmedia.co.uk Thu Aug 16 16:40:03 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Thu, 16 Aug 2012 16:40:03 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <502C1DA6.7090002@dundee.ac.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> <502C1DA6.7090002@dundee.ac.uk> Message-ID: Yep, that works a treat, thanks Jonathan! I was missing ea support and the map = no options Cheers, Barry On 15 Aug 2012, at 23:07, Jonathan Buzzard wrote: > Barry Evans wrote: >> Hello all, >> >> Anyone had success with windows extended attributes actually passing >> through samba over to GPFS? >> >> On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read >> only through Win 7 explorer and attrib I get: >> >> [2012/08/15 18:13:32.023966, 1] >> modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS >> attributes failed: -1 >> >> This is with gpfs:winattr set to yes. I also tried enabling 'store >> dos attributes' for a laugh but the result was no different. I've not >> tried bumping up the loglevel yet, this may reveal something more >> interesting. > > Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using samba3x > packages that comes with CentOS 5.6 in the past and CentOS 5.8 > currently. 
Note I have to rebuild the Samba packages to get the vfs_gpfs > module which you need to load. The relevant bits of the smb.conf are > > # general options > vfs objects = shadow_copy2 fileid gpfs > > # the GPFS stuff > fileid : algorithm = fsname > gpfs : sharemodes = yes > gpfs : winattr = yes > force unknown acl user = yes > nfs4 : mode = special > nfs4 : chown = no > nfs4 : acedup = merge > > # store DOS attributes in extended attributes (vfs_gpfs then stores them > in the file system) > ea support = yes > store dos attributes = yes > map readonly = no > map archive = no > map system = no > map hidden = no > > > Though I would note that working out what all the configuration options > required to make this (and other stuff) work where took some > considerable amount of time. I guess there is a reason why IBM charge > $$$ for the SONAS and StoreWise Unified products. > > Note that if you are going for that full make my Samba/GPFS file server > look as close as possible to a pucker MS Windows server, you might want > to consider setting the following GPFS options > > cifsBypassShareLocksOnRename > cifsBypassTraversalChecking > allowWriteWithDeleteChild > > All fairly self explanatory, and make GPFS follow Windows schematics > more closely, though they are "undocumented". > > There is also there is an undocumented option for ACL's on mmchfs (I am > working on 3.4.0-15) so that you can do > > mmchfs test -k samba > > Even shows up in the output of mmlsfs. Not entirely sure what samba > ACL's are mind you... > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > > The University of Dundee is a registered Scottish Charity, No: SC015096 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Fri Aug 24 12:42:51 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 24 Aug 2012 11:42:51 +0000 Subject: [gpfsug-discuss] mmbackup Message-ID: <39571EA9316BE44899D59C7A640C13F5305BE156@WARVWEXC1.uk.deluxe-eu.com> Does anyone have to hand a copy of both policies which mmbackup uses for full and incremental? --- Jez Tucker Senior Sysadmin Rushes DDI: +44 (0) 207 851 6276 http://www.rushes.co.uk -------------- next part -------------- An HTML attachment was scrubbed... URL:
(Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually create do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Mon Aug 6 18:30:49 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Mon, 6 Aug 2012 10:30:49 -0700 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: Two things to looks for: 1. Make sure the virtual LUNS do not do any server caching of data. 2. Use nsddevices file (did you try this?) Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/06/2012 10:25 AM Subject: [gpfsug-discuss] GPFS & VMware Workstation Sent by: gpfsug-discuss-bounces at gpfsug.org Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually create do mmcrnsd. That said, mmcrfs fails as it can?t find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From viccornell at gmail.com Mon Aug 6 18:31:40 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 6 Aug 2012 18:31:40 +0100 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: I do this on virtualbox a lot. I use an OpenFiler VM to provide iSCSI targets to all of the VMs. Works great as long as you dont actually put much data on it. Not enough IOPS to go round. It would run much better if I had an SSD. Regards, Vic On 6 Aug 2012, at 18:23, Jez Tucker wrote: > Has anyone managed to set this up? (Completely unsupported) > > What sort of vmware disks did you use? > > I created lsilogic vmdks and could actually create do mmcrnsd. > That said, mmcrfs fails as it can?t find the disks. > > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Aug 6 18:53:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 6 Aug 2012 17:53:35 +0000 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B41BB@WARVWEXC1.uk.deluxe-eu.com> It seems vmware needs disk locking switched off. Funny that ;-) To create a disk: vmware-vdiskmanager ?a lsi-logic ?c ?s 10GB ?t 2 clusterdisk1.vmdk Add the disk to your nsd server #1. Save config. Edit .vmx file for server. Add the line: disk.locking = ?false? Boot server. Do this for your other quorum-managers. Disk type in nsddevices is ?generic?. Badda bing. One virtual test cluster. Thanks all. 
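(An aside for anyone reproducing this: the nsddevices file Scott mentions is the optional user exit GPFS runs when scanning for disks, and with vmdk-backed devices the built-in scan won't find them, so you list them yourself with the "generic" driver type. A minimal sketch — the device names here are examples, not taken from the thread:)

```shell
#!/bin/bash
# Sketch of a /var/mmfs/etc/nsddevices user exit: print one
# "<device> <driver-type>" line per disk GPFS should consider as an
# NSD candidate. "generic" is the driver type used in this thread
# for the shared vmdk-backed disks.
list_nsd_candidates() {
    local dev
    for dev in sdb sdc sdd; do      # example device names
        echo "$dev generic"
    done
}

list_nsd_candidates
# In the real user exit you would finish with 'exit 0' to tell GPFS
# the listed devices replace its built-in device scan.
```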
From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 06 August 2012 18:31 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] GPFS & VMware Workstation Two things to looks for: 1. Make sure the virtual LUNS do not do any server caching of data. 2. Use nsddevices file (did you try this?) Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/06/2012 10:25 AM Subject: [gpfsug-discuss] GPFS & VMware Workstation Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually create do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattw at vpac.org Tue Aug 7 05:32:11 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 14:32:11 +1000 (EST) Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <1697484944.65711.1344313730219.JavaMail.root@mail> Message-ID: <967227407.65713.1344313931532.JavaMail.root@mail> I know this question is a month or so old, but I figure this is a ping to see if the list is still alive or not :-) >Curiosity... > > How many of you run Windows, Linux and OS X as clients > (GPFS/NFS/CIFS), in any configuration? 
> >Jez We have 2 small clusters of 42 nodes each, one of them is all Linux, the other a mixture of Linux and Windows Server 2008R2 clients, and to make it more fun, we dual boot the client nodes. 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running CentOS 5 and Windows Server. 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote mounting the FS from the above cluster. 1 single host remote Windows Server remote mounting one of the FS for streaming data capture from a camera. 1 occasional headache for me. Matt. -- Matthew Wallis, Systems Administrator Victorian Partnership for Advanced Computing. Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 From Jez.Tucker at rushes.co.uk Tue Aug 7 08:36:47 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 07:36:47 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <967227407.65713.1344313931532.JavaMail.root@mail> References: <1697484944.65711.1344313730219.JavaMail.root@mail> <967227407.65713.1344313931532.JavaMail.root@mail> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B432F@WARVWEXC1.uk.deluxe-eu.com> Indeed it is. Nice to know what our members are running. I should really make a histogram or suchlike. Any python monkeys out there? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Matthew Wallis > Sent: 07 August 2012 05:32 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Your GPFS O/S support? > > > I know this question is a month or so old, but I figure this is a ping to see if > the list is still alive or not :-) > > >Curiosity... > > > > How many of you run Windows, Linux and OS X as clients > > (GPFS/NFS/CIFS), in any configuration? > > > >Jez > > We have 2 small clusters of 42 nodes each, one of them is all Linux, the > other a mixture of Linux and Windows Server 2008R2 clients, and to make it > more fun, we dual boot the client nodes. 
> > 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running > CentOS 5 and Windows Server. > > 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote > mounting the FS from the above cluster. > > 1 single host remote Windows Server remote mounting one of the FS for > streaming data capture from a camera. > > 1 occasional headache for me. > > Matt. > > -- > Matthew Wallis, Systems Administrator > Victorian Partnership for Advanced Computing. > Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From robert at strubi.ox.ac.uk Tue Aug 7 12:09:31 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 7 Aug 2012 12:09:31 +0100 (BST) Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <201208071109.062254@mail.strubi.ox.ac.uk> Dear GPFS users, Please excuse what is possibly a naive question from a not-yet GPFS admin. We are seriously considering GPFS to provide storage for our compute clusters. We are probably looking at about 600-900TB served into 2000+ Linux cores over InfiniBand. DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is high I/O rates from multiple readers (100-1000) all accessing parts of the same set of 1000-5000 large files (typically 30GB BAM files, for those in the know). We could easily sustain read rates of 5-10GB/s or more if the system would cope. My question is how should we go about configuring the number and specifications of the NSDs? Are there any good rules of thumb? And are there any folk out there using GPFS for high I/O rates like this in a similar setup who would be happy to have their brains/experiences picked? 
Thanks in advance and best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 From Jez.Tucker at rushes.co.uk Tue Aug 7 12:32:55 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 11:32:55 +0000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B44DC@WARVWEXC1.uk.deluxe-eu.com> The HPC folks should probably step in here. Not having such a large system, I'll point you at : https://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_complan.htm > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Robert Esnouf > Sent: 07 August 2012 12:10 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] A GPFS newbie > > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet GPFS admin. > We are seriously considering GPFS to provide storage for our compute > clusters. We are probably looking at about 600-900TB served into 2000+ > Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is > high I/O rates from multiple readers (100-1000) all accessing parts of the > same set of 1000-5000 large files (typically 30GB BAM files, for those in the > know). We could easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number and > specifications of the NSDs? Are there any good rules of thumb? 
And are > there any folk out there using GPFS for high I/O rates like this in a similar > setup who would be happy to have their brains/experiences picked? > > Thanks in advance and best wishes, > Robert Esnouf > > -- > > Dr. Robert Esnouf, > University Research Lecturer > and Head of Research Computing, > Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt > Drive, Oxford OX3 7BN, UK > > Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 > and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From mattw at vpac.org Tue Aug 7 12:43:00 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 21:43:00 +1000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <419502BD-5AAC-43E3-8116-4A96DDBC64C5@vpac.org> Hi Robert, On 07/08/2012, at 9:09 PM, Robert Esnouf wrote: > > Dear GPFS users, > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? From IBM, an x3650 M3 should be able to provide around 2.4GB/sec over QDR IB. That's with 12GB of RAM and dual quad-core X5667s. They believe with the M4 you should be able to sustain somewhere near double that, but we'll say 4GB/sec for safety. So with 4 of those you should be pushing somewhere north of 16GB/sec. With FDR IB and PCIe 3.0, I can certainly believe it's possible; I think they've doubled the minimum RAM in the recent proposal we had from them. 
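(The figures quoted in this reply work out as follows — a quick arithmetic sketch using the thread's rough estimates, not measurements:)

```python
# Per-node bandwidth sketch from the figures quoted in the thread.
servers = 4
per_server_gbs = 4.0                              # "we'll say 4GB/sec for safety"
aggregate_mbs = servers * per_server_gbs * 1024   # ~16 GB/s aggregate

# 32 dense nodes, or 125 nodes at 16 cores each (2000 cores either way)
for nodes in (32, 125):
    print(f"{nodes} nodes -> {aggregate_mbs / nodes:.0f} MB/s per node")
```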
In our benchmarks we certainly found the M3's capable of it. For daily use, our workloads are too mixed; we don't have anyone doing sustained reads or writes on those types of files. Might have to be a bit more expansive on your node configuration though: I can get 2000 cores in 32 nodes these days, so that spec would give you 512MB/sec per node if everyone is reading and writing at once. If you're only doing 16 cores per node, then that's 125 nodes, and only 131MB/sec per node. Matt. From j.buzzard at dundee.ac.uk Tue Aug 7 12:56:14 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Tue, 7 Aug 2012 12:56:14 +0100 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <5021025E.6090102@dundee.ac.uk> On 07/08/12 12:09, Robert Esnouf wrote: > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet > GPFS admin. We are seriously considering GPFS to provide > storage for our compute clusters. We are probably looking at > about 600-900TB served into 2000+ Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific > need is high I/O rates from multiple readers (100-1000) all > accessing parts of the same set of 1000-5000 large files > (typically 30GB BAM files, for those in the know). We could > easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? > I would guess the biggest question is how sequential is the work load? Also how many cores per box, aka how many cores per storage interface card? 
The next question would be how much of your data is "old cruft", that is, files which have not been used in a long time, but are not going to be deleted because they might be useful? If this is a reasonably high number then tiering/ILM is a worthwhile strategy to follow. Of course if you can afford to buy all your data disks in 600GB 3.5" 15kRPM disks then that is the way to go. Using SSDs for your metadata disks is, I would say, a must. How much depends on how many files you have. More detailed answers would require more information. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH The University of Dundee is a registered Scottish Charity, No: SC015096 From s.watkins at nhm.ac.uk Wed Aug 8 11:24:42 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 10:24:42 +0000 Subject: [gpfsug-discuss] Upgrade path Message-ID: Hello, I'm currently looking after a GPFS setup with six nodes and about 80TB disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it. The (vague) plan is to do a rolling upgrade of the various nodes, working through them one at a time, leaving the cluster manager node until last, then doing a failover of that role to another node and then upgrading the last host. Is there a standard upgrade methodology for GPFS systems or any tricks, tips or traps to know about before I go ahead with this? Also is it 'safe' to assume that I could upgrade straight from 3.4.0 to 3.5.x or are there any intermediary steps that need to be performed as well? Any help or advice appreciated, Steff Watkins ----- Steff Watkins Natural History Museum, Cromwell Road, London, SW7 5BD Systems programmer Email: s.watkins at nhm.ac.uk Systems Team Phone: +44 (0)20 7942 6000 opt 2 ======== "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. 
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From viccornell at gmail.com Wed Aug 8 12:32:19 2012 From: viccornell at gmail.com (Vic Cornell) Date: Wed, 8 Aug 2012 12:32:19 +0100 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: Message-ID: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> As with all of these things the Wiki is your friend. In this case it will point you at the documentation. The bits you want are here. http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm and http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm You can have both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it unless I had to. Regards, Vic On 8 Aug 2012, at 11:24, Steff Watkins wrote: > Hello, > > I'm currently looking after a GPFS setup with six nodes and about 80TB disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it. The (vague) plan is to do a rolling upgrade of the various nodes working through them one at a time leaving the cluster manager node until last then doing a failover of that role to another node and then upgrading the last host. > > Is there a standard upgrade methodology for GPFS systems or any tricks, tips or traps to know about before I go ahead with this? > > Also is it 'safe' to assume that I could upgrade straight from 3.4.0 to 3.5.x or are there any intermediary steps that need to be performed as well? > > Any help or advice appreciated, > Steff Watkins > > ----- > Steff Watkins Natural History Museum, Cromwell Road, London,SW7 5BD > Systems programmer Email: s.watkins at nhm.ac.uk > Systems Team Phone: +44 (0)20 7942 6000 opt 2 > ======== > "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. 
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From s.watkins at nhm.ac.uk Wed Aug 8 13:59:04 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 12:59:04 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Vic Cornell > Sent: Wednesday, August 08, 2012 12:32 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > As with all of these things the Wiki is your friend. > > In this case it will point you at the documentation. > > The bits you want are here. > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > and > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > You can both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it > unless I had to. > > Regards, > > Vic As I'm relatively new to the list (been here about two months) I've missed/not been aware of the wiki. Very big thanks to you for putting me onto this. It looks like it's got pretty much everything I'll need for the moment to get the upgrades done. Regards, Steff Watkins ----- Steff Watkins Natural History Museum, Cromwell Road, London,SW75BD Systems programmer Email: s.watkins at nhm.ac.uk Systems Team Phone: +44 (0)20 7942 6000 opt 2 ======== "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. 
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From Jez.Tucker at rushes.co.uk Wed Aug 8 14:06:27 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 8 Aug 2012 13:06:27 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B4D52@WARVWEXC1.uk.deluxe-eu.com> I should mention - though the website is, well, dire atm, if there's useful links I'm more than happy to put them up there. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Steff Watkins > Sent: 08 August 2012 13:59 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Vic Cornell > > Sent: Wednesday, August 08, 2012 12:32 PM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Upgrade path > > > > As with all of these things the Wiki is your friend. > > > > In this case it will point you at the documentation. > > > > The bits you want are here. > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > > > and > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > > > You can both 3.4 and 3.5 nodes in a cluster - but I personally > > wouldn't do it unless I had to. > > > > Regards, > > > > Vic > > As I'm relatively new to the list (been here about two months) I've > missed/not been aware of the wiki. > > Very big thanks to you for putting me onto this. It looks like it's got pretty > much everything I'll need for the moment to get the upgrades done. 
> > Regards, > Steff Watkins > > ----- > Steff Watkins Natural History Museum, Cromwell Road, > London,SW75BD > Systems programmer Email: s.watkins at nhm.ac.uk > Systems Team Phone: +44 (0)20 7942 6000 opt 2 > ======== > "Many were increasingly of the opinion that they'd all made a big mistake in > coming down from the trees in the first place. And some said that even the > trees had been a bad move, and that no one should ever have left the > oceans." - HHGTTG _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From crobson at ocf.co.uk Wed Aug 8 15:29:55 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 8 Aug 2012 15:29:55 +0100 Subject: [gpfsug-discuss] Agenda for September meeting Message-ID: Dear All, The time is nearly here for our next group meeting. We have organised another fantastic day of speakers for you and really hope you continue to support us as well as you have done previously. Please see below the agenda for the next user group meeting: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 pNFS and GPFS Dean Hildebrand, Research Staff Member - Storage Systems IBM Almaden Research Center 12:30 Lunch (Buffet provided) 13:30 SAN Volume Controller/V7000, Easy Tier and Real Time Compression 14:30 WOS: Web Object Scaler Vic Cornell, DDN 14:50 GPFS Metadata + SSDs Andrew Dean, OCF 15:20 User experience of GPFS 15:50 Stupid GPFS Tricks 2012 16:00 Group discussion: Challenges, experiences and questions Led by Jez Tucker, Group Chairperson 16:20 Close The meeting will take place on 20th September at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Please register with me if you will be attending the day no later than 6th September. Places are limited and available on a first come first served basis. 
I look forward to seeing as many of you there as possible! Best wishes Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GPFSUGAgendaSeptember2012.pdf Type: application/pdf Size: 65989 bytes Desc: GPFSUGAgendaSeptember2012.pdf URL: From robert at strubi.ox.ac.uk Tue Aug 14 17:02:47 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 14 Aug 2012 17:02:47 +0100 (BST) Subject: [gpfsug-discuss] Agenda for September meeting In-Reply-To: References: Message-ID: <201208141602.062512@mail.strubi.ox.ac.uk> Dear Claire, I would be interested in attending the GPFS User Group Meeting on 20th September. I am not a GPFS user yet, although we are seriously looking at it and may have an evaluation system by then. If it is still OK for me to attend then please let me know. Best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 ---- Original message ---- >Date: Wed, 8 Aug 2012 15:29:55 +0100 >From: gpfsug-discuss-bounces at gpfsug.org (on behalf of Claire Robson ) >Subject: [gpfsug-discuss] Agenda for September meeting >To: "gpfsug-discuss at gpfsug.org" > > Dear All, > > > > The time is nearly here for our next group meeting. 
> We have organised another fantastic day of speakers > for you and really hope you continue to support as > well as you have done previously. Please see below > the agenda for the next user group meeting: > > > > 10:30 Arrivals and refreshments > > 11:00 Introductions and committee updates > > Jez Tucker, Group Chair & Claire Robson, Group > Secretary > > 11:05 pNFS and GPFS > > Dean Hildebrand, Research Staff Member - Storage > Systems > > IBM Almaden Research Center > > 12:30 Lunch (Buffet provided) > > 13:30 SAN Volume Controller/V7000, Easy Tier and > Real Time Compression > > 14:30 WOS: Web Object Scalar > > Vic Cornell, DDN > > 14:50 GPFS Metadata + SSDs > > Andrew Dean, OCF > > 15:20 User experience of GPFS > > 15:50 Stupid GPFS Tricks 2012 > > 16:00 Group discussion: Challenges, experiences > and questions > > Led by Jez Tucker, Group Chairperson > > 16:20 Close > > > > The meeting will take place on 20th September at > Bishopswood Golf Club, Bishopswood, Bishopswood > Lane, Tadley, Hampshire, RG26 4AT. > > Please register with me if you will be attending the > day no later than 6th September. Places are limited > and available on a first come first served basis. > > > > I look forward to seeing as many of you there as > possible! > > > > Best wishes > > > > Claire Robson > > GPFS User Group Secretary > > > > Tel: 0114 257 2200 > > Mob: 07508 033896 > > > > > > > > OCF plc is a company registered in England and > Wales. Registered number 4132533, VAT number GB 780 > 6803 14. Registered office address: OCF plc, 5 > Rotunda Business Centre, Thorncliffe Park, > Chapeltown, Sheffield, S35 2PG > > > > This message is private and confidential. If you > have received this message in error, please notify > us immediately and remove it from your system. 
> > >________________ >GPFSUGAgendaSeptember2012.pdf (89k bytes) >________________ >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Jez.Tucker at rushes.co.uk Tue Aug 14 19:24:30 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 14 Aug 2012 18:24:30 +0000 Subject: [gpfsug-discuss] Per stgpool locking gpfs->tsm hsm script updated Message-ID: <39571EA9316BE44899D59C7A640C13F5305B94A7@WARVWEXC1.uk.deluxe-eu.com> Hello all Just pushed the latest version of my script to the git repo. - Works on multiple storage pools - Directory lockfiles (atomic) - Use N tape drives - PID stored for easy use of kill -s SIGTERM `cat /path/to/pidfile` - More informative logging into /var/adm/ras/mmfs.log.latest See: https://github.com/gpfsug/gpfsug-tools/tree/master/scripts/hsm Obv. Use at own risk and test first on non-critical data. Bugfixes / stupidity pointed out is appreciated. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Wed Aug 15 16:03:18 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 15 Aug 2012 16:03:18 +0100 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 28/08/2012) Message-ID: I am out of the office until 28/08/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 8, Issue 5" sent on 15/8/2012 12:00:01. This is the only notification you will receive while this person is away. 
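(An aside on the per-pool locking in the HSM script update above — directory lockfiles plus a stored PID. The idea can be sketched as below; the function and file names are illustrative, not taken from the actual script:)

```python
import errno
import os

def acquire_pool_lock(lockdir):
    """Take a per-storage-pool lock; mkdir is atomic on POSIX filesystems,
    so only one mmapplypolicy run per pool can win the race."""
    try:
        os.mkdir(lockdir)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False        # a migration for this pool is already running
        raise
    # record our PID so an admin can: kill -s SIGTERM $(cat <lockdir>/pid)
    with open(os.path.join(lockdir, "pid"), "w") as f:
        f.write(str(os.getpid()))
    return True

def release_pool_lock(lockdir):
    os.remove(os.path.join(lockdir, "pid"))
    os.rmdir(lockdir)
```

This is what keeps a lowDiskSpace callback that fires every two minutes from stacking up duplicate policy runs for the same pool, while still letting different pools migrate concurrently.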
From bevans at canditmedia.co.uk Wed Aug 15 19:39:44 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Wed, 15 Aug 2012 19:39:44 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba Message-ID: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Hello all, Anyone had success with windows extended attributes actually passing through samba over to GPFS? On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS attributes failed: -1 This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. I've not tried bumping up the loglevel yet, this may reveal something more interesting. Many Thanks, Barry Evans Technical Director CandIT Media UK Ltd +44 7500 667 671 bevans at canditmedia.co.uk From orlando.richards at ed.ac.uk Wed Aug 15 22:20:17 2012 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Wed, 15 Aug 2012 22:20:17 +0100 (BST) Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: I had in my head that you'd need to be running samba 3.6 for that to work - although that was a while ago, and they may have backported it. On Wed, 15 Aug 2012, Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) > gpfs_get_xattr: Get GPFS attributes failed: -1 > > This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. 
I've not tried bumping up the loglevel yet, this may reveal something more interesting. > > Many Thanks, > Barry Evans > Technical Director > CandIT Media UK Ltd > +44 7500 667 671 > bevans at canditmedia.co.uk > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From mail at arif-ali.co.uk Wed Aug 15 22:24:33 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 15 Aug 2012 22:24:33 +0100 Subject: [gpfsug-discuss] open-source and gpfs Message-ID: All, I was hoping to use GPFS in an open-source project, which has little to nil funding (We have enough for infrastructure). How would I approach to get the ability to use GPFS for a non-profit open-source project. Would I need to somehow buy a license, as I know there aren't any license agreements that gpfs comes with, and that it is all about trust in terms of licensing. Any feedback on this would be great. -- Arif Ali catch me on freenode IRC, username: arif-ali From j.buzzard at dundee.ac.uk Wed Aug 15 23:07:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 15 Aug 2012 23:07:34 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: <502C1DA6.7090002@dundee.ac.uk> Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing > through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read > only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] > modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS > attributes failed: -1 > > This is with gpfs:winattr set to yes. 
I also tried enabling 'store > dos attributes' for a laugh but the result was no different. I've not > tried bumping up the loglevel yet, this may reveal something more > interesting. Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using samba3x packages that come with CentOS 5.6 in the past and CentOS 5.8 currently. Note I have to rebuild the Samba packages to get the vfs_gpfs module which you need to load. The relevant bits of the smb.conf are # general options vfs objects = shadow_copy2 fileid gpfs # the GPFS stuff fileid : algorithm = fsname gpfs : sharemodes = yes gpfs : winattr = yes force unknown acl user = yes nfs4 : mode = special nfs4 : chown = no nfs4 : acedup = merge # store DOS attributes in extended attributes (vfs_gpfs then stores them in the file system) ea support = yes store dos attributes = yes map readonly = no map archive = no map system = no map hidden = no Though I would note that working out what all the configuration options required to make this (and other stuff) work were took some considerable amount of time. I guess there is a reason why IBM charge $$$ for the SONAS and Storwize Unified products. Note that if you are going for that full make my Samba/GPFS file server look as close as possible to a pukka MS Windows server, you might want to consider setting the following GPFS options cifsBypassShareLocksOnRename cifsBypassTraversalChecking allowWriteWithDeleteChild All fairly self explanatory, and make GPFS follow Windows semantics more closely, though they are "undocumented". There is also an undocumented option for ACL's on mmchfs (I am working on 3.4.0-15) so that you can do mmchfs test -k samba Even shows up in the output of mmlsfs. Not entirely sure what samba ACL's are mind you... JAB. -- Jonathan A.
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH The University of Dundee is a registered Scottish Charity, No: SC015096 From Jez.Tucker at rushes.co.uk Thu Aug 16 15:27:19 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 14:27:19 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From sfadden at us.ibm.com Thu Aug 16 16:00:04 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 16 Aug 2012 08:00:04 -0700 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: I am not aware of any special pricing for open source projects. For more details contact your IBM representative or business partner.
If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Thu Aug 16 16:26:08 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 15:26:08 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA1C8@WARVWEXC1.uk.deluxe-eu.com> I'll sort this out with mine and report back to the list. Questions wrt O/S projects: 1) Cost 2) License terms 3) What can be distributed from the portability layer etc. Any others? From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 16 August 2012 16:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] open-source and gpfs I am not aware of any special pricing for open source projects. For more details contact your IBM representative or business partner. If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list >, Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project.
> > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at canditmedia.co.uk Thu Aug 16 16:40:03 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Thu, 16 Aug 2012 16:40:03 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <502C1DA6.7090002@dundee.ac.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> <502C1DA6.7090002@dundee.ac.uk> Message-ID: Yep, that works a treat, thanks Jonathan! I was missing ea support and the map = no options Cheers, Barry On 15 Aug 2012, at 23:07, Jonathan Buzzard wrote: > Barry Evans wrote: >> Hello all, >> >> Anyone had success with windows extended attributes actually passing >> through samba over to GPFS? >> >> On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read >> only through Win 7 explorer and attrib I get: >> >> [2012/08/15 18:13:32.023966, 1] >> modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS >> attributes failed: -1 >> >> This is with gpfs:winattr set to yes. I also tried enabling 'store >> dos attributes' for a laugh but the result was no different. I've not >> tried bumping up the loglevel yet, this may reveal something more >> interesting. > > Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using samba3x > packages that comes with CentOS 5.6 in the past and CentOS 5.8 > currently. 
Note I have to rebuild the Samba packages to get the vfs_gpfs > module which you need to load. The relevant bits of the smb.conf are > > # general options > vfs objects = shadow_copy2 fileid gpfs > > # the GPFS stuff > fileid : algorithm = fsname > gpfs : sharemodes = yes > gpfs : winattr = yes > force unknown acl user = yes > nfs4 : mode = special > nfs4 : chown = no > nfs4 : acedup = merge > > # store DOS attributes in extended attributes (vfs_gpfs then stores them > in the file system) > ea support = yes > store dos attributes = yes > map readonly = no > map archive = no > map system = no > map hidden = no > > > Though I would note that working out what all the configuration options > required to make this (and other stuff) work were took some > considerable amount of time. I guess there is a reason why IBM charge > $$$ for the SONAS and Storwize Unified products. > > Note that if you are going for that full make my Samba/GPFS file server > look as close as possible to a pukka MS Windows server, you might want > to consider setting the following GPFS options > > cifsBypassShareLocksOnRename > cifsBypassTraversalChecking > allowWriteWithDeleteChild > > All fairly self explanatory, and make GPFS follow Windows semantics > more closely, though they are "undocumented". > > There is also an undocumented option for ACL's on mmchfs (I am > working on 3.4.0-15) so that you can do > > mmchfs test -k samba > > Even shows up in the output of mmlsfs. Not entirely sure what samba > ACL's are mind you... > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > > The University of Dundee is a registered Scottish Charity, No: SC015096 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Jez.Tucker at rushes.co.uk Fri Aug 24 12:42:51 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 24 Aug 2012 11:42:51 +0000 Subject: [gpfsug-discuss] mmbackup Message-ID: <39571EA9316BE44899D59C7A640C13F5305BE156@WARVWEXC1.uk.deluxe-eu.com> Does anyone have to hand a copy of both policies which mmbackup uses for full and incremental? --- Jez Tucker Senior Sysadmin Rushes DDI: +44 (0) 207 851 6276 http://www.rushes.co.uk -------------- next part -------------- An HTML attachment was scrubbed...
URL: From viccornell at gmail.com Mon Aug 6 18:31:40 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 6 Aug 2012 18:31:40 +0100 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: I do this on virtualbox a lot. I use an OpenFiler VM to provide iSCSI targets to all of the VMs. Works great as long as you don't actually put much data on it. Not enough IOPS to go round. It would run much better if I had an SSD. Regards, Vic On 6 Aug 2012, at 18:23, Jez Tucker wrote: > Has anyone managed to set this up? (Completely unsupported) > > What sort of vmware disks did you use? > > I created lsilogic vmdks and could actually do mmcrnsd. > That said, mmcrfs fails as it can't find the disks. > > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Jez.Tucker at rushes.co.uk Mon Aug 6 18:53:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 6 Aug 2012 17:53:35 +0000 Subject: [gpfsug-discuss] GPFS & VMware Workstation In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305B418E@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B41BB@WARVWEXC1.uk.deluxe-eu.com> It seems vmware needs disk locking switched off. Funny that ;-) To create a disk: vmware-vdiskmanager -a lsi-logic -c -s 10GB -t 2 clusterdisk1.vmdk Add the disk to your nsd server #1. Save config. Edit .vmx file for server. Add the line: disk.locking = "false" Boot server. Do this for your other quorum-managers. Disk type in nsddevices is "generic". Badda bing. One virtual test cluster. Thanks all.
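A footnote for anyone repeating this recipe: the nsddevices file Scott mentioned is a user exit that GPFS looks for at /var/mmfs/etc/nsddevices during disk discovery. A minimal hand-rolled sketch might look like the following. The device names are invented, and the return-code convention in the comments is my recollection of IBM's shipped sample, so check /usr/lpp/mmfs/samples/nsddevices.sample on your own install before trusting it:

```shell
# Hypothetical /var/mmfs/etc/nsddevices user exit (illustrative only).
# Each output line names a candidate disk and a driver type, which is
# where the "generic" disk type above comes in.
for dev in sdb sdc sdd; do          # made-up names: match your VM's disks
    [ -b "/dev/$dev" ] && echo "$dev generic"
done
# In IBM's sample, exiting non-zero tells GPFS to run its normal
# device discovery as well, rather than using only this list.
exit 1
```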
From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 06 August 2012 18:31 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] GPFS & VMware Workstation Two things to look for: 1. Make sure the virtual LUNS do not do any server caching of data. 2. Use nsddevices file (did you try this?) Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/06/2012 10:25 AM Subject: [gpfsug-discuss] GPFS & VMware Workstation Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Has anyone managed to set this up? (Completely unsupported) What sort of vmware disks did you use? I created lsilogic vmdks and could actually do mmcrnsd. That said, mmcrfs fails as it can't find the disks. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From mattw at vpac.org Tue Aug 7 05:32:11 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 14:32:11 +1000 (EST) Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <1697484944.65711.1344313730219.JavaMail.root@mail> Message-ID: <967227407.65713.1344313931532.JavaMail.root@mail> I know this question is a month or so old, but I figure this is a ping to see if the list is still alive or not :-) >Curiosity... > > How many of you run Windows, Linux and OS X as clients > (GPFS/NFS/CIFS), in any configuration?
> >Jez We have 2 small clusters of 42 nodes each, one of them is all Linux, the other a mixture of Linux and Windows Server 2008R2 clients, and to make it more fun, we dual boot the client nodes. 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running CentOS 5 and Windows Server. 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote mounting the FS from the above cluster. 1 single host remote Windows Server remote mounting one of the FS for streaming data capture from a camera. 1 occasional headache for me. Matt. -- Matthew Wallis, Systems Administrator Victorian Partnership for Advanced Computing. Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 From Jez.Tucker at rushes.co.uk Tue Aug 7 08:36:47 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 07:36:47 +0000 Subject: [gpfsug-discuss] Your GPFS O/S support? In-Reply-To: <967227407.65713.1344313931532.JavaMail.root@mail> References: <1697484944.65711.1344313730219.JavaMail.root@mail> <967227407.65713.1344313931532.JavaMail.root@mail> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B432F@WARVWEXC1.uk.deluxe-eu.com> Indeed it is. Nice to know what our members are running. I should really make a histogram or suchlike. Any python monkeys out there? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Matthew Wallis > Sent: 07 August 2012 05:32 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] Your GPFS O/S support? > > > I know this question is a month or so old, but I figure this is a ping to see if > the list is still alive or not :-) > > >Curiosity... > > > > How many of you run Windows, Linux and OS X as clients > > (GPFS/NFS/CIFS), in any configuration? > > > >Jez > > We have 2 small clusters of 42 nodes each, one of them is all Linux, the > other a mixture of Linux and Windows Server 2008R2 clients, and to make it > more fun, we dual boot the client nodes. 
> > 4 NSDs running RHEL 6, in a GPFS cluster with the 42 compute nodes running > CentOS 5 and Windows Server. > > 4 Service nodes, 3 running CentOS 5, and 1 running Windows Server, remote > mounting the FS from the above cluster. > > 1 single host remote Windows Server remote mounting one of the FS for > streaming data capture from a camera. > > 1 occasional headache for me. > > Matt. > > -- > Matthew Wallis, Systems Administrator > Victorian Partnership for Advanced Computing. > Ph: +61 3 9925 4645 Fax: +61 3 9925 4647 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From robert at strubi.ox.ac.uk Tue Aug 7 12:09:31 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 7 Aug 2012 12:09:31 +0100 (BST) Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <201208071109.062254@mail.strubi.ox.ac.uk> Dear GPFS users, Please excuse what is possibly a naive question from a not-yet GPFS admin. We are seriously considering GPFS to provide storage for our compute clusters. We are probably looking at about 600-900TB served into 2000+ Linux cores over InfiniBand. DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is high I/O rates from multiple readers (100-1000) all accessing parts of the same set of 1000-5000 large files (typically 30GB BAM files, for those in the know). We could easily sustain read rates of 5-10GB/s or more if the system would cope. My question is how should we go about configuring the number and specifications of the NSDs? Are there any good rules of thumb? And are there any folk out there using GPFS for high I/O rates like this in a similar setup who would be happy to have their brains/experiences picked? 
Thanks in advance and best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 From Jez.Tucker at rushes.co.uk Tue Aug 7 12:32:55 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 7 Aug 2012 11:32:55 +0000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B44DC@WARVWEXC1.uk.deluxe-eu.com> The HPC folks should probably step in here. Not having such a large system, I'll point you at : https://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_complan.htm > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Robert Esnouf > Sent: 07 August 2012 12:10 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] A GPFS newbie > > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet GPFS admin. > We are seriously considering GPFS to provide storage for our compute > clusters. We are probably looking at about 600-900TB served into 2000+ > Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific need is > high I/O rates from multiple readers (100-1000) all accessing parts of the > same set of 1000-5000 large files (typically 30GB BAM files, for those in the > know). We could easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number and > specifications of the NSDs? Are there any good rules of thumb? 
And are > there any folk out there using GPFS for high I/O rates like this in a similar > setup who would be happy to have their brains/experiences picked? > > Thanks in advance and best wishes, > Robert Esnouf > > -- > > Dr. Robert Esnouf, > University Research Lecturer > and Head of Research Computing, > Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt > Drive, Oxford OX3 7BN, UK > > Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 > and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From mattw at vpac.org Tue Aug 7 12:43:00 2012 From: mattw at vpac.org (Matthew Wallis) Date: Tue, 7 Aug 2012 21:43:00 +1000 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <419502BD-5AAC-43E3-8116-4A96DDBC64C5@vpac.org> Hi Robert, On 07/08/2012, at 9:09 PM, Robert Esnouf wrote: > > Dear GPFS users, > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? From IBM, an x3650 M3 should be able to provide around 2.4GB/sec over QDR IB. That's with 12GB of RAM and dual quad core X5667s. They believe with the M4 you should be able to sustain somewhere near double that, but we'll say 4GB/sec for safety. So with 4 of those you should be pushing somewhere north of 16GB/sec. With FDR IB and PCIe 3.0, I can certainly believe it's possible, I think they've doubled the minimum RAM in the recent proposal we had from them.
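The aggregate estimate above divides out with simple arithmetic; a quick sketch for playing with the numbers (the 4GB/sec-per-server figure is the safety estimate above, not a measurement, and the node counts are just example splits of 2000 cores):

```shell
# Aggregate NSD server bandwidth, and each client's share if every
# node streams at once.
aggregate_gb() { echo $(( $1 * $2 )); }         # servers x GB/sec each
per_node_mb()  { echo $(( $1 * 1024 / $2 )); }  # aggregate GB/sec -> MB/sec per node

aggregate_gb 4 4     # 4 servers at ~4 GB/sec: 16 GB/sec aggregate
per_node_mb 16 32    # 2000 cores as 32 fat nodes: 512 MB/sec each
per_node_mb 16 125   # 2000 cores as 125 x 16-core nodes: 131 MB/sec each
```

These are ceilings for perfectly even, simultaneous streaming; mixed workloads will land below them.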
In our benchmarks we certainly found the M3's capable of it, for daily use, our workloads are too mixed, we don't have anyone doing sustained reads or writes on those types of files. Might have to be a bit more expansive on your node configuration though, I can get 2000 cores in 32 nodes these days, so that spec would give you 512MB/sec per node if everyone is reading and writing at once. If you're only doing 16 cores per node, then that's 125 nodes, and only 131MB/sec per node. Matt. From j.buzzard at dundee.ac.uk Tue Aug 7 12:56:14 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Tue, 7 Aug 2012 12:56:14 +0100 Subject: [gpfsug-discuss] A GPFS newbie In-Reply-To: <201208071109.062254@mail.strubi.ox.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5305B165C@WARVWEXC1.uk.deluxe-eu.com> <201208071109.062254@mail.strubi.ox.ac.uk> Message-ID: <5021025E.6090102@dundee.ac.uk> On 07/08/12 12:09, Robert Esnouf wrote: > > Dear GPFS users, > > Please excuse what is possibly a naive question from a not-yet > GPFS admin. We are seriously considering GPFS to provide > storage for our compute clusters. We are probably looking at > about 600-900TB served into 2000+ Linux cores over InfiniBand. > DDN SFA10K and SFA12K seem like good fits. Our domain-specific > need is high I/O rates from multiple readers (100-1000) all > accessing parts of the same set of 1000-5000 large files > (typically 30GB BAM files, for those in the know). We could > easily sustain read rates of 5-10GB/s or more if the system > would cope. > > My question is how should we go about configuring the number > and specifications of the NSDs? Are there any good rules of > thumb? And are there any folk out there using GPFS for high > I/O rates like this in a similar setup who would be happy to > have their brains/experiences picked? > I would guess the biggest question is how sequential is the work load? Also how many cores per box, aka how many cores per storage interface card? 
The next question would be how much of your data is "old cruft", that is, files which have not been used in a long time, but are not going to be deleted because they might be useful? If this is a reasonably high number then tiering/ILM is a worthwhile strategy to follow. Of course if you can afford to buy all your data disks in 600GB 3.5" 15kRPM disks then that is the way to go. Using SSDs for your metadata disks is, I would say, a must. How much depends on how many files you have. More detailed answers would require more information. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH The University of Dundee is a registered Scottish Charity, No: SC015096 From s.watkins at nhm.ac.uk Wed Aug 8 11:24:42 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 10:24:42 +0000 Subject: [gpfsug-discuss] Upgrade path Message-ID: Hello, I'm currently looking after a GPFS setup with six nodes and about 80TB disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it. The (vague) plan is to do a rolling upgrade of the various nodes working through them one at a time leaving the cluster manager node until last then doing a failover of that role to another node and then upgrading the last host. Is there a standard upgrade methodology for GPFS systems or any tricks, tips or traps to know about before I go ahead with this? Also is it 'safe' to assume that I could upgrade straight from 3.4.0 to 3.5.x or are there any intermediary steps that need to be performed as well? Any help or advice appreciated, Steff Watkins ----- Steff Watkins Natural History Museum, Cromwell Road, London, SW7 5BD Systems programmer Email: s.watkins at nhm.ac.uk Systems Team Phone: +44 (0)20 7942 6000 opt 2 ======== "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place.
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From viccornell at gmail.com Wed Aug 8 12:32:19 2012 From: viccornell at gmail.com (Vic Cornell) Date: Wed, 8 Aug 2012 12:32:19 +0100 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: Message-ID: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> As with all of these things the Wiki is your friend. In this case it will point you at the documentation. The bits you want are here. http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm and http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.cluster.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm You can have both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it unless I had to. Regards, Vic On 8 Aug 2012, at 11:24, Steff Watkins wrote: > Hello, > > I'm currently looking after a GPFS setup with six nodes and about 80TB disk. The current GPFS level is 3.4.0 and I'm looking to upgrade it. The (vague) plan is to do a rolling upgrade of the various nodes working through them one at a time leaving the cluster manager node until last then doing a failover of that role to another node and then upgrading the last host. > > Is there a standard upgrade methodology for GPFS systems or any tricks, tips or traps to know about before I go ahead with this? > > Also is it 'safe' assume that I could upgrade straight from 3.4.0 to 3.5.x or are there any intermediary steps that need to be performed as well? > > Any help or advice appreciated, > Steff Watkins > > ----- > Steff Watkins Natural History Museum, Cromwell Road, London,SW7 5BD > Systems programmer Email: s.watkins at nhm.ac.uk > Systems Team Phone: +44 (0)20 7942 6000 opt 2 > ======== > "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place.
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From s.watkins at nhm.ac.uk Wed Aug 8 13:59:04 2012 From: s.watkins at nhm.ac.uk (Steff Watkins) Date: Wed, 8 Aug 2012 12:59:04 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Vic Cornell > Sent: Wednesday, August 08, 2012 12:32 PM > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > As with all of these things the Wiki is your friend. > > In this case it will point you at the documentation. > > The bits you want are here. > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > and > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.clust > er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > You can both 3.4 and 3.5 nodes in a cluster - but I personally wouldn't do it > unless I had to. > > Regards, > > Vic As I'm relatively new to the list (been here about two months) I've missed/not been aware of the wiki. Very big thanks to you for putting me onto this. It looks like it's got pretty much everything I'll need for the moment to get the upgrades done. Regards, Steff Watkins ----- Steff Watkins Natural History Museum, Cromwell Road, London,SW75BD Systems programmer Email: s.watkins at nhm.ac.uk Systems Team Phone: +44 (0)20 7942 6000 opt 2 ======== "Many were increasingly of the opinion that they'd all made a big mistake in coming down from the trees in the first place. 
And some said that even the trees had been a bad move, and that no one should ever have left the oceans." - HHGTTG From Jez.Tucker at rushes.co.uk Wed Aug 8 14:06:27 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 8 Aug 2012 13:06:27 +0000 Subject: [gpfsug-discuss] Upgrade path In-Reply-To: References: <49163E87-01F9-40AD-9C4E-F8A10E11C119@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305B4D52@WARVWEXC1.uk.deluxe-eu.com> I should mention - though the website is, well, dire atm, if there's useful links I'm more than happy to put them up there. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Steff Watkins > Sent: 08 August 2012 13:59 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Upgrade path > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Vic Cornell > > Sent: Wednesday, August 08, 2012 12:32 PM > > To: gpfsug main discussion list > > Subject: Re: [gpfsug-discuss] Upgrade path > > > > As with all of these things the Wiki is your friend. > > > > In this case it will point you at the documentation. > > > > The bits you want are here. > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_migratl.htm > > > > and > > > > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/topic/com.ibm.c > > lust er.gpfs.v3r5.gpfs300.doc/bl1ins_mig35.htm > > > > You can both 3.4 and 3.5 nodes in a cluster - but I personally > > wouldn't do it unless I had to. > > > > Regards, > > > > Vic > > As I'm relatively new to the list (been here about two months) I've > missed/not been aware of the wiki. > > Very big thanks to you for putting me onto this. It looks like it's got pretty > much everything I'll need for the moment to get the upgrades done. 
> > Regards, > Steff Watkins > > ----- > Steff Watkins Natural History Museum, Cromwell Road, > London,SW75BD > Systems programmer Email: s.watkins at nhm.ac.uk > Systems Team Phone: +44 (0)20 7942 6000 opt 2 > ======== > "Many were increasingly of the opinion that they'd all made a big mistake in > coming down from the trees in the first place. And some said that even the > trees had been a bad move, and that no one should ever have left the > oceans." - HHGTTG _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From crobson at ocf.co.uk Wed Aug 8 15:29:55 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 8 Aug 2012 15:29:55 +0100 Subject: [gpfsug-discuss] Agenda for September meeting Message-ID: Dear All, The time is nearly here for our next group meeting. We have organised another fantastic day of speakers for you and really hope you continue to support as well as you have done previously. Please see below the agenda for the next user group meeting: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 pNFS and GPFS Dean Hildebrand, Research Staff Member - Storage Systems IBM Almaden Research Center 12:30 Lunch (Buffet provided) 13:30 SAN Volume Controller/V7000, Easy Tier and Real Time Compression 14:30 WOS: Web Object Scalar Vic Cornell, DDN 14:50 GPFS Metadata + SSDs Andrew Dean, OCF 15:20 User experience of GPFS 15:50 Stupid GPFS Tricks 2012 16:00 Group discussion: Challenges, experiences and questions Led by Jez Tucker, Group Chairperson 16:20 Close The meeting will take place on 20th September at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Please register with me if you will be attending the day no later than 6th September. Places are limited and available on a first come first served basis. 
I look forward to seeing as many of you there as possible! Best wishes Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GPFSUGAgendaSeptember2012.pdf Type: application/pdf Size: 65989 bytes Desc: GPFSUGAgendaSeptember2012.pdf URL: From robert at strubi.ox.ac.uk Tue Aug 14 17:02:47 2012 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Tue, 14 Aug 2012 17:02:47 +0100 (BST) Subject: [gpfsug-discuss] Agenda for September meeting In-Reply-To: References: Message-ID: <201208141602.062512@mail.strubi.ox.ac.uk> Dear Claire, I would be interested in attending the GPFS User Group Meeting on 20th September. I am not a GPFS user yet, although we are seriously looking at it and may have an evaluation system by then. If it is still OK for me to attend then please let me know. Best wishes, Robert Esnouf -- Dr. Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 ---- Original message ---- >Date: Wed, 8 Aug 2012 15:29:55 +0100 >From: gpfsug-discuss-bounces at gpfsug.org (on behalf of Claire Robson ) >Subject: [gpfsug-discuss] Agenda for September meeting >To: "gpfsug-discuss at gpfsug.org" > > Dear All, > > > > The time is nearly here for our next group meeting. 
> We have organised another fantastic day of speakers > for you and really hope you continue to support as > well as you have done previously. Please see below > the agenda for the next user group meeting: > > > > 10:30 Arrivals and refreshments > > 11:00 Introductions and committee updates > > Jez Tucker, Group Chair & Claire Robson, Group > Secretary > > 11:05 pNFS and GPFS > > Dean Hildebrand, Research Staff Member - Storage > Systems > > IBM Almaden Research Center > > 12:30 Lunch (Buffet provided) > > 13:30 SAN Volume Controller/V7000, Easy Tier and > Real Time Compression > > 14:30 WOS: Web Object Scalar > > Vic Cornell, DDN > > 14:50 GPFS Metadata + SSDs > > Andrew Dean, OCF > > 15:20 User experience of GPFS > > 15:50 Stupid GPFS Tricks 2012 > > 16:00 Group discussion: Challenges, experiences > and questions > > Led by Jez Tucker, Group Chairperson > > 16:20 Close > > > > The meeting will take place on 20^th September at > Bishopswood Golf Club, Bishopswood, Bishopswood > Lane, Tadley, Hampshire, RG26 4AT. > > Please register with me if you will be attending the > day no later than 6^th September. Places are limited > and available on a first come first served basis. > > > > I look forward to seeing as many of you there as > possible! > > > > Best wishes > > > > Claire Robson > > GPFS User Group Secretary > > > > Tel: 0114 257 2200 > > Mob: 07508 033896 > > > > > > > > OCF plc is a company registered in England and > Wales. Registered number 4132533, VAT number GB 780 > 6803 14. Registered office address: OCF plc, 5 > Rotunda Business Centre, Thorncliffe Park, > Chapeltown, Sheffield, S35 2PG > > > > This message is private and confidential. If you > have received this message in error, please notify > us immediately and remove it from your system. 
> > >________________ >GPFSUGAgendaSeptember2012.pdf (89k bytes) >________________ >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss From Jez.Tucker at rushes.co.uk Tue Aug 14 19:24:30 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 14 Aug 2012 18:24:30 +0000 Subject: [gpfsug-discuss] Per stgpool locking gpfs->tsm hsm script updated Message-ID: <39571EA9316BE44899D59C7A640C13F5305B94A7@WARVWEXC1.uk.deluxe-eu.com> Hello all Just pushed the latest version of my script to the git repo. - Works on multiple storage pools - Directory lockfiles (atomic) - Use N tape drives - PID stored for easy use of kill -s SIGTERM `cat /path/to/pidfile` - More informative logging into /var/adm/ras/mmfs.log.latest See: https://github.com/gpfsug/gpfsug-tools/tree/master/scripts/hsm Obv. Use at own risk and test first on non-critical data. Bugfixes / stupidity pointed out are appreciated. --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) From ANDREWD at uk.ibm.com Wed Aug 15 16:03:18 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 15 Aug 2012 16:03:18 +0100 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 28/08/2012) Message-ID: I am out of the office until 28/08/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 8, Issue 5" sent on 15/8/2012 12:00:01. This is the only notification you will receive while this person is away.
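Returning to the HSM script announcement above: the "directory lockfiles (atomic)" trick works because mkdir(2) either creates the directory or fails, with nothing in between, so two lowDiskSpace callbacks firing for the same storage pool cannot both win. A minimal sketch of the idea with a made-up lock path (illustrative only, not code from the gpfsug-tools repo):

```python
import os

def try_lock(pool: str, lock_root: str = "/var/lock/gpfs-hsm") -> bool:
    """Take a per-storage-pool lock. os.mkdir() is atomic, so if two
    callbacks fire for the same pool, exactly one of them succeeds."""
    os.makedirs(lock_root, exist_ok=True)
    lockdir = os.path.join(lock_root, pool + ".lock")
    try:
        os.mkdir(lockdir)
    except FileExistsError:
        return False  # a migration for this pool is already running
    # Record our PID so an operator can: kill -s SIGTERM $(cat .../pid)
    with open(os.path.join(lockdir, "pid"), "w") as f:
        f.write(str(os.getpid()))
    return True

def unlock(pool: str, lock_root: str = "/var/lock/gpfs-hsm") -> None:
    lockdir = os.path.join(lock_root, pool + ".lock")
    os.remove(os.path.join(lockdir, "pid"))
    os.rmdir(lockdir)
```

Because the lock is keyed on the pool name, a migration of pool A never blocks a threshold callback for pool B, which is exactly the per-stgpool behaviour the first message in the thread was after.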
From bevans at canditmedia.co.uk Wed Aug 15 19:39:44 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Wed, 15 Aug 2012 19:39:44 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba Message-ID: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Hello all, Anyone had success with windows extended attributes actually passing through samba over to GPFS? On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS attributes failed: -1 This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. I've not tried bumping up the loglevel yet, this may reveal something more interesting. Many Thanks, Barry Evans Technical Director CandIT Media UK Ltd +44 7500 667 671 bevans at canditmedia.co.uk From orlando.richards at ed.ac.uk Wed Aug 15 22:20:17 2012 From: orlando.richards at ed.ac.uk (orlando.richards at ed.ac.uk) Date: Wed, 15 Aug 2012 22:20:17 +0100 (BST) Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: I had in my head that you'd need to be running samba 3.6 for that to work - although that was a while ago, and they may have backported it. On Wed, 15 Aug 2012, Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] modules/vfs_gpfs.c:1003(gpfs_get_xattr) > gpfs_get_xattr: Get GPFS attributes failed: -1 > > This is with gpfs:winattr set to yes. I also tried enabling 'store dos attributes' for a laugh but the result was no different. 
I've not tried bumping up the loglevel yet, this may reveal something more interesting. > > Many Thanks, > Barry Evans > Technical Director > CandIT Media UK Ltd > +44 7500 667 671 > bevans at canditmedia.co.uk > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From mail at arif-ali.co.uk Wed Aug 15 22:24:33 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 15 Aug 2012 22:24:33 +0100 Subject: [gpfsug-discuss] open-source and gpfs Message-ID: All, I was hoping to use GPFS in an open-source project, which has little to nil funding (We have enough for infrastructure). How would I approach to get the ability to use GPFS for a non-profit open-source project. Would I need to somehow buy a license, as I know there aren't any license agreements that gpfs comes with, and that it is all about trust in terms of licensing. Any feedback on this would be great. -- Arif Ali catch me on freenode IRC, username: arif-ali From j.buzzard at dundee.ac.uk Wed Aug 15 23:07:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 15 Aug 2012 23:07:34 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> Message-ID: <502C1DA6.7090002@dundee.ac.uk> Barry Evans wrote: > Hello all, > > Anyone had success with windows extended attributes actually passing > through samba over to GPFS? > > On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read > only through Win 7 explorer and attrib I get: > > [2012/08/15 18:13:32.023966, 1] > modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS > attributes failed: -1 > > This is with gpfs:winattr set to yes. 
> I also tried enabling 'store dos attributes' for a laugh but the result was no different. > I've not tried bumping up the loglevel yet, this may reveal something more interesting. Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using the samba3x packages that come with CentOS 5.6 in the past and CentOS 5.8 currently. Note I have to rebuild the Samba packages to get the vfs_gpfs module, which you need to load. The relevant bits of the smb.conf are:

# general options
vfs objects = shadow_copy2 fileid gpfs

# the GPFS stuff
fileid : algorithm = fsname
gpfs : sharemodes = yes
gpfs : winattr = yes
force unknown acl user = yes
nfs4 : mode = special
nfs4 : chown = no
nfs4 : acedup = merge

# store DOS attributes in extended attributes (vfs_gpfs then stores them in the file system)
ea support = yes
store dos attributes = yes
map readonly = no
map archive = no
map system = no
map hidden = no

Though I would note that working out all the configuration options required to make this (and other stuff) work took a considerable amount of time. I guess there is a reason why IBM charge $$$ for the SONAS and Storwize Unified products. Note that if you are going for the full "make my Samba/GPFS file server look as close as possible to a pukka MS Windows server" setup, you might want to consider setting the following GPFS options: cifsBypassShareLocksOnRename cifsBypassTraversalChecking allowWriteWithDeleteChild All fairly self-explanatory; they make GPFS follow Windows semantics more closely, though they are "undocumented". There is also an undocumented option for ACLs on mmchfs (I am working on 3.4.0-15) so that you can do mmchfs test -k samba It even shows up in the output of mmlsfs. Not entirely sure what samba ACLs are, mind you... JAB. -- Jonathan A.
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH The University of Dundee is a registered Scottish Charity, No: SC015096 From Jez.Tucker at rushes.co.uk Thu Aug 16 15:27:19 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 14:27:19 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From sfadden at us.ibm.com Thu Aug 16 16:00:04 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 16 Aug 2012 08:00:04 -0700 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: I am not aware of any special pricing for open source projects. For more details contact your IBM representative or business partner.
If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list , Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. > > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Thu Aug 16 16:26:08 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 16 Aug 2012 15:26:08 +0000 Subject: [gpfsug-discuss] open-source and gpfs In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5305BA13A@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5305BA1C8@WARVWEXC1.uk.deluxe-eu.com> I'll sort this out with mine and report back to the list. Questions wrt O/S projects: 1) Cost 2) License terms 3) What can be distributed from the portability layer etc. Any others? From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 16 August 2012 16:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] open-source and gpfs I am not aware of any special pricing for open source projects. For more details contact you IBM representative or business partner. If you don't know who that is, let me know and I can help you track them down. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list >, Date: 08/16/2012 07:27 AM Subject: Re: [gpfsug-discuss] open-source and gpfs Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ TBH. I'm not too sure about this myself. Scott, can you comment with the official IBM line? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Arif Ali > Sent: 15 August 2012 22:25 > To: gpfsug-discuss > Subject: [gpfsug-discuss] open-source and gpfs > > All, > > I was hoping to use GPFS in an open-source project, which has little > to nil funding (We have enough for infrastructure). How would I > approach to get the ability to use GPFS for a non-profit open-source > project. 
> > Would I need to somehow buy a license, as I know there aren't any > license agreements that gpfs comes with, and that it is all about > trust in terms of licensing. > > Any feedback on this would be great. > > -- > Arif Ali > > catch me on freenode IRC, username: arif-ali > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bevans at canditmedia.co.uk Thu Aug 16 16:40:03 2012 From: bevans at canditmedia.co.uk (Barry Evans) Date: Thu, 16 Aug 2012 16:40:03 +0100 Subject: [gpfsug-discuss] Windows xattr/Samba In-Reply-To: <502C1DA6.7090002@dundee.ac.uk> References: <8C193A59-5E29-4F9B-A90C-9B90613D6A1A@canditmedia.co.uk> <502C1DA6.7090002@dundee.ac.uk> Message-ID: Yep, that works a treat, thanks Jonathan! I was missing ea support and the map = no options Cheers, Barry On 15 Aug 2012, at 23:07, Jonathan Buzzard wrote: > Barry Evans wrote: >> Hello all, >> >> Anyone had success with windows extended attributes actually passing >> through samba over to GPFS? >> >> On a 3.4.0-13 system with samba 3.5.11 when trying to set a file read >> only through Win 7 explorer and attrib I get: >> >> [2012/08/15 18:13:32.023966, 1] >> modules/vfs_gpfs.c:1003(gpfs_get_xattr) gpfs_get_xattr: Get GPFS >> attributes failed: -1 >> >> This is with gpfs:winattr set to yes. I also tried enabling 'store >> dos attributes' for a laugh but the result was no different. I've not >> tried bumping up the loglevel yet, this may reveal something more >> interesting. > > Hum, worked for me with 3.4.0-13 and now with 3.4.0-15, using samba3x > packages that comes with CentOS 5.6 in the past and CentOS 5.8 > currently. 
Note I have to rebuild the Samba packages to get the vfs_gpfs > module which you need to load. The relevant bits of the smb.conf are > > # general options > vfs objects = shadow_copy2 fileid gpfs > > # the GPFS stuff > fileid : algorithm = fsname > gpfs : sharemodes = yes > gpfs : winattr = yes > force unknown acl user = yes > nfs4 : mode = special > nfs4 : chown = no > nfs4 : acedup = merge > > # store DOS attributes in extended attributes (vfs_gpfs then stores them > in the file system) > ea support = yes > store dos attributes = yes > map readonly = no > map archive = no > map system = no > map hidden = no > > > Though I would note that working out what all the configuration options > required to make this (and other stuff) work where took some > considerable amount of time. I guess there is a reason why IBM charge > $$$ for the SONAS and StoreWise Unified products. > > Note that if you are going for that full make my Samba/GPFS file server > look as close as possible to a pucker MS Windows server, you might want > to consider setting the following GPFS options > > cifsBypassShareLocksOnRename > cifsBypassTraversalChecking > allowWriteWithDeleteChild > > All fairly self explanatory, and make GPFS follow Windows schematics > more closely, though they are "undocumented". > > There is also there is an undocumented option for ACL's on mmchfs (I am > working on 3.4.0-15) so that you can do > > mmchfs test -k samba > > Even shows up in the output of mmlsfs. Not entirely sure what samba > ACL's are mind you... > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > > The University of Dundee is a registered Scottish Charity, No: SC015096 > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Fri Aug 24 12:42:51 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 24 Aug 2012 11:42:51 +0000 Subject: [gpfsug-discuss] mmbackup Message-ID: <39571EA9316BE44899D59C7A640C13F5305BE156@WARVWEXC1.uk.deluxe-eu.com> Does anyone have to hand a copy of both policies which mmbackup uses for full and incremental? --- Jez Tucker Senior Sysadmin Rushes DDI: +44 (0) 207 851 6276 http://www.rushes.co.uk
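For anyone reading the archive later: candidate selection of the kind mmbackup drives is written in the GPFS policy language as EXTERNAL LIST rules. The sketch below is hand-written to show the general shape only; it is not the version-specific policy that mmbackup actually generates, and the interface-script path and list name are invented:

```
/* Illustrative only -- not the policy mmbackup generates. */
RULE EXTERNAL LIST 'backupcands' EXEC '/usr/local/bin/feed-candidates-to-tsm'

/* "Full" flavour: list every file. */
RULE 'full' LIST 'backupcands'

/* "Incremental" flavour: only files changed recently. */
RULE 'incr' LIST 'backupcands'
     WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) < INTERVAL '1' DAYS
```

Either rule would be fed to mmapplypolicy on its own; note that mmbackup itself decides what changed from its shadow-database state, which a simple mtime test like the one above does not capture.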