From j.buzzard at dundee.ac.uk Wed May 9 16:47:25 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2012 16:47:25 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba Message-ID: <4FAA918D.50101@dundee.ac.uk> Not documented, but I believe there are four ;-) allowSambaCaseInsensitiveLookup syncSambaMetadataOps cifsBypassShareLocksOnRename cifsBypassTraversalChecking From what I can determine they are binary on/off options. For example, you enable the first with mmchconfig allowSambaCaseInsensitiveLookup=yes I am guessing, but I would imagine that when the first is turned on, then when Samba tries to look up a filename GPFS will do the case-insensitive matching for you, which should be faster than Samba having to do it. You should obviously have case sensitive = no in your Samba config as well. The cifsBypassTraversalChecking is explained in the SONAS manual page for chcfg, and I note it is on by default in SONAS. http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fmanpages%2Fchcfg.html Some Googling indicates that NetApp and EMC have options for bypass traverse checking on their filers, so it is something you probably want to turn on. The other two sound fairly self-explanatory, but the question is why you would want them turned on. Anyone got any ideas? JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From j.buzzard at dundee.ac.uk Wed May 9 23:57:56 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2012 23:57:56 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <4FAA918D.50101@dundee.ac.uk> References: <4FAA918D.50101@dundee.ac.uk> Message-ID: <4FAAF674.9070809@dundee.ac.uk> Jonathan Buzzard wrote: > > Not documented, but I believe there are four ;-) > > allowSambaCaseInsensitiveLookup > syncSambaMetadataOps > cifsBypassShareLocksOnRename > cifsBypassTraversalChecking > Just to add to this, I believe there are some more, mostly because they sit between the first two and last two in the mmchconfig Korn shell file. They are allowSynchronousFcntlRetries allowWriteWithDeleteChild Not sure what the first one does, but the second one I am guessing allows you to write to a folder if you can delete child folders, which would make GPFS/Samba follow Windows semantics more closely. Over the coming days I hope to play around with some of these options and see what they do. Also there is an undocumented option for ACLs on mmchfs (I am working on 3.4.0-13 here so it's not even 3.5) so that you can do mmchfs test -k samba and then [root at krebs1 bin]# mmlsfs test flag value description ------------------- ------------------------ ----------------------------------- -f 32768 Minimum fragment size in bytes -i 512 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k samba ACL semantics in effect -n 32 Estimated number of nodes that will mount file system -B 1048576 Block size -Q user;group;fileset Quotas enforced none Default quotas enabled --filesetdf no Fileset df enabled? -V 12.07 (3.4.0.4) Current file system version 11.05 (3.3.0.2) Original file system version --create-time Fri Dec 4 09:37:28 2009 File system creation time -u yes Support for large LUNs? -z yes Is DMAPI enabled?
-L 4194304 Logfile size -E no Exact mtime mount option -S no Suppress atime mount option -K always Strict replica allocation option --fastea yes Fast external attributes enabled? --inode-limit 1427760 Maximum number of inodes -P system;nearline Disk storage pools in file system -d gpfs19nsd;gpfs20nsd;gpfs21nsd;gpfs22nsd;gpfs23nsd;gpfs24nsd Disks in file system -A yes Automatic mount option -o none Additional mount options -T /test Default mount point --mount-priority 0 Mount priority Not entirely sure what samba ACL's are mind you. Does it modify NFSv4 ACL's so they follow NTFS schematics more closely? JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu May 10 08:39:09 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 10 May 2012 07:39:09 +0000 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <4FAAF674.9070809@dundee.ac.uk> References: <4FAA918D.50101@dundee.ac.uk> <4FAAF674.9070809@dundee.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> If you're on 3.4.0-13, can you confirm the operation of DMAPI Windows mounts. Any serious issues? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 09 May 2012 23:58 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS magic options for Samba > > Jonathan Buzzard wrote: > > > > Not documented, but I believe there are four ;-) > > > > allowSambaCaseInsensitiveLookup > > syncSambaMetadataOps > > cifsBypassShareLocksOnRename > > cifsBypassTraversalChecking > > > > Just add to this I believe there are some more, mostly because they are > between the first two and last two in mmchconfig Korn shell file > > They are > > allowSynchronousFcntlRetries > allowWriteWithDeleteChild > > Not sure what the first one does, but the second one I am guessing allows > you to write to a folder if you can delete child folders and would make > GPFS/Samba follow Windows schematics closer. Over the coming days I > hope to play around with some of these options and see what they do. > > Also there is an undocumented option for ACL's on mmchfs (I am working on > 3.4.0-13 here so it's not even 3.5) so that you can do > > mmchfs test -k samba > > and then > > [root at krebs1 bin]# mmlsfs test > flag value description > ------------------- ------------------------ > ----------------------------------- > -f 32768 Minimum fragment size in bytes > -i 512 Inode size in bytes > -I 32768 Indirect block size in bytes > -m 1 Default number of metadata > replicas > -M 2 Maximum number of metadata > replicas > -r 1 Default number of data > replicas > -R 2 Maximum number of data > replicas > -j cluster Block allocation type > -D nfs4 File locking semantics in > effect > -k samba ACL semantics in effect > -n 32 Estimated number of nodes > that will mount file system > -B 1048576 Block size > -Q user;group;fileset Quotas enforced > none Default quotas enabled > --filesetdf no Fileset df enabled? > -V 12.07 (3.4.0.4) Current file system version > 11.05 (3.3.0.2) Original file system version > --create-time Fri Dec 4 09:37:28 2009 File system creation time > -u yes Support for large LUNs? > -z yes Is DMAPI enabled? 
> -L 4194304 Logfile size > -E no Exact mtime mount option > -S no Suppress atime mount option > -K always Strict replica allocation > option > --fastea yes Fast external attributes > enabled? > --inode-limit 1427760 Maximum number of inodes > -P system;nearline Disk storage pools in file > system > -d > gpfs19nsd;gpfs20nsd;gpfs21nsd;gpfs22nsd;gpfs23nsd;gpfs24nsd Disks in file > system > -A yes Automatic mount option > -o none Additional mount options > -T /test Default mount point > --mount-priority 0 Mount priority > > > Not entirely sure what samba ACL's are mind you. Does it modify NFSv4 > ACL's so they follow NTFS schematics more closely? > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From j.buzzard at dundee.ac.uk Thu May 10 09:54:46 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 10 May 2012 09:54:46 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> References: <4FAA918D.50101@dundee.ac.uk> <4FAAF674.9070809@dundee.ac.uk> <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FAB8256.5010409@dundee.ac.uk> On 10/05/12 08:39, Jez Tucker wrote: > If you're on 3.4.0-13, can you confirm the operation of DMAPI Windows mounts. > Any serious issues? Yes I can confirm that it does not work. Thw documentation is/was all wrong. See this thread in the GPFS forums. http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=15 Basically you need to wait to 3.4.0-14 or jump to 3.5.0-1 :-) I have however noticed that the per fileset quotas seem to be fully functional on 3.4.0-13, turn them on with mmchfs test --perfileset-quota and off with, mmchfs test --noperfileset-quota set a quota for user nemo on the homes fileset with mmedquota -u test:homes:nemo or if you prefer the command line than messing with an editor mmsetquota -u nemo -h 25G /test/homes JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From crobson at ocf.co.uk Fri May 18 12:15:50 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Fri, 18 May 2012 12:15:50 +0100 Subject: [gpfsug-discuss] A date for your diaries Message-ID: Dear All, The next GPFS user group meeting will take place on Thursday 20th September. Paul Tomlinson, AWE, has kindly offered to host and the meeting will take place at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Agenda to follow soon. Please contact me to register your place and to highlight any agenda items. Many thanks, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Fri May 18 15:57:18 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 18 May 2012 14:57:18 +0000 Subject: [gpfsug-discuss] Stupid GPFS Tricks 2012 - Call for entries Message-ID: <39571EA9316BE44899D59C7A640C13F5305997B6@WARVWEXC1.uk.deluxe-eu.com> [cid:image001.png at 01CD350C.1C9EA0D0] Hello GPFSUG peeps, Have you used GPFS to do something insanely wacky that the sheer craziness of would blow our minds? Perhaps you've done something spectacularly stupid that turned out to be, well, just brilliant. Maybe you used the GPFS API, policies or scripts to create a hack of utter awesomeness. If so, then Stupid GPFS Tricks is for you. The rules: - It must take no longer than 10 minutes to explain your stupid trick. - You must be able to attend the next UG at AWE. - All stupid tricks must be submitted by Aug 31st 2012. Entries should be submitted to secretary at gpfsug.org with the subject "Stupid GPFS Trick". A short description of your trick and any associated Powerpoint/OO/etc. slides is required or point us to a URL. Thanks Jez [This event idea has been shamelessly robbed from the Pixar UG. Thanks folks!] --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16602 bytes Desc: image001.png URL: From crobson at ocf.co.uk Wed May 23 08:53:02 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 23 May 2012 08:53:02 +0100 Subject: [gpfsug-discuss] Crispin Keable article Message-ID: Dear All, An interesting article featuring Crispin Keable (who has previously presented at our user group meetings) was published in The Register yesterday. Crispin talks about the latest GPFS 3.5 update. Read the full article: http://www.theregister.co.uk/2012/05/21/ibm_general_parallel_file_system_3dot5/ Claire Robson GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.raimbach at oerc.ox.ac.uk Thu May 24 11:55:42 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 24 May 2012 10:55:42 +0000 Subject: [gpfsug-discuss] GPFS Question: Will stopping all tie-breaker disks break quorum semantics? Message-ID: Dear GPFS, I have a relatively simple GPFS set-up: Two manager-quorum nodes (primary and secondary configuration nodes) run the cluster with tie-breaker disk quorum semantics. The two manager nodes are SAN attached to 6 x 20TB SATA NSDs (marked as dataOnly), split in to two failure groups so we could create a file system that supported replication. Three of these NSDs are marked as the tie-breaker disks. The metadata is stored on SAS disks located in both manager-quorum nodes (marked as metaDataOnly) and replicated between them. The disk controller subsystem that runs the SATA NSDs requires a reboot, BUT I do not want to shut down GPFS as some critical services are dependent on a small (~12TB) portion of the data. I have added two additional NSD servers to the cluster using some old equipment. These are SAN attached to 10 x 2TB LUNs which is enough to keep the critical data on. I am removing one of the SATA 20TB LUNs from the file system 'system' storage pool on the manager nodes and adding it to another storage pool 'evac-pool' which contains the new 10 x 2TB NSDs. 
Using the policy engine, I want to migrate the file set which contains the critical data to this new storage pool and enable replication of the file set (with the single 20TB NSD in failure group 1 and the 10 x 2TB NSDs in failure group 2). I am expecting to then be able to suspend then stop the 20TB NSD and maintain access to the critical data. This plan is progressing nicely, but I'm not yet at the stage where I can stop the 20TB NSD (I'm waiting for a re-stripe to finish for something else). Does this plan sound plausible so far? I've read the relevant documentation and will run an experiment with stopping the single 20TB NSDs first. However, I thought about a potential problem - the quorum semantics in operation. When I switch off all six 20TB NSDs, the cluster manager-quorum nodes to which they are attached will remain online (to serve the metadata NSDs for the surviving data disks), but all the tiebreaker disks are on the six 20TB NSDs. My question is, will removing access to the tie-breaker disks affect GPFS quorum, or are they only referenced when quorum is lost? I'm running GPFS 3.4.7. Thanks, Luke. -- Luke Raimbach IT Manager Oxford e-Research Centre 7 Keble Road, Oxford, OX1 3QG +44(0)1865 610639 From ghemingtsai at gmail.com Sat May 26 01:10:04 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Fri, 25 May 2012 17:10:04 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Message-ID: Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon May 28 16:55:54 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 28 May 2012 15:55:54 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059CC99@WARVWEXC1.uk.deluxe-eu.com> Hello Grace This is most likely because the file system that you're trying to manage via Space Management isn't configured as such. I.E. the -z flag in mmlsfs http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html Also: This IBM red book should be a good starting point and includes the information you need should you with to setup GPFS drives TSM migration (using THRESHOLD). http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 Suggest you read the red book first and decide which method you'd like. 
Regards, Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Tue May 29 18:39:24 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Tue, 29 May 2012 10:39:24 -0700 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 6 In-Reply-To: References: Message-ID: Hi, Jez, I tried what you suggested with the command: mmchfs -z yes /dev/fs1 and the list output of "mmlsfs" is as follows: -sh-4.1# ./mmlsfs /dev/fs1 flag value description ------------------- ------------------------ ----------------------------------- -f 32768 Minimum fragment size in bytes -i 512 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k all ACL semantics in effect -n 10 Estimated number of nodes that will mount file system -B 1048576 Block size -Q none Quotas enforced none Default quotas enabled --filesetdf no Fileset df enabled? -V 12.10 (3.4.0.7) File system version --create-time Thu Feb 23 16:13:28 2012 File system creation time -u yes Support for large LUNs? -z yes Is DMAPI enabled? -L 4194304 Logfile size -E yes Exact mtime mount option -S no Suppress atime mount option -K whenpossible Strict replica allocation option --fastea yes Fast external attributes enabled? --inode-limit 571392 Maximum number of inodes -P system Disk storage pools in file system -d scratch_DL1;scratch_MDL1 Disks in file system -A no Automatic mount option -o none Additional mount options -T /gpfs_directory1/ Default mount point --mount-priority 0 Mount priority But I still got the error message in dsmsmj from "manage" on /gpfs_directory1 "A conflicting Space Management is already running in the /gpfs_directory1 file system. Please wait until the Space Management process is ready and try" Could you help please? Could you give more suggestions please? Thanks. 
Grace On Tue, May 29, 2012 at 4:00 AM, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at gpfsug.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at gpfsug.org > > You can reach the person managing the list at > gpfsug-discuss-owner at gpfsug.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: Use HSM to backup GPFS - error message: ANS9085E (Jez Tucker) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 28 May 2012 15:55:54 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS - error message: > ANS9085E > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059CC99 at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > Hello Grace > > This is most likely because the file system that you're trying to manage > via Space Management isn't configured as such. > > I.E. the -z flag in mmlsfs > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html > > Also: > > This IBM red book should be a good starting point and includes the > information you need should you with to setup GPFS drives TSM migration > (using THRESHOLD). > > http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 > > Suggest you read the red book first and decide which method you'd like. > > Regards, > > Jez > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 26 May 2012 01:10 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E > > Hi, > > I have a GPFS system verson 3.4, which includes the following two GPFS > file systems with the directories: > > /gpfs_directory1 > /gpfs_directory2 > > I like to use HSM to backup these GPFS files to the tapes in our TSM > server (RHAT 6.2, TSM 6.3). > I run HSM GUI on this GPFS server, the list of the file systems on this > GPFS server is as follows: > > File System State Size(KB) Free(KB) ... > ------------------ > / Not Manageable > /boot Not Manageable > ... > /gpfs_directory1 Not Managed > /gpfs_directory2 Not Managed > > > I click "gpfs_directory1", and click "Manage" > => > I got error: > """ > A conflicting Space Management process is already running in the > /gpfs_directory1 file system. > Please wait until the Space management process is ready and try again. > """ > > The dsmerror.log shows the message: > "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space > management" > > Is there anything on GPFS or HSM or TSM server that I didnt configure > correctly? Please help. Thanks. > > Grace > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120528/b97e39e0/attachment-0001.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 5, Issue 6 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:28:01 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:28:01 +0000 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) Message-ID: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Hello all I've been having a pootle around ye olde Internet in a coffee break and noticed that RDMA over Ethernet exists. http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html Has anyone had any experience of using this? (even outside GPFS) I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / iWARP support. It suddenly occurred to me that I have 10Gb Brocade VDX switches with DCB & PFC and making things go faster is great. Perhaps the HPC crowd do this, but only via IB? Thoughts? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:56:42 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:56:42 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059D745@WARVWEXC1.uk.deluxe-eu.com> On the command line: What's the output of dsmmigfs query -Detail and ps -ef | grep dsm From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at arif-ali.co.uk Wed May 30 14:18:17 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 30 May 2012 14:18:17 +0100 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FC61E19.8060909@arif-ali.co.uk> On 30/05/12 08:28, Jez Tucker wrote: > > Hello all > > I've been having a pootle around ye olde Internet in a coffee break > and noticed that RDMA over Ethernet exists. > > http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet > > http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html > > Has anyone had any experience of using this? (even outside GPFS) > > I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / > iWARP support. > > It suddenly occurred to me that I have 10Gb Brocade VDX switches with > DCB & PFC and making things go faster is great. > > Perhaps the HPC crowd do this, but only via IB? > I did have a look at this about a year ago, and thought it would be great. But never thought people would be interested. and didn't find anything within the GPFS docs or secret configs that indicated that this is supported In most of our setups we do tend to stick with verbs-rdma, and that is where most of our customer's are working with. It would be very interesting to see if it was ever supported, and to see what kind of performance improvement we would get by taking the tcp layer away maybe one of the devs could shed some light on this. -- regards, Arif -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed May 30 15:36:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 30 May 2012 07:36:05 -0700 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows Message-ID: This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 15:58:12 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 14:58:12 +0000 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E2B2@WARVWEXC1.uk.deluxe-eu.com> May I be the first to stick both hands in the air and run round the room screaming WOOOT! Thanks to the dev team for that one. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 30 May 2012 15:36 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Mount DMAPI File system on Windows This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From j.buzzard at dundee.ac.uk Wed May 30 16:55:22 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 30 May 2012 16:55:22 +0100 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <4FC642EA.8050601@dundee.ac.uk> Scott Fadden wrote: > This came up in the user group meeting so I thought I would send this to > the group. > > Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file > systems on GPFS Windows nodes. > Are we absolutely 100% sure on that? I ask because the release notes have contradictory information on this and when I asked in the GPFS forum for clarification the reply was it would be starting with 3.4.0-14 http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=30 JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From ghemingtsai at gmail.com Wed May 30 17:06:15 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:06:15 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Wed May 30 17:09:57 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:09:57 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 
00:00:00 /opt/tivoli/tsm/client/hsm/ bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 17:14:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 16:14:17 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E38F@WARVWEXC1.uk.deluxe-eu.com> So. My hunch from looking at my system here is that you haven't actually told dsm that the filesystem is to be space managed. You do that here: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html Then re-run dsmmigfs query -Detail and hopefully you should see something similar to this: [root at tsm01 ~]# dsmmigfs query -Detail IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 2, Level 4.1 Client date/time: 30-05-2012 17:13:00 (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved. The local node has Node ID: 3 The failover environment is deactivated on the local node. File System Name: /mnt/gpfs High Threshold: 100 Low Threshold: 80 Premig Percentage: 20 Quota: 999999999999999 Stub Size: 0 Server Name: TSM01 Max Candidates: 100 Max Files: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 MinMigFileSize: 0 Preferred Node: tsm01 Node ID: 3 Owner Node: tsm01 Node ID: 3 Source Nodes: tsm01 Then see if your HSM GUI works properly. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 
00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From gtsai at slac.stanford.edu Wed May 30 20:15:55 2012 From: gtsai at slac.stanford.edu (Grace Tsai) Date: Wed, 30 May 2012 12:15:55 -0700 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments Message-ID: <4FC671EB.7060104@slac.stanford.edu> Hi, We are in the process of choosing a permanent file system for our institution, GPFS is one of the three candidates. Could someone help me to give comments or answers to the requests listed in the following. Basically, I need your help to mark 1 or 0 in the GPFS column if a feature either exists or doesnt exist, respectively. Please also add supporting comments if a feature has additional info, e.g., 100PB single namespace file system supported, etc. I answered some of them which I have tested, or got the information from google or manuals. User-Visible Features: ----------------------------- 1. Allows a single namespace (UNIX path) of at least 10s of petabytes in size. (My answer: Current tested: 4PB) 2. Large number of files supported (specify number) per namespace. (My answer: 9*10**9) 3. Supports POSIX-like mount points (i.e., looks like NFS) (My answer: 1) 4. File system supports file level access control lists (ACLs) (My answer: 1) 5. File system supports directory level ACLs, e.g., like AFS. (My answer: 1) 6. Disk quotas that users can query. (My answer: 1) 7. Disk quotas based on UID (My answer: 1) 8. Disk quotas based on GID (My answer: 1) 9. Disk quotas based on directory. 10. User-accessible snapshots Groupadmin-Visible Features --------------------------------------- 1. Group access (capabilities) similar to AFS pts groups. (My answer: 1) 2. Group access administration (create/delete/modify) that can be delegated to the groups themselves. (My answer: 1) 3. High limit (1000s) on the number of users per group 4. High limit (100s) on the number of groups a user can belong to. (My answer: 1) 5. Nesting groups within groups is permitted. 6. Groups are equal partners with users in terms of access control lists. 7. Group managers can adjust disk quotas 8. Group managers can create/delete/modify user spaces. 9. Group managers can create/delete/modify group spaces. Sysadmin-Visible Features ----------------------------------- 1. Namespace is expandable and shrinkable without file system downtime. (My answer: 1) 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some type of filtering, without manual user intervention (Data life-cycle management) 3. User can provide manual "hints" on where to place files based on usage requirements. 4. 
Allows resource-configurable logical relocation or actual migration of data without user downtime (Hardware life-cycle management/patching/maintenance) 5. Product has been shipping in production for a minimum of 2 years, nice to have at least 5 years. Must be comfortable with the track record. (My answer: 1 ) 6. Product has at least two commercial companies providing support. 7. Distributed metadata (or equivalent) to remove obvious file system bottlenecks. (My Answer: 1) 8. File system supports data integrity checking (e.g., ZFS checksumming) (My answer: 1) 9. Customized levels of data redundancy at the file/subdirectory/partition layer, based on user requirements. Replication. Load-balancing. 10. Management software fully spoorts command line interface (CLI) 10. Management software supports a graphical user interface (GUI) 11. Must run on non-proprietary x86/x64 hardware (Note: this might eliminate some proprietary solutions that meet every other requirement.) 12. Software provides tools to measure performance and monitor problems. (My answer: 1) 13. Robust and reliable: file system must recover gracefully from an unscheduled power outage, and not take forever for fsck. 14. Client code must support RHEL. (My answer: 1) 15. Client code must support RHEL compatible OS. (My answer: 1) 16. Client code must support Linux. (My answer: 1) 17. Client code must support Windows. (My answer: 1) 18. Affordable 19. Value for the money. 20. Provides native accounting information to support a storage service model. 21. Ability to change file owner throughout file system (generalized ability to implement metadata changes) 22. Allows discrete resource allocation in case groups want physical resource separation, yet still allows central management. Resource allocation might control bandwidth, LUNx, CPU, user/subdir/filesystem quotas, etc. 23. Built-in file system compression option 24. Built-in file-level replication option 25. Built-in file system deduplication option 26. Built-in file system encryption option 27. Support VM image movement among storage servers, including moving entire jobs (hypervisor requirement) 28. Security/authentication of local user to allow access (something stronger than host-based access) 29. WAN-based file system (e.g., for disaster recover site) 30. Must be able to access filesystem via NFSv3 (My answer: 1) 31. Can perform OPTIONAL file system rebalancing when adding new storage. 32. Protection from accidental, large scale deletions 33. Ability to transfer snapshots among hosts. 34. Ability to promote snapshot to read/write partition 35. Consideration given to number of metadata servers required to support overall service, and how that affects HA, i.e., must be able to support HA on a per namespace basis . (How many MD servers would we need to keep file service running?) 36. Consideration given to backup and restore capabilities and compatible hardware/software products. Look at timeframe requirements. (What backup solutions does it recommend?) 37. Need to specify how any given file system is not POSIX-compliant so we understand it. Make this info available to users. (What are its POSIX shortcomings?) 
From j.buzzard at dundee.ac.uk Thu May 31 00:39:32 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 31 May 2012 00:39:32 +0100 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <4FC671EB.7060104@slac.stanford.edu> References: <4FC671EB.7060104@slac.stanford.edu> Message-ID: <4FC6AFB4.6040300@dundee.ac.uk> Grace Tsai wrote: I am not sure who dreamed up this list but I will use two of the points to illustrate why it is bizarre. [SNIP] > 5. Nesting groups within groups is permitted. > Has absolutely nothing whatsoever to do with any file system under Linux that I am aware of. So for example traditionally under Linux you don't have nested groups. However if you are running against Active Directory with winbind you can. This is however independent of any file system you are running. [SNIP] > > 35. Consideration given to number of metadata servers required to > support overall service, and how that affects HA, i.e., > must be able to support HA on a per namespace basis . (How many MD > servers would we need to keep file service running?) > This for example would suggest that whoever drew up the list has a particular idea about how clustered file systems work that simply does not apply to GPFS; there are no metadata servers in GPFS There are lots of other points that just don't make sense to me as a storage administrator. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu May 31 09:32:24 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 31 May 2012 08:32:24 +0000 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <4FC671EB.7060104@slac.stanford.edu> References: <4FC671EB.7060104@slac.stanford.edu> Message-ID: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> Hello Grace, I've cribbed out the questions you've already answered. Though, I think these should be best directed to IBM pre-sales tech to qualify them. Regards, Jez > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 30 May 2012 20:16 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments > > Hi, > > We are in the process of choosing a permanent file system for our > institution, GPFS is one of the three candidates. Could someone help me to > give comments or answers to the requests listed in the following. Basically, > I need your help to mark 1 or 0 in the GPFS column if a feature either exists > or doesnt exist, respectively. Please also add supporting comments if a > feature has additional info, e.g., 100PB single namespace file system > supported, etc. > I answered some of them which I have tested, or got the information from > google or manuals. > > > 9. Disk quotas based on directory. = 1 (per directory based on filesets which is a 'hard linked' directory to a storage pool via placement rules.) Max filesets is 10000 in 3.5. > Groupadmin-Visible Features > --------------------------------------- > 5. Nesting groups within groups is permitted. > > 6. Groups are equal partners with users in terms of access control lists. GPFS supports POSIX and NFS v4 ACLS (which are not quite the same as Windows ACLs) > 7. Group managers can adjust disk quotas > > 8. Group managers can create/delete/modify user spaces. > > 9. 
Group managers can create/delete/modify group spaces. .. .paraphrase... users with admin privs (root / sudoers) can adjust things. How you organise your user & group administration is up to you. This is external to GPFS. > Sysadmin-Visible Features > ----------------------------------- > > 1. Namespace is expandable and shrinkable without file system downtime. > (My answer: 1) > > 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some > type of filtering, without manual user intervention (Data life-cycle > management) = 1 . You can do this with GPFS policies and THRESHOLDS. Or look at IBM's V7000 Easy Tier. > 3. User can provide manual "hints" on where to place files based on usage > requirements. Do you mean the user is prompted, when you write a file? If so, then no. Though there is an API, so you could integrate that functionality if required, and your application defers to your GPFS API program before writes. I suggest user education is far simpler and cheaper to maintain. If you need prompts, your workflow is inefficient. It should be transparent to the user. > 4. Allows resource-configurable logical relocation or actual migration of data > without user downtime (Hardware life-cycle > management/patching/maintenance) = 1 > 6. Product has at least two commercial companies providing support. =1 Many companies provide OEM GPFS support. Though at some point this may be backed off to IBM if a problem requires development teams. > 9. Customized levels of data redundancy at the file/subdirectory/partition > layer, based on user requirements. > Replication. Load-balancing. =1 > 10. Management software fully spoorts command line interface (CLI) =1 > 10. Management software supports a graphical user interface (GUI) =1 , if you buy IBM's SONAS. Presume that v7000 has something also. > 11. Must run on non-proprietary x86/x64 hardware (Note: this might > eliminate some proprietary solutions that meet every other requirement.) =1 > 13. Robust and reliable: file system must recover gracefully from an > unscheduled power outage, and not take forever for fsck. =1. I've been through this personally. All good. All cluster nodes can participate in fsck. (Actually one of our Qlogic switches spat badness to two of our storage units which caused both units to simultaneously soft-reboot. Apparently the Qlogic firmware couldn't handle the amount of data we transfer a day in an internal counter. Needless to say, new firmware was required.) > 14. Client code must support RHEL. > (My answer: 1) > > 18. Affordable > > 19. Value for the money. Both above points are arguable. Nobody knows your budget. That said, it's cheaper to buy a GPFS system than an Isilon system of similar spec (I have both - and we're just about to switch off the Isilon due to running and expansion costs). Stornext is just too much management overhead and constant de-fragging. > 20. Provides native accounting information to support a storage service > model. What does 'Storage service model mean?' Chargeback per GB / user? If so, then you can write a list policy to obtain this information or use fileset quota accounting. > 21. Ability to change file owner throughout file system (generalized ability > to implement metadata changes) =1. You'd run a policy to do this. > 22. Allows discrete resource allocation in case groups want physical > resource separation, yet still allows central management. > Resource allocation might control bandwidth, LUNx, CPU, > user/subdir/filesystem quotas, etc. = 0.5. 
Max bandwidth you can control. You can't set a min. CPU is irrelevant. > 23. Built-in file system compression option No. Perhaps you could use TSM as an external storage pool and de-dupe to VTL ? If you backend that to tape, remember it will un-dupe as it writes to tape. > 24. Built-in file-level replication option =1 > 25. Built-in file system deduplication option =0 . I think. > 26. Built-in file system encryption option =1, if you buy IBM storage with on disk encryption. I.E. the disk is encrypted and is unreadable if removed, but the actual file system itself is not. > 27. Support VM image movement among storage servers, including moving > entire jobs (hypervisor requirement) That's a huge scope. Check your choice of VM requirements. GPFS is just a file system. > 28. Security/authentication of local user to allow access (something stronger > than host-based access) No. Unless you chkconfig the GPFS start scripts off and then have the user authenticate to be abel to start the script which mounts GPFS. > 29. WAN-based file system (e.g., for disaster recover site) =1 > 31. Can perform OPTIONAL file system rebalancing when adding new > storage. =1 > 32. Protection from accidental, large scale deletions =1 via snapshots. Though that's retrospective. No system is idiot proof. > 33. Ability to transfer snapshots among hosts. Unknown. All hosts in GPFS would see the snapshot. Transfer to a different GPFS cluster for DR, er, not quite sure. > 34. Ability to promote snapshot to read/write partition In what context does 'promote' mean? > 35. Consideration given to number of metadata servers required to support > overall service, and how that affects HA, i.e., > must be able to support HA on a per namespace basis . (How many MD > servers would we need to keep file service running?) 2 dedicated NSD servers for all namespaces is a good setup. Though, metadata is shared between all nodes. > 36. Consideration given to backup and restore capabilities and compatible > hardware/software products. Look at timeframe requirements. > (What backup solutions does it recommend?) I rather like TSM. Not tried HPSS. > 37. Need to specify how any given file system is not POSIX-compliant so we > understand it. Make this info available to users. > (What are its POSIX shortcomings?) GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility shortcomings. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From luke.raimbach at oerc.ox.ac.uk Thu May 31 12:41:49 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 31 May 2012 11:41:49 +0000 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> References: <4FC671EB.7060104@slac.stanford.edu> <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> Message-ID: Hi Jez, >> 27. Support VM image movement among storage servers, including moving >> entire jobs (hypervisor requirement) > That's a huge scope. Check your choice of VM requirements. GPFS is just a file system. This works very nicely with VMware - we run our datastores from the cNFS exports of the file system. Putting the VM disks in a file-set allowed us to re-stripe the file-set, replicating it on to spare hardware in order to take down our main storage system for a firmware upgrade. 
The ESXi hosts didn't even flinch when we stopped the disks in the main file system! > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 31 May 2012 09:32 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS Evaluation List - Please give some > comments > > Hello Grace, > > I've cribbed out the questions you've already answered. > Though, I think these should be best directed to IBM pre-sales tech to qualify > them. > > Regards, > > Jez > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Grace Tsai > > Sent: 30 May 2012 20:16 > > To: gpfsug-discuss at gpfsug.org > > Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some > > comments > > > > Hi, > > > > We are in the process of choosing a permanent file system for our > > institution, GPFS is one of the three candidates. Could someone help > > me to give comments or answers to the requests listed in the > > following. Basically, I need your help to mark 1 or 0 in the GPFS > > column if a feature either exists or doesnt exist, respectively. > > Please also add supporting comments if a feature has additional info, > > e.g., 100PB single namespace file system supported, etc. > > I answered some of them which I have tested, or got the information > > from google or manuals. > > > > > > 9. Disk quotas based on directory. > > = 1 (per directory based on filesets which is a 'hard linked' directory to a > storage pool via placement rules.) Max filesets is 10000 in 3.5. > > > > Groupadmin-Visible Features > > --------------------------------------- > > > 5. Nesting groups within groups is permitted. > > > > 6. Groups are equal partners with users in terms of access control lists. > > GPFS supports POSIX and NFS v4 ACLS (which are not quite the same as > Windows ACLs) > > > 7. Group managers can adjust disk quotas > > > > 8. Group managers can create/delete/modify user spaces. > > > > 9. Group managers can create/delete/modify group spaces. > > .. .paraphrase... users with admin privs (root / sudoers) can adjust things. > How you organise your user & group administration is up to you. This is > external to GPFS. > > > > Sysadmin-Visible Features > > ----------------------------------- > > > > 1. Namespace is expandable and shrinkable without file system downtime. > > (My answer: 1) > > > > 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) > > via some type of filtering, without manual user intervention (Data > > life-cycle > > management) > > = 1 . You can do this with GPFS policies and THRESHOLDS. Or look at IBM's > V7000 Easy Tier. > > > 3. User can provide manual "hints" on where to place files based on > > usage requirements. > > Do you mean the user is prompted, when you write a file? If so, then no. > Though there is an API, so you could integrate that functionality if required, > and your application defers to your GPFS API program before writes. I > suggest user education is far simpler and cheaper to maintain. If you need > prompts, your workflow is inefficient. It should be transparent to the user. > > > 4. Allows resource-configurable logical relocation or actual migration > > of data without user downtime (Hardware life-cycle > > management/patching/maintenance) > > = 1 > > > 6. Product has at least two commercial companies providing support. > > =1 Many companies provide OEM GPFS support. 
Though at some point this > may be backed off to IBM if a problem requires development teams. > > > 9. Customized levels of data redundancy at the > > file/subdirectory/partition layer, based on user requirements. > > Replication. Load-balancing. > > =1 > > > 10. Management software fully spoorts command line interface (CLI) > > =1 > > > > 10. Management software supports a graphical user interface (GUI) > > =1 , if you buy IBM's SONAS. Presume that v7000 has something also. > > > 11. Must run on non-proprietary x86/x64 hardware (Note: this might > > eliminate some proprietary solutions that meet every other > > requirement.) > > =1 > > > 13. Robust and reliable: file system must recover gracefully from an > > unscheduled power outage, and not take forever for fsck. > > =1. I've been through this personally. All good. All cluster nodes can > participate in fsck. > (Actually one of our Qlogic switches spat badness to two of our storage units > which caused both units to simultaneously soft-reboot. Apparently the > Qlogic firmware couldn't handle the amount of data we transfer a day in an > internal counter. Needless to say, new firmware was required.) > > > 14. Client code must support RHEL. > > (My answer: 1) > > > > > 18. Affordable > > > > 19. Value for the money. > > Both above points are arguable. Nobody knows your budget. > That said, it's cheaper to buy a GPFS system than an Isilon system of similar > spec (I have both - and we're just about to switch off the Isilon due to > running and expansion costs). Stornext is just too much management > overhead and constant de-fragging. > > > 20. Provides native accounting information to support a storage > > service model. > > What does 'Storage service model mean?' Chargeback per GB / user? > If so, then you can write a list policy to obtain this information or use fileset > quota accounting. > > > 21. Ability to change file owner throughout file system (generalized > > ability to implement metadata changes) > > =1. You'd run a policy to do this. > > > 22. Allows discrete resource allocation in case groups want physical > > resource separation, yet still allows central management. > > Resource allocation might control bandwidth, LUNx, CPU, > > user/subdir/filesystem quotas, etc. > > = 0.5. Max bandwidth you can control. You can't set a min. CPU is > irrelevant. > > > 23. Built-in file system compression option > > No. Perhaps you could use TSM as an external storage pool and de-dupe to > VTL ? If you backend that to tape, remember it will un-dupe as it writes to > tape. > > > 24. Built-in file-level replication option > > =1 > > > 25. Built-in file system deduplication option > > =0 . I think. > > > 26. Built-in file system encryption option > > =1, if you buy IBM storage with on disk encryption. I.E. the disk is encrypted > and is unreadable if removed, but the actual file system itself is not. > > > 27. Support VM image movement among storage servers, including moving > > entire jobs (hypervisor requirement) > > That's a huge scope. Check your choice of VM requirements. GPFS is just a > file system. > > > 28. Security/authentication of local user to allow access (something > > stronger than host-based access) > > No. Unless you chkconfig the GPFS start scripts off and then have the user > authenticate to be abel to start the script which mounts GPFS. > > > 29. WAN-based file system (e.g., for disaster recover site) > > =1 > > > 31. Can perform OPTIONAL file system rebalancing when adding new > > storage. > > =1 > > > 32. 
Protection from accidental, large scale deletions > > =1 via snapshots. Though that's retrospective. No system is idiot proof. > > > 33. Ability to transfer snapshots among hosts. > > Unknown. All hosts in GPFS would see the snapshot. Transfer to a different > GPFS cluster for DR, er, not quite sure. > > > 34. Ability to promote snapshot to read/write partition > > In what context does 'promote' mean? > > > 35. Consideration given to number of metadata servers required to > > support overall service, and how that affects HA, i.e., > > must be able to support HA on a per namespace basis . (How many > > MD servers would we need to keep file service running?) > > 2 dedicated NSD servers for all namespaces is a good setup. Though, > metadata is shared between all nodes. > > > 36. Consideration given to backup and restore capabilities and > > compatible hardware/software products. Look at timeframe requirements. > > (What backup solutions does it recommend?) > > I rather like TSM. Not tried HPSS. > > > 37. Need to specify how any given file system is not POSIX-compliant > > so we understand it. Make this info available to users. > > (What are its POSIX shortcomings?) > > GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility > shortcomings. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
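(As a rough illustration of the file-set re-stripe Luke describes above: the file system name, datastore path and replication factor below are examples only, and this assumes the file system was created with a maximum data replica count of at least 2 and that the spare hardware's NSDs sit in their own failure group.

find /gpfs/vmstore -type f -print0 | xargs -0 mmchattr -r 2 -I defer
mmrestripefs gpfs01 -r

The first line marks every file in the datastore file-set for two data replicas without moving any data; the restripe then writes the second copy out to the other failure group in one pass. Worth checking the mmchattr and mmrestripefs man pages for your release before leaning on this.)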
From j.buzzard at dundee.ac.uk Thu May 10 09:54:46 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 10 May 2012 09:54:46 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> References: <4FAA918D.50101@dundee.ac.uk> <4FAAF674.9070809@dundee.ac.uk> <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FAB8256.5010409@dundee.ac.uk> On 10/05/12 08:39, Jez Tucker wrote: > If you're on 3.4.0-13, can you confirm the operation of DMAPI Windows mounts. > Any serious issues? Yes I can confirm that it does not work. The documentation is/was all wrong. See this thread in the GPFS forums. http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=15 Basically you need to wait for 3.4.0-14 or jump to 3.5.0-1 :-) I have however noticed that the per fileset quotas seem to be fully functional on 3.4.0-13, turn them on with mmchfs test --perfileset-quota and off with mmchfs test --noperfileset-quota set a quota for user nemo on the homes fileset with mmedquota -u test:homes:nemo or if you prefer the command line rather than messing with an editor mmsetquota -u nemo -h 25G /test/homes JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From crobson at ocf.co.uk Fri May 18 12:15:50 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Fri, 18 May 2012 12:15:50 +0100 Subject: [gpfsug-discuss] A date for your diaries Message-ID: Dear All, The next GPFS user group meeting will take place on Thursday 20th September. Paul Tomlinson, AWE, has kindly offered to host and the meeting will take place at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Agenda to follow soon. Please contact me to register your place and to highlight any agenda items. Many thanks, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri May 18 15:57:18 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 18 May 2012 14:57:18 +0000 Subject: [gpfsug-discuss] Stupid GPFS Tricks 2012 - Call for entries Message-ID: <39571EA9316BE44899D59C7A640C13F5305997B6@WARVWEXC1.uk.deluxe-eu.com> [cid:image001.png at 01CD350C.1C9EA0D0] Hello GPFSUG peeps, Have you used GPFS to do something insanely wacky that the sheer craziness of which would blow our minds? Perhaps you've done something spectacularly stupid that turned out to be, well, just brilliant. Maybe you used the GPFS API, policies or scripts to create a hack of utter awesomeness. If so, then Stupid GPFS Tricks is for you. The rules: - It must take no longer than 10 minutes to explain your stupid trick. - You must be able to attend the next UG at AWE.
- All stupid tricks must be submitted by Aug 31st 2012. Entries should be submitted to secretary at gpfsug.org with the subject "Stupid GPFS Trick". A short description of your trick and any associated Powerpoint/OO/etc. slides is required or point us to a URL. Thanks Jez [This event idea has been shamelessly robbed from the Pixar UG. Thanks folks!] --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16602 bytes Desc: image001.png URL: From crobson at ocf.co.uk Wed May 23 08:53:02 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 23 May 2012 08:53:02 +0100 Subject: [gpfsug-discuss] Crispin Keable article Message-ID: Dear All, An interesting article featuring Crispin Keable (who has previously presented at our user group meetings) was published in The Register yesterday. Crispin talks about the latest GPFS 3.5 update. Read the full article: http://www.theregister.co.uk/2012/05/21/ibm_general_parallel_file_system_3dot5/ Claire Robson GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.raimbach at oerc.ox.ac.uk Thu May 24 11:55:42 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 24 May 2012 10:55:42 +0000 Subject: [gpfsug-discuss] GPFS Question: Will stopping all tie-breaker disks break quorum semantics? Message-ID: Dear GPFS, I have a relatively simple GPFS set-up: Two manager-quorum nodes (primary and secondary configuration nodes) run the cluster with tie-breaker disk quorum semantics. The two manager nodes are SAN attached to 6 x 20TB SATA NSDs (marked as dataOnly), split in to two failure groups so we could create a file system that supported replication. Three of these NSDs are marked as the tie-breaker disks. The metadata is stored on SAS disks located in both manager-quorum nodes (marked as metaDataOnly) and replicated between them. The disk controller subsystem that runs the SATA NSDs requires a reboot, BUT I do not want to shut down GPFS as some critical services are dependent on a small (~12TB) portion of the data. I have added two additional NSD servers to the cluster using some old equipment. These are SAN attached to 10 x 2TB LUNs which is enough to keep the critical data on. I am removing one of the SATA 20TB LUNs from the file system 'system' storage pool on the manager nodes and adding it to another storage pool 'evac-pool' which contains the new 10 x 2TB NSDs. Using the policy engine, I want to migrate the file set which contains the critical data to this new storage pool and enable replication of the file set (with the single 20TB NSD in failure group 1 and the 10 x 2TB NSDs in failure group 2). I am expecting to then be able to suspend then stop the 20TB NSD and maintain access to the critical data. This plan is progressing nicely, but I'm not yet at the stage where I can stop the 20TB NSD (I'm waiting for a re-stripe to finish for something else). Does this plan sound plausible so far? I've read the relevant documentation and will run an experiment with stopping the single 20TB NSDs first. However, I thought about a potential problem - the quorum semantics in operation. 
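(For reference, the policy-engine migration described above could be expressed as a single rule - the fileset name 'critical' is a placeholder rather than the real one:

RULE 'evac' MIGRATE FROM POOL 'system' TO POOL 'evac-pool' REPLICATE(2) FOR FILESET ('critical')

applied with something like mmapplypolicy <device> -P evac.pol -I yes. With the 20TB NSD in failure group 1 and the ten 2TB NSDs in failure group 2, REPLICATE(2) should put one copy of the data in each group, which is what the rest of the plan relies on.)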
When I switch off all six 20TB NSDs, the cluster manager-quorum nodes to which they are attached will remain online (to serve the metadata NSDs for the surviving data disks), but all the tiebreaker disks are on the six 20TB NSDs. My question is, will removing access to the tie-breaker disks affect GPFS quorum, or are they only referenced when quorum is lost? I'm running GPFS 3.4.7. Thanks, Luke. -- Luke Raimbach IT Manager Oxford e-Research Centre 7 Keble Road, Oxford, OX1 3QG +44(0)1865 610639 From ghemingtsai at gmail.com Sat May 26 01:10:04 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Fri, 25 May 2012 17:10:04 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Message-ID: Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon May 28 16:55:54 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 28 May 2012 15:55:54 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059CC99@WARVWEXC1.uk.deluxe-eu.com> Hello Grace This is most likely because the file system that you're trying to manage via Space Management isn't configured as such. I.E. the -z flag in mmlsfs http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html Also: This IBM red book should be a good starting point and includes the information you need should you with to setup GPFS drives TSM migration (using THRESHOLD). http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 Suggest you read the red book first and decide which method you'd like. Regards, Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... 
/gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Tue May 29 18:39:24 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Tue, 29 May 2012 10:39:24 -0700 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 6 In-Reply-To: References: Message-ID: Hi, Jez, I tried what you suggested with the command: mmchfs -z yes /dev/fs1 and the list output of "mmlsfs" is as follows: -sh-4.1# ./mmlsfs /dev/fs1 flag value description ------------------- ------------------------ ----------------------------------- -f 32768 Minimum fragment size in bytes -i 512 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k all ACL semantics in effect -n 10 Estimated number of nodes that will mount file system -B 1048576 Block size -Q none Quotas enforced none Default quotas enabled --filesetdf no Fileset df enabled? -V 12.10 (3.4.0.7) File system version --create-time Thu Feb 23 16:13:28 2012 File system creation time -u yes Support for large LUNs? -z yes Is DMAPI enabled? -L 4194304 Logfile size -E yes Exact mtime mount option -S no Suppress atime mount option -K whenpossible Strict replica allocation option --fastea yes Fast external attributes enabled? --inode-limit 571392 Maximum number of inodes -P system Disk storage pools in file system -d scratch_DL1;scratch_MDL1 Disks in file system -A no Automatic mount option -o none Additional mount options -T /gpfs_directory1/ Default mount point --mount-priority 0 Mount priority But I still got the error message in dsmsmj from "manage" on /gpfs_directory1 "A conflicting Space Management is already running in the /gpfs_directory1 file system. Please wait until the Space Management process is ready and try" Could you help please? Could you give more suggestions please? Thanks. Grace On Tue, May 29, 2012 at 4:00 AM, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at gpfsug.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at gpfsug.org > > You can reach the person managing the list at > gpfsug-discuss-owner at gpfsug.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. 
Re: Use HSM to backup GPFS - error message: ANS9085E (Jez Tucker) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 28 May 2012 15:55:54 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS - error message: > ANS9085E > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059CC99 at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > Hello Grace > > This is most likely because the file system that you're trying to manage > via Space Management isn't configured as such. > > I.E. the -z flag in mmlsfs > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html > > Also: > > This IBM red book should be a good starting point and includes the > information you need should you with to setup GPFS drives TSM migration > (using THRESHOLD). > > http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 > > Suggest you read the red book first and decide which method you'd like. > > Regards, > > Jez > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 26 May 2012 01:10 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E > > Hi, > > I have a GPFS system verson 3.4, which includes the following two GPFS > file systems with the directories: > > /gpfs_directory1 > /gpfs_directory2 > > I like to use HSM to backup these GPFS files to the tapes in our TSM > server (RHAT 6.2, TSM 6.3). > I run HSM GUI on this GPFS server, the list of the file systems on this > GPFS server is as follows: > > File System State Size(KB) Free(KB) ... > ------------------ > / Not Manageable > /boot Not Manageable > ... > /gpfs_directory1 Not Managed > /gpfs_directory2 Not Managed > > > I click "gpfs_directory1", and click "Manage" > => > I got error: > """ > A conflicting Space Management process is already running in the > /gpfs_directory1 file system. > Please wait until the Space management process is ready and try again. > """ > > The dsmerror.log shows the message: > "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space > management" > > Is there anything on GPFS or HSM or TSM server that I didnt configure > correctly? Please help. Thanks. > > Grace > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120528/b97e39e0/attachment-0001.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 5, Issue 6 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:28:01 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:28:01 +0000 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) Message-ID: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Hello all I've been having a pootle around ye olde Internet in a coffee break and noticed that RDMA over Ethernet exists. 
http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html Has anyone had any experience of using this? (even outside GPFS) I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / iWARP support. It suddenly occurred to me that I have 10Gb Brocade VDX switches with DCB & PFC and making things go faster is great. Perhaps the HPC crowd do this, but only via IB? Thoughts? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:56:42 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:56:42 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059D745@WARVWEXC1.uk.deluxe-eu.com> On the command line: What's the output of dsmmigfs query -Detail and ps -ef | grep dsm From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at arif-ali.co.uk Wed May 30 14:18:17 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 30 May 2012 14:18:17 +0100 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FC61E19.8060909@arif-ali.co.uk> On 30/05/12 08:28, Jez Tucker wrote: > > Hello all > > I've been having a pootle around ye olde Internet in a coffee break > and noticed that RDMA over Ethernet exists. > > http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet > > http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html > > Has anyone had any experience of using this? (even outside GPFS) > > I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / > iWARP support. > > It suddenly occurred to me that I have 10Gb Brocade VDX switches with > DCB & PFC and making things go faster is great. > > Perhaps the HPC crowd do this, but only via IB? > I did have a look at this about a year ago, and thought it would be great. But never thought people would be interested. 
and didn't find anything within the GPFS docs or secret configs that indicated that this is supported In most of our setups we do tend to stick with verbs-rdma, and that is where most of our customer's are working with. It would be very interesting to see if it was ever supported, and to see what kind of performance improvement we would get by taking the tcp layer away maybe one of the devs could shed some light on this. -- regards, Arif -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed May 30 15:36:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 30 May 2012 07:36:05 -0700 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows Message-ID: This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 15:58:12 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 14:58:12 +0000 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E2B2@WARVWEXC1.uk.deluxe-eu.com> May I be the first to stick both hands in the air and run round the room screaming WOOOT! Thanks to the dev team for that one. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 30 May 2012 15:36 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Mount DMAPI File system on Windows This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.buzzard at dundee.ac.uk Wed May 30 16:55:22 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 30 May 2012 16:55:22 +0100 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <4FC642EA.8050601@dundee.ac.uk> Scott Fadden wrote: > This came up in the user group meeting so I thought I would send this to > the group. > > Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file > systems on GPFS Windows nodes. > Are we absolutely 100% sure on that? I ask because the release notes have contradictory information on this and when I asked in the GPFS forum for clarification the reply was it would be starting with 3.4.0-14 http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=30 JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From ghemingtsai at gmail.com Wed May 30 17:06:15 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:06:15 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". 
On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Wed May 30 17:09:57 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:09:57 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/ bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Wed May 30 17:14:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 16:14:17 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E38F@WARVWEXC1.uk.deluxe-eu.com> So. My hunch from looking at my system here is that you haven't actually told dsm that the filesystem is to be space managed. You do that here: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html Then re-run dsmmigfs query -Detail and hopefully you should see something similar to this: [root at tsm01 ~]# dsmmigfs query -Detail IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 2, Level 4.1 Client date/time: 30-05-2012 17:13:00 (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved. The local node has Node ID: 3 The failover environment is deactivated on the local node. File System Name: /mnt/gpfs High Threshold: 100 Low Threshold: 80 Premig Percentage: 20 Quota: 999999999999999 Stub Size: 0 Server Name: TSM01 Max Candidates: 100 Max Files: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 MinMigFileSize: 0 Preferred Node: tsm01 Node ID: 3 Owner Node: tsm01 Node ID: 3 Source Nodes: tsm01 Then see if your HSM GUI works properly. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gtsai at slac.stanford.edu Wed May 30 20:15:55 2012 From: gtsai at slac.stanford.edu (Grace Tsai) Date: Wed, 30 May 2012 12:15:55 -0700 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments Message-ID: <4FC671EB.7060104@slac.stanford.edu> Hi, We are in the process of choosing a permanent file system for our institution, GPFS is one of the three candidates. Could someone help me to give comments or answers to the requests listed in the following. Basically, I need your help to mark 1 or 0 in the GPFS column if a feature either exists or doesnt exist, respectively. Please also add supporting comments if a feature has additional info, e.g., 100PB single namespace file system supported, etc. I answered some of them which I have tested, or got the information from google or manuals. User-Visible Features: ----------------------------- 1. Allows a single namespace (UNIX path) of at least 10s of petabytes in size. (My answer: Current tested: 4PB) 2. Large number of files supported (specify number) per namespace. (My answer: 9*10**9) 3. Supports POSIX-like mount points (i.e., looks like NFS) (My answer: 1) 4. File system supports file level access control lists (ACLs) (My answer: 1) 5. File system supports directory level ACLs, e.g., like AFS. (My answer: 1) 6. Disk quotas that users can query. (My answer: 1) 7. Disk quotas based on UID (My answer: 1) 8. Disk quotas based on GID (My answer: 1) 9. Disk quotas based on directory. 10. User-accessible snapshots Groupadmin-Visible Features --------------------------------------- 1. Group access (capabilities) similar to AFS pts groups. (My answer: 1) 2. Group access administration (create/delete/modify) that can be delegated to the groups themselves. (My answer: 1) 3. High limit (1000s) on the number of users per group 4. High limit (100s) on the number of groups a user can belong to. (My answer: 1) 5. Nesting groups within groups is permitted. 6. Groups are equal partners with users in terms of access control lists. 7. Group managers can adjust disk quotas 8. Group managers can create/delete/modify user spaces. 9. Group managers can create/delete/modify group spaces. Sysadmin-Visible Features ----------------------------------- 1. Namespace is expandable and shrinkable without file system downtime. (My answer: 1) 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some type of filtering, without manual user intervention (Data life-cycle management) 3. User can provide manual "hints" on where to place files based on usage requirements. 4. Allows resource-configurable logical relocation or actual migration of data without user downtime (Hardware life-cycle management/patching/maintenance) 5. Product has been shipping in production for a minimum of 2 years, nice to have at least 5 years. Must be comfortable with the track record. (My answer: 1 ) 6. Product has at least two commercial companies providing support. 7. Distributed metadata (or equivalent) to remove obvious file system bottlenecks. (My Answer: 1) 8. File system supports data integrity checking (e.g., ZFS checksumming) (My answer: 1) 9. Customized levels of data redundancy at the file/subdirectory/partition layer, based on user requirements. Replication. Load-balancing. 10. Management software fully spoorts command line interface (CLI) 10. Management software supports a graphical user interface (GUI) 11. Must run on non-proprietary x86/x64 hardware (Note: this might eliminate some proprietary solutions that meet every other requirement.) 12. 
Software provides tools to measure performance and monitor problems. (My answer: 1) 13. Robust and reliable: file system must recover gracefully from an unscheduled power outage, and not take forever for fsck. 14. Client code must support RHEL. (My answer: 1) 15. Client code must support RHEL compatible OS. (My answer: 1) 16. Client code must support Linux. (My answer: 1) 17. Client code must support Windows. (My answer: 1) 18. Affordable 19. Value for the money. 20. Provides native accounting information to support a storage service model. 21. Ability to change file owner throughout file system (generalized ability to implement metadata changes) 22. Allows discrete resource allocation in case groups want physical resource separation, yet still allows central management. Resource allocation might control bandwidth, LUNx, CPU, user/subdir/filesystem quotas, etc. 23. Built-in file system compression option 24. Built-in file-level replication option 25. Built-in file system deduplication option 26. Built-in file system encryption option 27. Support VM image movement among storage servers, including moving entire jobs (hypervisor requirement) 28. Security/authentication of local user to allow access (something stronger than host-based access) 29. WAN-based file system (e.g., for disaster recover site) 30. Must be able to access filesystem via NFSv3 (My answer: 1) 31. Can perform OPTIONAL file system rebalancing when adding new storage. 32. Protection from accidental, large scale deletions 33. Ability to transfer snapshots among hosts. 34. Ability to promote snapshot to read/write partition 35. Consideration given to number of metadata servers required to support overall service, and how that affects HA, i.e., must be able to support HA on a per namespace basis . (How many MD servers would we need to keep file service running?) 36. Consideration given to backup and restore capabilities and compatible hardware/software products. Look at timeframe requirements. (What backup solutions does it recommend?) 37. Need to specify how any given file system is not POSIX-compliant so we understand it. Make this info available to users. (What are its POSIX shortcomings?) From j.buzzard at dundee.ac.uk Thu May 31 00:39:32 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 31 May 2012 00:39:32 +0100 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <4FC671EB.7060104@slac.stanford.edu> References: <4FC671EB.7060104@slac.stanford.edu> Message-ID: <4FC6AFB4.6040300@dundee.ac.uk> Grace Tsai wrote: I am not sure who dreamed up this list but I will use two of the points to illustrate why it is bizarre. [SNIP] > 5. Nesting groups within groups is permitted. > Has absolutely nothing whatsoever to do with any file system under Linux that I am aware of. So for example traditionally under Linux you don't have nested groups. However if you are running against Active Directory with winbind you can. This is however independent of any file system you are running. [SNIP] > > 35. Consideration given to number of metadata servers required to > support overall service, and how that affects HA, i.e., > must be able to support HA on a per namespace basis . (How many MD > servers would we need to keep file service running?) 
> This for example would suggest that whoever drew up the list has a particular idea about how clustered file systems work that simply does not apply to GPFS; there are no metadata servers in GPFS There are lots of other points that just don't make sense to me as a storage administrator. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu May 31 09:32:24 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 31 May 2012 08:32:24 +0000 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <4FC671EB.7060104@slac.stanford.edu> References: <4FC671EB.7060104@slac.stanford.edu> Message-ID: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> Hello Grace, I've cribbed out the questions you've already answered. Though, I think these should be best directed to IBM pre-sales tech to qualify them. Regards, Jez > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 30 May 2012 20:16 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments > > Hi, > > We are in the process of choosing a permanent file system for our > institution, GPFS is one of the three candidates. Could someone help me to > give comments or answers to the requests listed in the following. Basically, > I need your help to mark 1 or 0 in the GPFS column if a feature either exists > or doesnt exist, respectively. Please also add supporting comments if a > feature has additional info, e.g., 100PB single namespace file system > supported, etc. > I answered some of them which I have tested, or got the information from > google or manuals. > > > 9. Disk quotas based on directory. = 1 (per directory based on filesets which is a 'hard linked' directory to a storage pool via placement rules.) Max filesets is 10000 in 3.5. > Groupadmin-Visible Features > --------------------------------------- > 5. Nesting groups within groups is permitted. > > 6. Groups are equal partners with users in terms of access control lists. GPFS supports POSIX and NFS v4 ACLS (which are not quite the same as Windows ACLs) > 7. Group managers can adjust disk quotas > > 8. Group managers can create/delete/modify user spaces. > > 9. Group managers can create/delete/modify group spaces. .. .paraphrase... users with admin privs (root / sudoers) can adjust things. How you organise your user & group administration is up to you. This is external to GPFS. > Sysadmin-Visible Features > ----------------------------------- > > 1. Namespace is expandable and shrinkable without file system downtime. > (My answer: 1) > > 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some > type of filtering, without manual user intervention (Data life-cycle > management) = 1 . You can do this with GPFS policies and THRESHOLDS. Or look at IBM's V7000 Easy Tier. > 3. User can provide manual "hints" on where to place files based on usage > requirements. Do you mean the user is prompted, when you write a file? If so, then no. Though there is an API, so you could integrate that functionality if required, and your application defers to your GPFS API program before writes. I suggest user education is far simpler and cheaper to maintain. If you need prompts, your workflow is inefficient. It should be transparent to the user. > 4. 
Allows resource-configurable logical relocation or actual migration of data > without user downtime (Hardware life-cycle > management/patching/maintenance) = 1 > 6. Product has at least two commercial companies providing support. =1 Many companies provide OEM GPFS support. Though at some point this may be backed off to IBM if a problem requires development teams. > 9. Customized levels of data redundancy at the file/subdirectory/partition > layer, based on user requirements. > Replication. Load-balancing. =1 > 10. Management software fully spoorts command line interface (CLI) =1 > 10. Management software supports a graphical user interface (GUI) =1 , if you buy IBM's SONAS. Presume that v7000 has something also. > 11. Must run on non-proprietary x86/x64 hardware (Note: this might > eliminate some proprietary solutions that meet every other requirement.) =1 > 13. Robust and reliable: file system must recover gracefully from an > unscheduled power outage, and not take forever for fsck. =1. I've been through this personally. All good. All cluster nodes can participate in fsck. (Actually one of our Qlogic switches spat badness to two of our storage units which caused both units to simultaneously soft-reboot. Apparently the Qlogic firmware couldn't handle the amount of data we transfer a day in an internal counter. Needless to say, new firmware was required.) > 14. Client code must support RHEL. > (My answer: 1) > > 18. Affordable > > 19. Value for the money. Both above points are arguable. Nobody knows your budget. That said, it's cheaper to buy a GPFS system than an Isilon system of similar spec (I have both - and we're just about to switch off the Isilon due to running and expansion costs). Stornext is just too much management overhead and constant de-fragging. > 20. Provides native accounting information to support a storage service > model. What does 'Storage service model mean?' Chargeback per GB / user? If so, then you can write a list policy to obtain this information or use fileset quota accounting. > 21. Ability to change file owner throughout file system (generalized ability > to implement metadata changes) =1. You'd run a policy to do this. > 22. Allows discrete resource allocation in case groups want physical > resource separation, yet still allows central management. > Resource allocation might control bandwidth, LUNx, CPU, > user/subdir/filesystem quotas, etc. = 0.5. Max bandwidth you can control. You can't set a min. CPU is irrelevant. > 23. Built-in file system compression option No. Perhaps you could use TSM as an external storage pool and de-dupe to VTL ? If you backend that to tape, remember it will un-dupe as it writes to tape. > 24. Built-in file-level replication option =1 > 25. Built-in file system deduplication option =0 . I think. > 26. Built-in file system encryption option =1, if you buy IBM storage with on disk encryption. I.E. the disk is encrypted and is unreadable if removed, but the actual file system itself is not. > 27. Support VM image movement among storage servers, including moving > entire jobs (hypervisor requirement) That's a huge scope. Check your choice of VM requirements. GPFS is just a file system. > 28. Security/authentication of local user to allow access (something stronger > than host-based access) No. Unless you chkconfig the GPFS start scripts off and then have the user authenticate to be abel to start the script which mounts GPFS. > 29. WAN-based file system (e.g., for disaster recover site) =1 > 31. 
Can perform OPTIONAL file system rebalancing when adding new > storage. =1 > 32. Protection from accidental, large scale deletions =1 via snapshots. Though that's retrospective. No system is idiot proof. > 33. Ability to transfer snapshots among hosts. Unknown. All hosts in GPFS would see the snapshot. Transfer to a different GPFS cluster for DR, er, not quite sure. > 34. Ability to promote snapshot to read/write partition In what context does 'promote' mean? > 35. Consideration given to number of metadata servers required to support > overall service, and how that affects HA, i.e., > must be able to support HA on a per namespace basis . (How many MD > servers would we need to keep file service running?) 2 dedicated NSD servers for all namespaces is a good setup. Though, metadata is shared between all nodes. > 36. Consideration given to backup and restore capabilities and compatible > hardware/software products. Look at timeframe requirements. > (What backup solutions does it recommend?) I rather like TSM. Not tried HPSS. > 37. Need to specify how any given file system is not POSIX-compliant so we > understand it. Make this info available to users. > (What are its POSIX shortcomings?) GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility shortcomings. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From luke.raimbach at oerc.ox.ac.uk Thu May 31 12:41:49 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 31 May 2012 11:41:49 +0000 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> References: <4FC671EB.7060104@slac.stanford.edu> <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> Message-ID: Hi Jez, >> 27. Support VM image movement among storage servers, including moving >> entire jobs (hypervisor requirement) > That's a huge scope. Check your choice of VM requirements. GPFS is just a file system. This works very nicely with VMware - we run our datastores from the cNFS exports of the file system. Putting the VM disks in a file-set allowed us to re-stripe the file-set, replicating it on to spare hardware in order to take down our main storage system for a firmware upgrade. The ESXi hosts didn't even flinch when we stopped the disks in the main file system! > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 31 May 2012 09:32 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS Evaluation List - Please give some > comments > > Hello Grace, > > I've cribbed out the questions you've already answered. > Though, I think these should be best directed to IBM pre-sales tech to qualify > them. > > Regards, > > Jez > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Grace Tsai > > Sent: 30 May 2012 20:16 > > To: gpfsug-discuss at gpfsug.org > > Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some > > comments > > > > Hi, > > > > We are in the process of choosing a permanent file system for our > > institution, GPFS is one of the three candidates. Could someone help > > me to give comments or answers to the requests listed in the > > following. 
Basically, I need your help to mark 1 or 0 in the GPFS > > column if a feature either exists or doesnt exist, respectively. > > Please also add supporting comments if a feature has additional info, > > e.g., 100PB single namespace file system supported, etc. > > I answered some of them which I have tested, or got the information > > from google or manuals. > > > > > > 9. Disk quotas based on directory. > > = 1 (per directory based on filesets which is a 'hard linked' directory to a > storage pool via placement rules.) Max filesets is 10000 in 3.5. > > > > Groupadmin-Visible Features > > --------------------------------------- > > > 5. Nesting groups within groups is permitted. > > > > 6. Groups are equal partners with users in terms of access control lists. > > GPFS supports POSIX and NFS v4 ACLS (which are not quite the same as > Windows ACLs) > > > 7. Group managers can adjust disk quotas > > > > 8. Group managers can create/delete/modify user spaces. > > > > 9. Group managers can create/delete/modify group spaces. > > .. .paraphrase... users with admin privs (root / sudoers) can adjust things. > How you organise your user & group administration is up to you. This is > external to GPFS. > > > > Sysadmin-Visible Features > > ----------------------------------- > > > > 1. Namespace is expandable and shrinkable without file system downtime. > > (My answer: 1) > > > > 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) > > via some type of filtering, without manual user intervention (Data > > life-cycle > > management) > > = 1 . You can do this with GPFS policies and THRESHOLDS. Or look at IBM's > V7000 Easy Tier. > > > 3. User can provide manual "hints" on where to place files based on > > usage requirements. > > Do you mean the user is prompted, when you write a file? If so, then no. > Though there is an API, so you could integrate that functionality if required, > and your application defers to your GPFS API program before writes. I > suggest user education is far simpler and cheaper to maintain. If you need > prompts, your workflow is inefficient. It should be transparent to the user. > > > 4. Allows resource-configurable logical relocation or actual migration > > of data without user downtime (Hardware life-cycle > > management/patching/maintenance) > > = 1 > > > 6. Product has at least two commercial companies providing support. > > =1 Many companies provide OEM GPFS support. Though at some point this > may be backed off to IBM if a problem requires development teams. > > > 9. Customized levels of data redundancy at the > > file/subdirectory/partition layer, based on user requirements. > > Replication. Load-balancing. > > =1 > > > 10. Management software fully spoorts command line interface (CLI) > > =1 > > > > 10. Management software supports a graphical user interface (GUI) > > =1 , if you buy IBM's SONAS. Presume that v7000 has something also. > > > 11. Must run on non-proprietary x86/x64 hardware (Note: this might > > eliminate some proprietary solutions that meet every other > > requirement.) > > =1 > > > 13. Robust and reliable: file system must recover gracefully from an > > unscheduled power outage, and not take forever for fsck. > > =1. I've been through this personally. All good. All cluster nodes can > participate in fsck. > (Actually one of our Qlogic switches spat badness to two of our storage units > which caused both units to simultaneously soft-reboot. 
Apparently the > Qlogic firmware couldn't handle the amount of data we transfer a day in an > internal counter. Needless to say, new firmware was required.) > > > 14. Client code must support RHEL. > > (My answer: 1) > > > > > 18. Affordable > > > > 19. Value for the money. > > Both above points are arguable. Nobody knows your budget. > That said, it's cheaper to buy a GPFS system than an Isilon system of similar > spec (I have both - and we're just about to switch off the Isilon due to > running and expansion costs). Stornext is just too much management > overhead and constant de-fragging. > > > 20. Provides native accounting information to support a storage > > service model. > > What does 'Storage service model mean?' Chargeback per GB / user? > If so, then you can write a list policy to obtain this information or use fileset > quota accounting. > > > 21. Ability to change file owner throughout file system (generalized > > ability to implement metadata changes) > > =1. You'd run a policy to do this. > > > 22. Allows discrete resource allocation in case groups want physical > > resource separation, yet still allows central management. > > Resource allocation might control bandwidth, LUNx, CPU, > > user/subdir/filesystem quotas, etc. > > = 0.5. Max bandwidth you can control. You can't set a min. CPU is > irrelevant. > > > 23. Built-in file system compression option > > No. Perhaps you could use TSM as an external storage pool and de-dupe to > VTL ? If you backend that to tape, remember it will un-dupe as it writes to > tape. > > > 24. Built-in file-level replication option > > =1 > > > 25. Built-in file system deduplication option > > =0 . I think. > > > 26. Built-in file system encryption option > > =1, if you buy IBM storage with on disk encryption. I.E. the disk is encrypted > and is unreadable if removed, but the actual file system itself is not. > > > 27. Support VM image movement among storage servers, including moving > > entire jobs (hypervisor requirement) > > That's a huge scope. Check your choice of VM requirements. GPFS is just a > file system. > > > 28. Security/authentication of local user to allow access (something > > stronger than host-based access) > > No. Unless you chkconfig the GPFS start scripts off and then have the user > authenticate to be abel to start the script which mounts GPFS. > > > 29. WAN-based file system (e.g., for disaster recover site) > > =1 > > > 31. Can perform OPTIONAL file system rebalancing when adding new > > storage. > > =1 > > > 32. Protection from accidental, large scale deletions > > =1 via snapshots. Though that's retrospective. No system is idiot proof. > > > 33. Ability to transfer snapshots among hosts. > > Unknown. All hosts in GPFS would see the snapshot. Transfer to a different > GPFS cluster for DR, er, not quite sure. > > > 34. Ability to promote snapshot to read/write partition > > In what context does 'promote' mean? > > > 35. Consideration given to number of metadata servers required to > > support overall service, and how that affects HA, i.e., > > must be able to support HA on a per namespace basis . (How many > > MD servers would we need to keep file service running?) > > 2 dedicated NSD servers for all namespaces is a good setup. Though, > metadata is shared between all nodes. > > > 36. Consideration given to backup and restore capabilities and > > compatible hardware/software products. Look at timeframe requirements. > > (What backup solutions does it recommend?) > > I rather like TSM. Not tried HPSS. > > > 37. 
Need to specify how any given file system is not POSIX-compliant > > so we understand it. Make this info available to users. > > (What are its POSIX shortcomings?) > > GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility > shortcomings. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From j.buzzard at dundee.ac.uk Wed May 9 16:47:25 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2012 16:47:25 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba Message-ID: <4FAA918D.50101@dundee.ac.uk> Not documented, but I believe there are four ;-) allowSambaCaseInsensitiveLookup syncSambaMetadataOps cifsBypassShareLocksOnRename cifsBypassTraversalChecking From what I can determine they are binary on/off options. For example you enable the first with mmchconfig allowSambaCaseInsensitiveLookup=yes I am guessing but I would imagine when the first is turned on, then when Samba tries to lookup a filename GPFS will do the case insensitive matching for you, which should be faster than Samba having to do it. You should obviously have case sensitive = no in your Samba config as well. The cifsBypassTraversalChecking is explained in the SONAS manual page for chcfg, and I note it is on by default in SONAS. http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp?topic=%2Fcom.ibm.sonas.doc%2Fmanpages%2Fchcfg.html Some Googling indicates that NetApp and EMC have options for bypass traverse checking on their filers, so something you probably want to turn on. The other two sound fairly self explanatory, but the question is why would you want them turned on. Any one any ideas? JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From j.buzzard at dundee.ac.uk Wed May 9 23:57:56 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 09 May 2012 23:57:56 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <4FAA918D.50101@dundee.ac.uk> References: <4FAA918D.50101@dundee.ac.uk> Message-ID: <4FAAF674.9070809@dundee.ac.uk> Jonathan Buzzard wrote: > > Not documented, but I believe there are four ;-) > > allowSambaCaseInsensitiveLookup > syncSambaMetadataOps > cifsBypassShareLocksOnRename > cifsBypassTraversalChecking > Just add to this I believe there are some more, mostly because they are between the first two and last two in mmchconfig Korn shell file They are allowSynchronousFcntlRetries allowWriteWithDeleteChild Not sure what the first one does, but the second one I am guessing allows you to write to a folder if you can delete child folders and would make GPFS/Samba follow Windows schematics closer. Over the coming days I hope to play around with some of these options and see what they do. 
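The sort of thing I have in mind for testing them - and I am assuming here that they are plain on/off switches toggled with mmchconfig in exactly the same way as the four above, which is unconfirmed - is roughly:

mmchconfig allowWriteWithDeleteChild=yes          # assumed yes/no switch, undocumented
mmlsconfig | grep -i allowWriteWithDeleteChild    # check the value actually took
# ...exercise a Samba share where the user holds DELETE_CHILD but not write permission...
mmchconfig allowWriteWithDeleteChild=no           # put it back afterwards

If they turn out to need a particular code level or an immediate (-i) flag, treat the above as a starting point rather than a recipe.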
Also there is an undocumented option for ACL's on mmchfs (I am working on 3.4.0-13 here so it's not even 3.5) so that you can do mmchfs test -k samba and then [root at krebs1 bin]# mmlsfs test flag value description ------------------- ------------------------ ----------------------------------- -f 32768 Minimum fragment size in bytes -i 512 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k samba ACL semantics in effect -n 32 Estimated number of nodes that will mount file system -B 1048576 Block size -Q user;group;fileset Quotas enforced none Default quotas enabled --filesetdf no Fileset df enabled? -V 12.07 (3.4.0.4) Current file system version 11.05 (3.3.0.2) Original file system version --create-time Fri Dec 4 09:37:28 2009 File system creation time -u yes Support for large LUNs? -z yes Is DMAPI enabled? -L 4194304 Logfile size -E no Exact mtime mount option -S no Suppress atime mount option -K always Strict replica allocation option --fastea yes Fast external attributes enabled? --inode-limit 1427760 Maximum number of inodes -P system;nearline Disk storage pools in file system -d gpfs19nsd;gpfs20nsd;gpfs21nsd;gpfs22nsd;gpfs23nsd;gpfs24nsd Disks in file system -A yes Automatic mount option -o none Additional mount options -T /test Default mount point --mount-priority 0 Mount priority Not entirely sure what samba ACL's are mind you. Does it modify NFSv4 ACL's so they follow NTFS schematics more closely? JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu May 10 08:39:09 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 10 May 2012 07:39:09 +0000 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <4FAAF674.9070809@dundee.ac.uk> References: <4FAA918D.50101@dundee.ac.uk> <4FAAF674.9070809@dundee.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> If you're on 3.4.0-13, can you confirm the operation of DMAPI Windows mounts. Any serious issues? > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 09 May 2012 23:58 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS magic options for Samba > > Jonathan Buzzard wrote: > > > > Not documented, but I believe there are four ;-) > > > > allowSambaCaseInsensitiveLookup > > syncSambaMetadataOps > > cifsBypassShareLocksOnRename > > cifsBypassTraversalChecking > > > > Just add to this I believe there are some more, mostly because they are > between the first two and last two in mmchconfig Korn shell file > > They are > > allowSynchronousFcntlRetries > allowWriteWithDeleteChild > > Not sure what the first one does, but the second one I am guessing allows > you to write to a folder if you can delete child folders and would make > GPFS/Samba follow Windows schematics closer. Over the coming days I > hope to play around with some of these options and see what they do. 
> > Also there is an undocumented option for ACL's on mmchfs (I am working on > 3.4.0-13 here so it's not even 3.5) so that you can do > > mmchfs test -k samba > > and then > > [root at krebs1 bin]# mmlsfs test > flag value description > ------------------- ------------------------ > ----------------------------------- > -f 32768 Minimum fragment size in bytes > -i 512 Inode size in bytes > -I 32768 Indirect block size in bytes > -m 1 Default number of metadata > replicas > -M 2 Maximum number of metadata > replicas > -r 1 Default number of data > replicas > -R 2 Maximum number of data > replicas > -j cluster Block allocation type > -D nfs4 File locking semantics in > effect > -k samba ACL semantics in effect > -n 32 Estimated number of nodes > that will mount file system > -B 1048576 Block size > -Q user;group;fileset Quotas enforced > none Default quotas enabled > --filesetdf no Fileset df enabled? > -V 12.07 (3.4.0.4) Current file system version > 11.05 (3.3.0.2) Original file system version > --create-time Fri Dec 4 09:37:28 2009 File system creation time > -u yes Support for large LUNs? > -z yes Is DMAPI enabled? > -L 4194304 Logfile size > -E no Exact mtime mount option > -S no Suppress atime mount option > -K always Strict replica allocation > option > --fastea yes Fast external attributes > enabled? > --inode-limit 1427760 Maximum number of inodes > -P system;nearline Disk storage pools in file > system > -d > gpfs19nsd;gpfs20nsd;gpfs21nsd;gpfs22nsd;gpfs23nsd;gpfs24nsd Disks in file > system > -A yes Automatic mount option > -o none Additional mount options > -T /test Default mount point > --mount-priority 0 Mount priority > > > Not entirely sure what samba ACL's are mind you. Does it modify NFSv4 > ACL's so they follow NTFS schematics more closely? > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From j.buzzard at dundee.ac.uk Thu May 10 09:54:46 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 10 May 2012 09:54:46 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> References: <4FAA918D.50101@dundee.ac.uk> <4FAAF674.9070809@dundee.ac.uk> <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FAB8256.5010409@dundee.ac.uk> On 10/05/12 08:39, Jez Tucker wrote: > If you're on 3.4.0-13, can you confirm the operation of DMAPI Windows mounts. > Any serious issues? Yes I can confirm that it does not work. Thw documentation is/was all wrong. See this thread in the GPFS forums. http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=15 Basically you need to wait to 3.4.0-14 or jump to 3.5.0-1 :-) I have however noticed that the per fileset quotas seem to be fully functional on 3.4.0-13, turn them on with mmchfs test --perfileset-quota and off with, mmchfs test --noperfileset-quota set a quota for user nemo on the homes fileset with mmedquota -u test:homes:nemo or if you prefer the command line than messing with an editor mmsetquota -u nemo -h 25G /test/homes JAB. -- Jonathan A. 
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From crobson at ocf.co.uk Fri May 18 12:15:50 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Fri, 18 May 2012 12:15:50 +0100 Subject: [gpfsug-discuss] A date for your diaries Message-ID: Dear All, The next GPFS user group meeting will take place on Thursday 20th September. Paul Tomlinson, AWE, has kindly offered to host and the meeting will take place at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Agenda to follow soon. Please contact me to register your place and to highlight any agenda items. Many thanks, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri May 18 15:57:18 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 18 May 2012 14:57:18 +0000 Subject: [gpfsug-discuss] Stupid GPFS Tricks 2012 - Call for entries Message-ID: <39571EA9316BE44899D59C7A640C13F5305997B6@WARVWEXC1.uk.deluxe-eu.com> [cid:image001.png at 01CD350C.1C9EA0D0] Hello GPFSUG peeps, Have you used GPFS to do something insanely wacky that the sheer craziness of would blow our minds? Perhaps you've done something spectacularly stupid that turned out to be, well, just brilliant. Maybe you used the GPFS API, policies or scripts to create a hack of utter awesomeness. If so, then Stupid GPFS Tricks is for you. The rules: - It must take no longer than 10 minutes to explain your stupid trick. - You must be able to attend the next UG at AWE. - All stupid tricks must be submitted by Aug 31st 2012. Entries should be submitted to secretary at gpfsug.org with the subject "Stupid GPFS Trick". A short description of your trick and any associated Powerpoint/OO/etc. slides is required or point us to a URL. Thanks Jez [This event idea has been shamelessly robbed from the Pixar UG. Thanks folks!] --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16602 bytes Desc: image001.png URL: From crobson at ocf.co.uk Wed May 23 08:53:02 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 23 May 2012 08:53:02 +0100 Subject: [gpfsug-discuss] Crispin Keable article Message-ID: Dear All, An interesting article featuring Crispin Keable (who has previously presented at our user group meetings) was published in The Register yesterday. Crispin talks about the latest GPFS 3.5 update. Read the full article: http://www.theregister.co.uk/2012/05/21/ibm_general_parallel_file_system_3dot5/ Claire Robson GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.raimbach at oerc.ox.ac.uk Thu May 24 11:55:42 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 24 May 2012 10:55:42 +0000 Subject: [gpfsug-discuss] GPFS Question: Will stopping all tie-breaker disks break quorum semantics? 
Message-ID:

Dear GPFS,

I have a relatively simple GPFS set-up: Two manager-quorum nodes (primary and secondary configuration nodes) run the cluster with tie-breaker disk quorum semantics. The two manager nodes are SAN attached to 6 x 20TB SATA NSDs (marked as dataOnly), split into two failure groups so we could create a file system that supported replication. Three of these NSDs are marked as the tie-breaker disks. The metadata is stored on SAS disks located in both manager-quorum nodes (marked as metaDataOnly) and replicated between them.

The disk controller subsystem that runs the SATA NSDs requires a reboot, BUT I do not want to shut down GPFS as some critical services are dependent on a small (~12TB) portion of the data. I have added two additional NSD servers to the cluster using some old equipment. These are SAN attached to 10 x 2TB LUNs, which is enough to keep the critical data on.

I am removing one of the SATA 20TB LUNs from the file system 'system' storage pool on the manager nodes and adding it to another storage pool 'evac-pool' which contains the new 10 x 2TB NSDs. Using the policy engine, I want to migrate the file set which contains the critical data to this new storage pool and enable replication of the file set (with the single 20TB NSD in failure group 1 and the 10 x 2TB NSDs in failure group 2). I am expecting to then be able to suspend and then stop the 20TB NSD and maintain access to the critical data.

This plan is progressing nicely, but I'm not yet at the stage where I can stop the 20TB NSD (I'm waiting for a re-stripe to finish for something else). Does this plan sound plausible so far? I've read the relevant documentation and will run an experiment with stopping the single 20TB NSD first.

However, I thought about a potential problem - the quorum semantics in operation. When I switch off all six 20TB NSDs, the cluster manager-quorum nodes to which they are attached will remain online (to serve the metadata NSDs for the surviving data disks), but all the tie-breaker disks are on the six 20TB NSDs. My question is, will removing access to the tie-breaker disks affect GPFS quorum, or are they only referenced when quorum is lost?

I'm running GPFS 3.4.7.

Thanks,
Luke.

--
Luke Raimbach
IT Manager
Oxford e-Research Centre
7 Keble Road, Oxford, OX1 3QG
+44(0)1865 610639

From ghemingtsai at gmail.com Sat May 26 01:10:04 2012
From: ghemingtsai at gmail.com (Grace Tsai)
Date: Fri, 25 May 2012 17:10:04 -0700
Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E
Message-ID:

Hi,

I have a GPFS system version 3.4, which includes the following two GPFS file systems with the directories:

/gpfs_directory1
/gpfs_directory2

I'd like to use HSM to back up these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run the HSM GUI on this GPFS server; the list of the file systems on this GPFS server is as follows:

File System State Size(KB) Free(KB) ...
------------------
/ Not Manageable
/boot Not Manageable
...
/gpfs_directory1 Not Managed
/gpfs_directory2 Not Managed

I click "gpfs_directory1", and click "Manage"
=>
I got the error:
"""
A conflicting Space Management process is already running in the /gpfs_directory1 file system.
Please wait until the Space management process is ready and try again.
"""

The dsmerror.log shows the message:
"ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management"

Is there anything on GPFS or HSM or TSM server that I didn't configure correctly? Please help. Thanks.
Grace

From Jez.Tucker at rushes.co.uk Mon May 28 16:55:54 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Mon, 28 May 2012 15:55:54 +0000
Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E
In-Reply-To:
References:
Message-ID: <39571EA9316BE44899D59C7A640C13F53059CC99@WARVWEXC1.uk.deluxe-eu.com>

Hello Grace

This is most likely because the file system that you're trying to manage via Space Management isn't configured as such, i.e. the -z flag in mmlsfs:

http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html

Also: This IBM red book should be a good starting point and includes the information you need should you wish to set up GPFS-driven TSM migration (using THRESHOLD).

http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1

Suggest you read the red book first and decide which method you'd like.

Regards,

Jez

---
Jez Tucker
Senior Sysadmin
Rushes

GPFSUG Chairman (chair at gpfsug.org)

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai
Sent: 26 May 2012 01:10
To: gpfsug-discuss at gpfsug.org
Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E

Hi,

I have a GPFS system version 3.4, which includes the following two GPFS file systems with the directories:

/gpfs_directory1
/gpfs_directory2

I'd like to use HSM to back up these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run the HSM GUI on this GPFS server; the list of the file systems on this GPFS server is as follows:

File System State Size(KB) Free(KB) ...
------------------
/ Not Manageable
/boot Not Manageable
...
/gpfs_directory1 Not Managed
/gpfs_directory2 Not Managed

I click "gpfs_directory1", and click "Manage"
=>
I got the error:
"""
A conflicting Space Management process is already running in the /gpfs_directory1 file system.
Please wait until the Space management process is ready and try again.
"""

The dsmerror.log shows the message:
"ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management"

Is there anything on GPFS or HSM or TSM server that I didn't configure correctly? Please help. Thanks.
-L 4194304 Logfile size -E yes Exact mtime mount option -S no Suppress atime mount option -K whenpossible Strict replica allocation option --fastea yes Fast external attributes enabled? --inode-limit 571392 Maximum number of inodes -P system Disk storage pools in file system -d scratch_DL1;scratch_MDL1 Disks in file system -A no Automatic mount option -o none Additional mount options -T /gpfs_directory1/ Default mount point --mount-priority 0 Mount priority But I still got the error message in dsmsmj from "manage" on /gpfs_directory1 "A conflicting Space Management is already running in the /gpfs_directory1 file system. Please wait until the Space Management process is ready and try" Could you help please? Could you give more suggestions please? Thanks. Grace On Tue, May 29, 2012 at 4:00 AM, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at gpfsug.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at gpfsug.org > > You can reach the person managing the list at > gpfsug-discuss-owner at gpfsug.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. Re: Use HSM to backup GPFS - error message: ANS9085E (Jez Tucker) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 28 May 2012 15:55:54 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS - error message: > ANS9085E > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059CC99 at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > Hello Grace > > This is most likely because the file system that you're trying to manage > via Space Management isn't configured as such. > > I.E. the -z flag in mmlsfs > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html > > Also: > > This IBM red book should be a good starting point and includes the > information you need should you with to setup GPFS drives TSM migration > (using THRESHOLD). > > http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 > > Suggest you read the red book first and decide which method you'd like. > > Regards, > > Jez > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 26 May 2012 01:10 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E > > Hi, > > I have a GPFS system verson 3.4, which includes the following two GPFS > file systems with the directories: > > /gpfs_directory1 > /gpfs_directory2 > > I like to use HSM to backup these GPFS files to the tapes in our TSM > server (RHAT 6.2, TSM 6.3). > I run HSM GUI on this GPFS server, the list of the file systems on this > GPFS server is as follows: > > File System State Size(KB) Free(KB) ... > ------------------ > / Not Manageable > /boot Not Manageable > ... > /gpfs_directory1 Not Managed > /gpfs_directory2 Not Managed > > > I click "gpfs_directory1", and click "Manage" > => > I got error: > """ > A conflicting Space Management process is already running in the > /gpfs_directory1 file system. 
> Please wait until the Space management process is ready and try again. > """ > > The dsmerror.log shows the message: > "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space > management" > > Is there anything on GPFS or HSM or TSM server that I didnt configure > correctly? Please help. Thanks. > > Grace > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120528/b97e39e0/attachment-0001.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 5, Issue 6 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:28:01 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:28:01 +0000 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) Message-ID: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Hello all I've been having a pootle around ye olde Internet in a coffee break and noticed that RDMA over Ethernet exists. http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html Has anyone had any experience of using this? (even outside GPFS) I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / iWARP support. It suddenly occurred to me that I have 10Gb Brocade VDX switches with DCB & PFC and making things go faster is great. Perhaps the HPC crowd do this, but only via IB? Thoughts? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:56:42 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:56:42 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059D745@WARVWEXC1.uk.deluxe-eu.com> On the command line: What's the output of dsmmigfs query -Detail and ps -ef | grep dsm From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. 
Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at arif-ali.co.uk Wed May 30 14:18:17 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 30 May 2012 14:18:17 +0100 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FC61E19.8060909@arif-ali.co.uk> On 30/05/12 08:28, Jez Tucker wrote: > > Hello all > > I've been having a pootle around ye olde Internet in a coffee break > and noticed that RDMA over Ethernet exists. > > http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet > > http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html > > Has anyone had any experience of using this? (even outside GPFS) > > I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / > iWARP support. > > It suddenly occurred to me that I have 10Gb Brocade VDX switches with > DCB & PFC and making things go faster is great. > > Perhaps the HPC crowd do this, but only via IB? > I did have a look at this about a year ago, and thought it would be great. But never thought people would be interested. and didn't find anything within the GPFS docs or secret configs that indicated that this is supported In most of our setups we do tend to stick with verbs-rdma, and that is where most of our customer's are working with. It would be very interesting to see if it was ever supported, and to see what kind of performance improvement we would get by taking the tcp layer away maybe one of the devs could shed some light on this. -- regards, Arif -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed May 30 15:36:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 30 May 2012 07:36:05 -0700 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows Message-ID: This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 15:58:12 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 14:58:12 +0000 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E2B2@WARVWEXC1.uk.deluxe-eu.com> May I be the first to stick both hands in the air and run round the room screaming WOOOT! Thanks to the dev team for that one. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 30 May 2012 15:36 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Mount DMAPI File system on Windows This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From j.buzzard at dundee.ac.uk Wed May 30 16:55:22 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 30 May 2012 16:55:22 +0100 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <4FC642EA.8050601@dundee.ac.uk> Scott Fadden wrote: > This came up in the user group meeting so I thought I would send this to > the group. > > Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file > systems on GPFS Windows nodes. > Are we absolutely 100% sure on that? I ask because the release notes have contradictory information on this and when I asked in the GPFS forum for clarification the reply was it would be starting with 3.4.0-14 http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=30 JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From ghemingtsai at gmail.com Wed May 30 17:06:15 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:06:15 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Wed May 30 17:09:57 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:09:57 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 
00:00:00 /opt/tivoli/tsm/client/hsm/ bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 17:14:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 16:14:17 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E38F@WARVWEXC1.uk.deluxe-eu.com> So. My hunch from looking at my system here is that you haven't actually told dsm that the filesystem is to be space managed. You do that here: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html Then re-run dsmmigfs query -Detail and hopefully you should see something similar to this: [root at tsm01 ~]# dsmmigfs query -Detail IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 2, Level 4.1 Client date/time: 30-05-2012 17:13:00 (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved. The local node has Node ID: 3 The failover environment is deactivated on the local node. File System Name: /mnt/gpfs High Threshold: 100 Low Threshold: 80 Premig Percentage: 20 Quota: 999999999999999 Stub Size: 0 Server Name: TSM01 Max Candidates: 100 Max Files: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 MinMigFileSize: 0 Preferred Node: tsm01 Node ID: 3 Owner Node: tsm01 Node ID: 3 Source Nodes: tsm01 Then see if your HSM GUI works properly. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 
00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace

From gtsai at slac.stanford.edu Wed May 30 20:15:55 2012
From: gtsai at slac.stanford.edu (Grace Tsai)
Date: Wed, 30 May 2012 12:15:55 -0700
Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments
Message-ID: <4FC671EB.7060104@slac.stanford.edu>

Hi,

We are in the process of choosing a permanent file system for our institution; GPFS is one of the three candidates. Could someone help me to give comments or answers to the requests listed in the following? Basically, I need your help to mark 1 or 0 in the GPFS column if a feature either exists or doesn't exist, respectively. Please also add supporting comments if a feature has additional info, e.g., 100PB single namespace file system supported, etc. I answered some of them which I have tested, or got the information from Google or manuals.

User-Visible Features:
-----------------------------
1. Allows a single namespace (UNIX path) of at least 10s of petabytes in size. (My answer: Current tested: 4PB)
2. Large number of files supported (specify number) per namespace. (My answer: 9*10**9)
3. Supports POSIX-like mount points (i.e., looks like NFS) (My answer: 1)
4. File system supports file level access control lists (ACLs) (My answer: 1)
5. File system supports directory level ACLs, e.g., like AFS. (My answer: 1)
6. Disk quotas that users can query. (My answer: 1)
7. Disk quotas based on UID (My answer: 1)
8. Disk quotas based on GID (My answer: 1)
9. Disk quotas based on directory.
10. User-accessible snapshots

Groupadmin-Visible Features
---------------------------------------
1. Group access (capabilities) similar to AFS pts groups. (My answer: 1)
2. Group access administration (create/delete/modify) that can be delegated to the groups themselves. (My answer: 1)
3. High limit (1000s) on the number of users per group
4. High limit (100s) on the number of groups a user can belong to. (My answer: 1)
5. Nesting groups within groups is permitted.
6. Groups are equal partners with users in terms of access control lists.
7. Group managers can adjust disk quotas
8. Group managers can create/delete/modify user spaces.
9. Group managers can create/delete/modify group spaces.

Sysadmin-Visible Features
-----------------------------------
1. Namespace is expandable and shrinkable without file system downtime. (My answer: 1)
2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some type of filtering, without manual user intervention (Data life-cycle management)
3. User can provide manual "hints" on where to place files based on usage requirements.
4. Allows resource-configurable logical relocation or actual migration of data without user downtime (Hardware life-cycle management/patching/maintenance)
5. Product has been shipping in production for a minimum of 2 years, nice to have at least 5 years. Must be comfortable with the track record. (My answer: 1)
6. Product has at least two commercial companies providing support.
7. Distributed metadata (or equivalent) to remove obvious file system bottlenecks. (My answer: 1)
8. File system supports data integrity checking (e.g., ZFS checksumming) (My answer: 1)
9. Customized levels of data redundancy at the file/subdirectory/partition layer, based on user requirements. Replication. Load-balancing.
10. Management software fully supports command line interface (CLI)
10. Management software supports a graphical user interface (GUI)
11. Must run on non-proprietary x86/x64 hardware (Note: this might eliminate some proprietary solutions that meet every other requirement.)
12. Software provides tools to measure performance and monitor problems. (My answer: 1)
13. Robust and reliable: file system must recover gracefully from an unscheduled power outage, and not take forever for fsck.
14. Client code must support RHEL. (My answer: 1)
15. Client code must support RHEL compatible OS. (My answer: 1)
16. Client code must support Linux. (My answer: 1)
17. Client code must support Windows. (My answer: 1)
18. Affordable
19. Value for the money.
20. Provides native accounting information to support a storage service model.
21. Ability to change file owner throughout file system (generalized ability to implement metadata changes)
22. Allows discrete resource allocation in case groups want physical resource separation, yet still allows central management. Resource allocation might control bandwidth, LUNx, CPU, user/subdir/filesystem quotas, etc.
23. Built-in file system compression option
24. Built-in file-level replication option
25. Built-in file system deduplication option
26. Built-in file system encryption option
27. Support VM image movement among storage servers, including moving entire jobs (hypervisor requirement)
28. Security/authentication of local user to allow access (something stronger than host-based access)
29. WAN-based file system (e.g., for disaster recovery site)
30. Must be able to access filesystem via NFSv3 (My answer: 1)
31. Can perform OPTIONAL file system rebalancing when adding new storage.
32. Protection from accidental, large scale deletions
33. Ability to transfer snapshots among hosts.
34. Ability to promote snapshot to read/write partition
35. Consideration given to number of metadata servers required to support overall service, and how that affects HA, i.e., must be able to support HA on a per namespace basis. (How many MD servers would we need to keep file service running?)
36. Consideration given to backup and restore capabilities and compatible hardware/software products. Look at timeframe requirements. (What backup solutions does it recommend?)
37. Need to specify how any given file system is not POSIX-compliant so we understand it. Make this info available to users. (What are its POSIX shortcomings?)
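As a concrete illustration of how two of the items above are usually addressed in GPFS - item 2 (storage tiering without user intervention) with the policy engine, and item 9 (disk quotas based on directory) with filesets - the following is a minimal sketch only. The device name 'fs1' and the fileset name 'projA' are invented for the example; the pool names 'system' and 'nearline' are borrowed from the mmlsfs output shown earlier in the archive and are equally an assumption for any given cluster:

# Item 2: placement plus threshold-driven migration between pools (names assumed).
cat > /tmp/tier.pol <<'EOF'
RULE 'place-new' SET POOL 'system'
RULE 'spill' MIGRATE FROM POOL 'system' THRESHOLD(85,70) TO POOL 'nearline'
EOF
mmchpolicy fs1 /tmp/tier.pol      # install the rules for the file system
mmapplypolicy fs1                 # migration rules only move data when the policy is applied
                                  # (or when a low-space callback triggers it)

# Item 9: a per-directory quota is done by making the directory a fileset.
mmcrfileset fs1 projA
mmlinkfileset fs1 projA -J /fs1/projA
mmedquota -j fs1:projA            # then set the block/inode limits in the editor

The replies that follow discuss several of these points, including fileset quotas and policy-based tiering, in more detail.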
From j.buzzard at dundee.ac.uk Thu May 31 00:39:32 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 31 May 2012 00:39:32 +0100 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <4FC671EB.7060104@slac.stanford.edu> References: <4FC671EB.7060104@slac.stanford.edu> Message-ID: <4FC6AFB4.6040300@dundee.ac.uk> Grace Tsai wrote: I am not sure who dreamed up this list but I will use two of the points to illustrate why it is bizarre. [SNIP] > 5. Nesting groups within groups is permitted. > Has absolutely nothing whatsoever to do with any file system under Linux that I am aware of. So for example traditionally under Linux you don't have nested groups. However if you are running against Active Directory with winbind you can. This is however independent of any file system you are running. [SNIP] > > 35. Consideration given to number of metadata servers required to > support overall service, and how that affects HA, i.e., > must be able to support HA on a per namespace basis . (How many MD > servers would we need to keep file service running?) > This for example would suggest that whoever drew up the list has a particular idea about how clustered file systems work that simply does not apply to GPFS; there are no metadata servers in GPFS There are lots of other points that just don't make sense to me as a storage administrator. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu May 31 09:32:24 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 31 May 2012 08:32:24 +0000 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <4FC671EB.7060104@slac.stanford.edu> References: <4FC671EB.7060104@slac.stanford.edu> Message-ID: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> Hello Grace, I've cribbed out the questions you've already answered. Though, I think these should be best directed to IBM pre-sales tech to qualify them. Regards, Jez > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 30 May 2012 20:16 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments > > Hi, > > We are in the process of choosing a permanent file system for our > institution, GPFS is one of the three candidates. Could someone help me to > give comments or answers to the requests listed in the following. Basically, > I need your help to mark 1 or 0 in the GPFS column if a feature either exists > or doesnt exist, respectively. Please also add supporting comments if a > feature has additional info, e.g., 100PB single namespace file system > supported, etc. > I answered some of them which I have tested, or got the information from > google or manuals. > > > 9. Disk quotas based on directory. = 1 (per directory based on filesets which is a 'hard linked' directory to a storage pool via placement rules.) Max filesets is 10000 in 3.5. > Groupadmin-Visible Features > --------------------------------------- > 5. Nesting groups within groups is permitted. > > 6. Groups are equal partners with users in terms of access control lists. GPFS supports POSIX and NFS v4 ACLS (which are not quite the same as Windows ACLs) > 7. Group managers can adjust disk quotas > > 8. Group managers can create/delete/modify user spaces. > > 9. 
Group managers can create/delete/modify group spaces. .. .paraphrase... users with admin privs (root / sudoers) can adjust things. How you organise your user & group administration is up to you. This is external to GPFS. > Sysadmin-Visible Features > ----------------------------------- > > 1. Namespace is expandable and shrinkable without file system downtime. > (My answer: 1) > > 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some > type of filtering, without manual user intervention (Data life-cycle > management) = 1 . You can do this with GPFS policies and THRESHOLDS. Or look at IBM's V7000 Easy Tier. > 3. User can provide manual "hints" on where to place files based on usage > requirements. Do you mean the user is prompted, when you write a file? If so, then no. Though there is an API, so you could integrate that functionality if required, and your application defers to your GPFS API program before writes. I suggest user education is far simpler and cheaper to maintain. If you need prompts, your workflow is inefficient. It should be transparent to the user. > 4. Allows resource-configurable logical relocation or actual migration of data > without user downtime (Hardware life-cycle > management/patching/maintenance) = 1 > 6. Product has at least two commercial companies providing support. =1 Many companies provide OEM GPFS support. Though at some point this may be backed off to IBM if a problem requires development teams. > 9. Customized levels of data redundancy at the file/subdirectory/partition > layer, based on user requirements. > Replication. Load-balancing. =1 > 10. Management software fully spoorts command line interface (CLI) =1 > 10. Management software supports a graphical user interface (GUI) =1 , if you buy IBM's SONAS. Presume that v7000 has something also. > 11. Must run on non-proprietary x86/x64 hardware (Note: this might > eliminate some proprietary solutions that meet every other requirement.) =1 > 13. Robust and reliable: file system must recover gracefully from an > unscheduled power outage, and not take forever for fsck. =1. I've been through this personally. All good. All cluster nodes can participate in fsck. (Actually one of our Qlogic switches spat badness to two of our storage units which caused both units to simultaneously soft-reboot. Apparently the Qlogic firmware couldn't handle the amount of data we transfer a day in an internal counter. Needless to say, new firmware was required.) > 14. Client code must support RHEL. > (My answer: 1) > > 18. Affordable > > 19. Value for the money. Both above points are arguable. Nobody knows your budget. That said, it's cheaper to buy a GPFS system than an Isilon system of similar spec (I have both - and we're just about to switch off the Isilon due to running and expansion costs). Stornext is just too much management overhead and constant de-fragging. > 20. Provides native accounting information to support a storage service > model. What does 'Storage service model mean?' Chargeback per GB / user? If so, then you can write a list policy to obtain this information or use fileset quota accounting. > 21. Ability to change file owner throughout file system (generalized ability > to implement metadata changes) =1. You'd run a policy to do this. > 22. Allows discrete resource allocation in case groups want physical > resource separation, yet still allows central management. > Resource allocation might control bandwidth, LUNx, CPU, > user/subdir/filesystem quotas, etc. = 0.5. 
Max bandwidth you can control. You can't set a min. CPU is irrelevant. > 23. Built-in file system compression option No. Perhaps you could use TSM as an external storage pool and de-dupe to VTL? If you backend that to tape, remember it will be un-deduplicated as it writes to tape. > 24. Built-in file-level replication option =1 > 25. Built-in file system deduplication option =0, I think. > 26. Built-in file system encryption option =1, if you buy IBM storage with on disk encryption. I.E. the disk is encrypted and is unreadable if removed, but the actual file system itself is not. > 27. Support VM image movement among storage servers, including moving > entire jobs (hypervisor requirement) That's a huge scope. Check your choice of VM requirements. GPFS is just a file system. > 28. Security/authentication of local user to allow access (something > stronger than host-based access) No. Unless you chkconfig the GPFS start scripts off and then have the user authenticate to be able to start the script which mounts GPFS. > 29. WAN-based file system (e.g., for disaster recover site) =1 > 31. Can perform OPTIONAL file system rebalancing when adding new > storage. =1 > 32. Protection from accidental, large scale deletions =1 via snapshots. Though that's retrospective. No system is idiot proof. > 33. Ability to transfer snapshots among hosts. Unknown. All hosts in GPFS would see the snapshot. Transfer to a different GPFS cluster for DR, er, not quite sure. > 34. Ability to promote snapshot to read/write partition What does 'promote' mean in this context? > 35. Consideration given to number of metadata servers required to support > overall service, and how that affects HA, i.e., > must be able to support HA on a per namespace basis . (How many MD > servers would we need to keep file service running?) 2 dedicated NSD servers for all namespaces is a good setup. Though, metadata is shared between all nodes. > 36. Consideration given to backup and restore capabilities and compatible > hardware/software products. Look at timeframe requirements. > (What backup solutions does it recommend?) I rather like TSM. Not tried HPSS. > 37. Need to specify how any given file system is not POSIX-compliant so we > understand it. Make this info available to users. > (What are its POSIX shortcomings?) GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility shortcomings. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From luke.raimbach at oerc.ox.ac.uk Thu May 31 12:41:49 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 31 May 2012 11:41:49 +0000 Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> References: <4FC671EB.7060104@slac.stanford.edu> <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com> Message-ID: Hi Jez, >> 27. Support VM image movement among storage servers, including moving >> entire jobs (hypervisor requirement) > That's a huge scope. Check your choice of VM requirements. GPFS is just a file system. This works very nicely with VMware - we run our datastores from the cNFS exports of the file system. Putting the VM disks in a file-set allowed us to re-stripe the file-set, replicating it on to spare hardware in order to take down our main storage system for a firmware upgrade.
The ESXi hosts didn't even flinch when we stopped the disks in the main file system! > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 31 May 2012 09:32 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS Evaluation List - Please give some > comments > > Hello Grace, > > I've cribbed out the questions you've already answered. > Though, I think these should be best directed to IBM pre-sales tech to qualify > them. > > Regards, > > Jez > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Grace Tsai > > Sent: 30 May 2012 20:16 > > To: gpfsug-discuss at gpfsug.org > > Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some > > comments > > > > Hi, > > > > We are in the process of choosing a permanent file system for our > > institution, GPFS is one of the three candidates. Could someone help > > me to give comments or answers to the requests listed in the > > following. Basically, I need your help to mark 1 or 0 in the GPFS > > column if a feature either exists or doesnt exist, respectively. > > Please also add supporting comments if a feature has additional info, > > e.g., 100PB single namespace file system supported, etc. > > I answered some of them which I have tested, or got the information > > from google or manuals. > > > > > > 9. Disk quotas based on directory. > > = 1 (per directory based on filesets which is a 'hard linked' directory to a > storage pool via placement rules.) Max filesets is 10000 in 3.5. > > > > Groupadmin-Visible Features > > --------------------------------------- > > > 5. Nesting groups within groups is permitted. > > > > 6. Groups are equal partners with users in terms of access control lists. > > GPFS supports POSIX and NFS v4 ACLS (which are not quite the same as > Windows ACLs) > > > 7. Group managers can adjust disk quotas > > > > 8. Group managers can create/delete/modify user spaces. > > > > 9. Group managers can create/delete/modify group spaces. > > .. .paraphrase... users with admin privs (root / sudoers) can adjust things. > How you organise your user & group administration is up to you. This is > external to GPFS. > > > > Sysadmin-Visible Features > > ----------------------------------- > > > > 1. Namespace is expandable and shrinkable without file system downtime. > > (My answer: 1) > > > > 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) > > via some type of filtering, without manual user intervention (Data > > life-cycle > > management) > > = 1 . You can do this with GPFS policies and THRESHOLDS. Or look at IBM's > V7000 Easy Tier. > > > 3. User can provide manual "hints" on where to place files based on > > usage requirements. > > Do you mean the user is prompted, when you write a file? If so, then no. > Though there is an API, so you could integrate that functionality if required, > and your application defers to your GPFS API program before writes. I > suggest user education is far simpler and cheaper to maintain. If you need > prompts, your workflow is inefficient. It should be transparent to the user. > > > 4. Allows resource-configurable logical relocation or actual migration > > of data without user downtime (Hardware life-cycle > > management/patching/maintenance) > > = 1 > > > 6. Product has at least two commercial companies providing support. > > =1 Many companies provide OEM GPFS support. 
Though at some point this > may be backed off to IBM if a problem requires development teams. > > > 9. Customized levels of data redundancy at the > > file/subdirectory/partition layer, based on user requirements. > > Replication. Load-balancing. > > =1 > > > 10. Management software fully spoorts command line interface (CLI) > > =1 > > > > 10. Management software supports a graphical user interface (GUI) > > =1 , if you buy IBM's SONAS. Presume that v7000 has something also. > > > 11. Must run on non-proprietary x86/x64 hardware (Note: this might > > eliminate some proprietary solutions that meet every other > > requirement.) > > =1 > > > 13. Robust and reliable: file system must recover gracefully from an > > unscheduled power outage, and not take forever for fsck. > > =1. I've been through this personally. All good. All cluster nodes can > participate in fsck. > (Actually one of our Qlogic switches spat badness to two of our storage units > which caused both units to simultaneously soft-reboot. Apparently the > Qlogic firmware couldn't handle the amount of data we transfer a day in an > internal counter. Needless to say, new firmware was required.) > > > 14. Client code must support RHEL. > > (My answer: 1) > > > > > 18. Affordable > > > > 19. Value for the money. > > Both above points are arguable. Nobody knows your budget. > That said, it's cheaper to buy a GPFS system than an Isilon system of similar > spec (I have both - and we're just about to switch off the Isilon due to > running and expansion costs). Stornext is just too much management > overhead and constant de-fragging. > > > 20. Provides native accounting information to support a storage > > service model. > > What does 'Storage service model mean?' Chargeback per GB / user? > If so, then you can write a list policy to obtain this information or use fileset > quota accounting. > > > 21. Ability to change file owner throughout file system (generalized > > ability to implement metadata changes) > > =1. You'd run a policy to do this. > > > 22. Allows discrete resource allocation in case groups want physical > > resource separation, yet still allows central management. > > Resource allocation might control bandwidth, LUNx, CPU, > > user/subdir/filesystem quotas, etc. > > = 0.5. Max bandwidth you can control. You can't set a min. CPU is > irrelevant. > > > 23. Built-in file system compression option > > No. Perhaps you could use TSM as an external storage pool and de-dupe to > VTL ? If you backend that to tape, remember it will un-dupe as it writes to > tape. > > > 24. Built-in file-level replication option > > =1 > > > 25. Built-in file system deduplication option > > =0 . I think. > > > 26. Built-in file system encryption option > > =1, if you buy IBM storage with on disk encryption. I.E. the disk is encrypted > and is unreadable if removed, but the actual file system itself is not. > > > 27. Support VM image movement among storage servers, including moving > > entire jobs (hypervisor requirement) > > That's a huge scope. Check your choice of VM requirements. GPFS is just a > file system. > > > 28. Security/authentication of local user to allow access (something > > stronger than host-based access) > > No. Unless you chkconfig the GPFS start scripts off and then have the user > authenticate to be abel to start the script which mounts GPFS. > > > 29. WAN-based file system (e.g., for disaster recover site) > > =1 > > > 31. Can perform OPTIONAL file system rebalancing when adding new > > storage. > > =1 > > > 32. 
Protection from accidental, large scale deletions > > =1 via snapshots. Though that's retrospective. No system is idiot proof. > > > 33. Ability to transfer snapshots among hosts. > > Unknown. All hosts in GPFS would see the snapshot. Transfer to a different > GPFS cluster for DR, er, not quite sure. > > > 34. Ability to promote snapshot to read/write partition > > In what context does 'promote' mean? > > > 35. Consideration given to number of metadata servers required to > > support overall service, and how that affects HA, i.e., > > must be able to support HA on a per namespace basis . (How many > > MD servers would we need to keep file service running?) > > 2 dedicated NSD servers for all namespaces is a good setup. Though, > metadata is shared between all nodes. > > > 36. Consideration given to backup and restore capabilities and > > compatible hardware/software products. Look at timeframe requirements. > > (What backup solutions does it recommend?) > > I rather like TSM. Not tried HPSS. > > > 37. Need to specify how any given file system is not POSIX-compliant > > so we understand it. Make this info available to users. > > (What are its POSIX shortcomings?) > > GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility > shortcomings. > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
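A minimal sketch of the snapshot protection mentioned in the answers above. The file system and snapshot names are made up for illustration, and the exact syntax should be checked against the mmcrsnapshot man page for the GPFS release in use:

  mmcrsnapshot gpfs1 weekly1      # take a global snapshot of file system gpfs1
  mmlssnapshot gpfs1              # list existing snapshots
  # A file deleted by accident can be copied back out of the snapshot,
  # which is exposed read-only under the .snapshots directory:
  cp /gpfs1/.snapshots/weekly1/projects/important.dat /gpfs1/projects/
  mmdelsnapshot gpfs1 weekly1     # remove the snapshot once it is no longer needed

As noted above, snapshots are purely retrospective protection; anything created or changed after the snapshot was taken is not covered by it.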
From j.buzzard at dundee.ac.uk Thu May 10 09:54:46 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 10 May 2012 09:54:46 +0100 Subject: [gpfsug-discuss] GPFS magic options for Samba In-Reply-To: <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> References: <4FAA918D.50101@dundee.ac.uk> <4FAAF674.9070809@dundee.ac.uk> <39571EA9316BE44899D59C7A640C13F530595194@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FAB8256.5010409@dundee.ac.uk> On 10/05/12 08:39, Jez Tucker wrote: > If you're on 3.4.0-13, can you confirm the operation of DMAPI Windows mounts. > Any serious issues? Yes I can confirm that it does not work. The documentation is/was all wrong. See this thread in the GPFS forums. http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=15 Basically you need to wait for 3.4.0-14 or jump to 3.5.0-1 :-) I have however noticed that the per fileset quotas seem to be fully functional on 3.4.0-13, turn them on with mmchfs test --perfileset-quota and off with mmchfs test --noperfileset-quota set a quota for user nemo on the homes fileset with mmedquota -u test:homes:nemo or if you prefer the command line to messing with an editor, mmsetquota -u nemo -h 25G /test/homes JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From crobson at ocf.co.uk Fri May 18 12:15:50 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Fri, 18 May 2012 12:15:50 +0100 Subject: [gpfsug-discuss] A date for your diaries Message-ID: Dear All, The next GPFS user group meeting will take place on Thursday 20th September. Paul Tomlinson, AWE, has kindly offered to host and the meeting will take place at Bishopswood Golf Club, Bishopswood, Bishopswood Lane, Tadley, Hampshire, RG26 4AT. Agenda to follow soon. Please contact me to register your place and to highlight any agenda items. Many thanks, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri May 18 15:57:18 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 18 May 2012 14:57:18 +0000 Subject: [gpfsug-discuss] Stupid GPFS Tricks 2012 - Call for entries Message-ID: <39571EA9316BE44899D59C7A640C13F5305997B6@WARVWEXC1.uk.deluxe-eu.com> [cid:image001.png at 01CD350C.1C9EA0D0] Hello GPFSUG peeps, Have you used GPFS to do something insanely wacky, the sheer craziness of which would blow our minds? Perhaps you've done something spectacularly stupid that turned out to be, well, just brilliant. Maybe you used the GPFS API, policies or scripts to create a hack of utter awesomeness. If so, then Stupid GPFS Tricks is for you. The rules: - It must take no longer than 10 minutes to explain your stupid trick. - You must be able to attend the next UG at AWE.
- All stupid tricks must be submitted by Aug 31st 2012. Entries should be submitted to secretary at gpfsug.org with the subject "Stupid GPFS Trick". A short description of your trick and any associated Powerpoint/OO/etc. slides is required or point us to a URL. Thanks Jez [This event idea has been shamelessly robbed from the Pixar UG. Thanks folks!] --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16602 bytes Desc: image001.png URL: From crobson at ocf.co.uk Wed May 23 08:53:02 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Wed, 23 May 2012 08:53:02 +0100 Subject: [gpfsug-discuss] Crispin Keable article Message-ID: Dear All, An interesting article featuring Crispin Keable (who has previously presented at our user group meetings) was published in The Register yesterday. Crispin talks about the latest GPFS 3.5 update. Read the full article: http://www.theregister.co.uk/2012/05/21/ibm_general_parallel_file_system_3dot5/ Claire Robson GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.raimbach at oerc.ox.ac.uk Thu May 24 11:55:42 2012 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Thu, 24 May 2012 10:55:42 +0000 Subject: [gpfsug-discuss] GPFS Question: Will stopping all tie-breaker disks break quorum semantics? Message-ID: Dear GPFS, I have a relatively simple GPFS set-up: Two manager-quorum nodes (primary and secondary configuration nodes) run the cluster with tie-breaker disk quorum semantics. The two manager nodes are SAN attached to 6 x 20TB SATA NSDs (marked as dataOnly), split in to two failure groups so we could create a file system that supported replication. Three of these NSDs are marked as the tie-breaker disks. The metadata is stored on SAS disks located in both manager-quorum nodes (marked as metaDataOnly) and replicated between them. The disk controller subsystem that runs the SATA NSDs requires a reboot, BUT I do not want to shut down GPFS as some critical services are dependent on a small (~12TB) portion of the data. I have added two additional NSD servers to the cluster using some old equipment. These are SAN attached to 10 x 2TB LUNs which is enough to keep the critical data on. I am removing one of the SATA 20TB LUNs from the file system 'system' storage pool on the manager nodes and adding it to another storage pool 'evac-pool' which contains the new 10 x 2TB NSDs. Using the policy engine, I want to migrate the file set which contains the critical data to this new storage pool and enable replication of the file set (with the single 20TB NSD in failure group 1 and the 10 x 2TB NSDs in failure group 2). I am expecting to then be able to suspend then stop the 20TB NSD and maintain access to the critical data. This plan is progressing nicely, but I'm not yet at the stage where I can stop the 20TB NSD (I'm waiting for a re-stripe to finish for something else). Does this plan sound plausible so far? I've read the relevant documentation and will run an experiment with stopping the single 20TB NSDs first. However, I thought about a potential problem - the quorum semantics in operation. 
When I switch off all six 20TB NSDs, the cluster manager-quorum nodes to which they are attached will remain online (to serve the metadata NSDs for the surviving data disks), but all the tiebreaker disks are on the six 20TB NSDs. My question is, will removing access to the tie-breaker disks affect GPFS quorum, or are they only referenced when quorum is lost? I'm running GPFS 3.4.7. Thanks, Luke. -- Luke Raimbach IT Manager Oxford e-Research Centre 7 Keble Road, Oxford, OX1 3QG +44(0)1865 610639 From ghemingtsai at gmail.com Sat May 26 01:10:04 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Fri, 25 May 2012 17:10:04 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Message-ID: Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon May 28 16:55:54 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 28 May 2012 15:55:54 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059CC99@WARVWEXC1.uk.deluxe-eu.com> Hello Grace This is most likely because the file system that you're trying to manage via Space Management isn't configured as such. I.E. the -z flag in mmlsfs http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html Also: This IBM red book should be a good starting point and includes the information you need should you with to setup GPFS drives TSM migration (using THRESHOLD). http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 Suggest you read the red book first and decide which method you'd like. Regards, Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... 
/gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Tue May 29 18:39:24 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Tue, 29 May 2012 10:39:24 -0700 Subject: [gpfsug-discuss] gpfsug-discuss Digest, Vol 5, Issue 6 In-Reply-To: References: Message-ID: Hi, Jez, I tried what you suggested with the command: mmchfs -z yes /dev/fs1 and the list output of "mmlsfs" is as follows: -sh-4.1# ./mmlsfs /dev/fs1 flag value description ------------------- ------------------------ ----------------------------------- -f 32768 Minimum fragment size in bytes -i 512 Inode size in bytes -I 32768 Indirect block size in bytes -m 1 Default number of metadata replicas -M 2 Maximum number of metadata replicas -r 1 Default number of data replicas -R 2 Maximum number of data replicas -j cluster Block allocation type -D nfs4 File locking semantics in effect -k all ACL semantics in effect -n 10 Estimated number of nodes that will mount file system -B 1048576 Block size -Q none Quotas enforced none Default quotas enabled --filesetdf no Fileset df enabled? -V 12.10 (3.4.0.7) File system version --create-time Thu Feb 23 16:13:28 2012 File system creation time -u yes Support for large LUNs? -z yes Is DMAPI enabled? -L 4194304 Logfile size -E yes Exact mtime mount option -S no Suppress atime mount option -K whenpossible Strict replica allocation option --fastea yes Fast external attributes enabled? --inode-limit 571392 Maximum number of inodes -P system Disk storage pools in file system -d scratch_DL1;scratch_MDL1 Disks in file system -A no Automatic mount option -o none Additional mount options -T /gpfs_directory1/ Default mount point --mount-priority 0 Mount priority But I still got the error message in dsmsmj from "manage" on /gpfs_directory1 "A conflicting Space Management is already running in the /gpfs_directory1 file system. Please wait until the Space Management process is ready and try" Could you help please? Could you give more suggestions please? Thanks. Grace On Tue, May 29, 2012 at 4:00 AM, wrote: > Send gpfsug-discuss mailing list submissions to > gpfsug-discuss at gpfsug.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > or, via email, send a message with subject or body 'help' to > gpfsug-discuss-request at gpfsug.org > > You can reach the person managing the list at > gpfsug-discuss-owner at gpfsug.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of gpfsug-discuss digest..." > > > Today's Topics: > > 1. 
Re: Use HSM to backup GPFS - error message: ANS9085E (Jez Tucker) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 28 May 2012 15:55:54 +0000 > From: Jez Tucker > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Use HSM to backup GPFS - error message: > ANS9085E > Message-ID: > < > 39571EA9316BE44899D59C7A640C13F53059CC99 at WARVWEXC1.uk.deluxe-eu.com> > Content-Type: text/plain; charset="windows-1252" > > Hello Grace > > This is most likely because the file system that you're trying to manage > via Space Management isn't configured as such. > > I.E. the -z flag in mmlsfs > > > http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/index.jsp?topic=%2Fcom.ibm.itsm.hsmul.doc%2Ft_hsmul_managing.html > > Also: > > This IBM red book should be a good starting point and includes the > information you need should you with to setup GPFS drives TSM migration > (using THRESHOLD). > > http://www-304.ibm.com/support/docview.wss?uid=swg27018848&aid=1 > > Suggest you read the red book first and decide which method you'd like. > > Regards, > > Jez > > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > > From: gpfsug-discuss-bounces at gpfsug.org [mailto: > gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai > Sent: 26 May 2012 01:10 > To: gpfsug-discuss at gpfsug.org > Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E > > Hi, > > I have a GPFS system verson 3.4, which includes the following two GPFS > file systems with the directories: > > /gpfs_directory1 > /gpfs_directory2 > > I like to use HSM to backup these GPFS files to the tapes in our TSM > server (RHAT 6.2, TSM 6.3). > I run HSM GUI on this GPFS server, the list of the file systems on this > GPFS server is as follows: > > File System State Size(KB) Free(KB) ... > ------------------ > / Not Manageable > /boot Not Manageable > ... > /gpfs_directory1 Not Managed > /gpfs_directory2 Not Managed > > > I click "gpfs_directory1", and click "Manage" > => > I got error: > """ > A conflicting Space Management process is already running in the > /gpfs_directory1 file system. > Please wait until the Space management process is ready and try again. > """ > > The dsmerror.log shows the message: > "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space > management" > > Is there anything on GPFS or HSM or TSM server that I didnt configure > correctly? Please help. Thanks. > > Grace > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://gpfsug.org/pipermail/gpfsug-discuss/attachments/20120528/b97e39e0/attachment-0001.html > > > > ------------------------------ > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > End of gpfsug-discuss Digest, Vol 5, Issue 6 > ******************************************** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:28:01 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:28:01 +0000 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) Message-ID: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Hello all I've been having a pootle around ye olde Internet in a coffee break and noticed that RDMA over Ethernet exists. 
http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html Has anyone had any experience of using this? (even outside GPFS) I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / iWARP support. It suddenly occurred to me that I have 10Gb Brocade VDX switches with DCB & PFC and making things go faster is great. Perhaps the HPC crowd do this, but only via IB? Thoughts? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 08:56:42 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 07:56:42 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059D745@WARVWEXC1.uk.deluxe-eu.com> On the command line: What's the output of dsmmigfs query -Detail and ps -ef | grep dsm From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 26 May 2012 01:10 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Use HSM to backup GPFS - error message: ANS9085E Hi, I have a GPFS system verson 3.4, which includes the following two GPFS file systems with the directories: /gpfs_directory1 /gpfs_directory2 I like to use HSM to backup these GPFS files to the tapes in our TSM server (RHAT 6.2, TSM 6.3). I run HSM GUI on this GPFS server, the list of the file systems on this GPFS server is as follows: File System State Size(KB) Free(KB) ... ------------------ / Not Manageable /boot Not Manageable ... /gpfs_directory1 Not Managed /gpfs_directory2 Not Managed I click "gpfs_directory1", and click "Manage" => I got error: """ A conflicting Space Management process is already running in the /gpfs_directory1 file system. Please wait until the Space management process is ready and try again. """ The dsmerror.log shows the message: "ANS9085E hsmapi: file system /gpfs_directory1 is not managed by space management" Is there anything on GPFS or HSM or TSM server that I didnt configure correctly? Please help. Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at arif-ali.co.uk Wed May 30 14:18:17 2012 From: mail at arif-ali.co.uk (Arif Ali) Date: Wed, 30 May 2012 14:18:17 +0100 Subject: [gpfsug-discuss] GPFS + RDMA + Ethernet (RoCE/iWARP) In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F53059D725@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <4FC61E19.8060909@arif-ali.co.uk> On 30/05/12 08:28, Jez Tucker wrote: > > Hello all > > I've been having a pootle around ye olde Internet in a coffee break > and noticed that RDMA over Ethernet exists. > > http://en.wikipedia.org/wiki/RDMA_over_Converged_Ethernet > > http://www.hpcwire.com/hpcwire/2010-04-22/roce_an_ethernet-infiniband_love_story.html > > Has anyone had any experience of using this? (even outside GPFS) > > I know GPFS supports RDMA with Infiniband, but unsure as to RoCE / > iWARP support. > > It suddenly occurred to me that I have 10Gb Brocade VDX switches with > DCB & PFC and making things go faster is great. > > Perhaps the HPC crowd do this, but only via IB? > I did have a look at this about a year ago, and thought it would be great. But never thought people would be interested. 
and didn't find anything within the GPFS docs or secret configs that indicated that this is supported In most of our setups we do tend to stick with verbs-rdma, and that is where most of our customer's are working with. It would be very interesting to see if it was ever supported, and to see what kind of performance improvement we would get by taking the tcp layer away maybe one of the devs could shed some light on this. -- regards, Arif -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed May 30 15:36:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 30 May 2012 07:36:05 -0700 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows Message-ID: This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed May 30 15:58:12 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 14:58:12 +0000 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E2B2@WARVWEXC1.uk.deluxe-eu.com> May I be the first to stick both hands in the air and run round the room screaming WOOOT! Thanks to the dev team for that one. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 30 May 2012 15:36 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Mount DMAPI File system on Windows This came up in the user group meeting so I thought I would send this to the group. Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file systems on GPFS Windows nodes. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.buzzard at dundee.ac.uk Wed May 30 16:55:22 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Wed, 30 May 2012 16:55:22 +0100 Subject: [gpfsug-discuss] Mount DMAPI File system on Windows In-Reply-To: References: Message-ID: <4FC642EA.8050601@dundee.ac.uk> Scott Fadden wrote: > This came up in the user group meeting so I thought I would send this to > the group. > > Starting with GPFS 3.4.0.13 you can now mount DMAPI enabled file > systems on GPFS Windows nodes. > Are we absolutely 100% sure on that? I ask because the release notes have contradictory information on this and when I asked in the GPFS forum for clarification the reply was it would be starting with 3.4.0-14 http://www.ibm.com/developerworks/forums/thread.jspa?threadID=426107&tstart=30 JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From ghemingtsai at gmail.com Wed May 30 17:06:15 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:06:15 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". 
On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghemingtsai at gmail.com Wed May 30 17:09:57 2012 From: ghemingtsai at gmail.com (Grace Tsai) Date: Wed, 30 May 2012 09:09:57 -0700 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Message-ID: Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/ bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Wed May 30 17:14:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 30 May 2012 16:14:17 +0000 Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E In-Reply-To: References: Message-ID: <39571EA9316BE44899D59C7A640C13F53059E38F@WARVWEXC1.uk.deluxe-eu.com> So. My hunch from looking at my system here is that you haven't actually told dsm that the filesystem is to be space managed. You do that here: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.hsmul.doc/t_add_spc_mgt.html Then re-run dsmmigfs query -Detail and hopefully you should see something similar to this: [root at tsm01 ~]# dsmmigfs query -Detail IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 2, Level 4.1 Client date/time: 30-05-2012 17:13:00 (c) Copyright by IBM Corporation and other(s) 1990, 2012. All Rights Reserved. The local node has Node ID: 3 The failover environment is deactivated on the local node. File System Name: /mnt/gpfs High Threshold: 100 Low Threshold: 80 Premig Percentage: 20 Quota: 999999999999999 Stub Size: 0 Server Name: TSM01 Max Candidates: 100 Max Files: 0 Min Partial Rec Size: 0 Min Stream File Size: 0 MinMigFileSize: 0 Preferred Node: tsm01 Node ID: 3 Owner Node: tsm01 Node ID: 3 Source Nodes: tsm01 Then see if your HSM GUI works properly. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Grace Tsai Sent: 30 May 2012 17:06 To: gpfsug-discuss at gpfsug.org; Jez.Tucker at rushes.co.org Subject: [gpfsug-discuss] Use HSM to backup GPFS -error message: ANS9085E Hi, Jez, Thanks to reply my questions. Here is the output of "dsmmigfs query -Detail" and "ps -ef | grep dsm". On GPFS server, dsmmigfs query -Detail => IBM Tivoli Storage Manager Command Line Space Management Client Interface Client Version 6, Release 3, Level 0.0 Client date/time: 05/30/12 08:51:55 (c) Copyright by IBM Corporation and other(s) 1990, 2011. All Rights Reserved. The local node has Node ID: 1 The failover environment is active on the local node. The recall distribution is enabled. On GPFS server, ps -ef | grep dsm => root 6157 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrootd root 6158 1 0 May29 ? 00:00:03 /opt/tivoli/tsm/client/hsm/bin/dsmmonitord root 6159 1 0 May29 ? 00:00:14 /opt/tivoli/tsm/client/hsm/bin/dsmscoutd root 6163 1 0 May29 ? 00:00:37 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6165 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 6626 4331 0 08:52 pts/0 00:00:00 grep dsm root 9034 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 9035 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 14278 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/ba/bin/dsmcad root 22236 1 0 May29 ? 00:00:35 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 22237 1 0 May29 ? 00:00:00 /opt/tivoli/tsm/client/hsm/bin/dsmrecalld root 24080 4248 0 May29 pts/1 00:00:00 /bin/ksh /usr/bin/dsmsmj root 24083 24080 0 May29 pts/1 00:00:39 java -DDSM_LANG= -DDSM_LOG=/ -DDSM_DIR= -DDSM_ROOT=/opt/tivoli/tsm/client/hsm/bin/../../ba/bin -jar lib/dsmsm.jar Thanks. Grace -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
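Following on from the space management thread above, a rough sketch of the step being described, i.e. adding a DMAPI-enabled file system to TSM space management. The threshold values are arbitrary and the option spellings are from memory, so verify them against the dsmmigfs command reference for your HSM client level before use:

  mmlsfs fs1 -z                                     # confirm DMAPI is enabled (-z yes)
  dsmmigfs add -hthreshold=90 -lthreshold=80 /gpfs_directory1
  dsmmigfs query -Detail /gpfs_directory1           # should now list the file system with its thresholds

Once the file system appears in dsmmigfs query -Detail, the dsmsmj GUI should report it as Managed rather than Not Managed.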
12. Software provides tools to measure performance and monitor problems. (My answer: 1)

13. Robust and reliable: file system must recover gracefully from an unscheduled power outage, and not take forever for fsck.

14. Client code must support RHEL. (My answer: 1)

15. Client code must support RHEL compatible OS. (My answer: 1)

16. Client code must support Linux. (My answer: 1)

17. Client code must support Windows. (My answer: 1)

18. Affordable

19. Value for the money.

20. Provides native accounting information to support a storage service model.

21. Ability to change file owner throughout file system (generalized ability to implement metadata changes)

22. Allows discrete resource allocation in case groups want physical resource separation, yet still allows central management. Resource allocation might control bandwidth, LUNs, CPU, user/subdir/filesystem quotas, etc.

23. Built-in file system compression option

24. Built-in file-level replication option

25. Built-in file system deduplication option

26. Built-in file system encryption option

27. Support VM image movement among storage servers, including moving entire jobs (hypervisor requirement)

28. Security/authentication of local user to allow access (something stronger than host-based access)

29. WAN-based file system (e.g., for disaster recovery site)

30. Must be able to access filesystem via NFSv3 (My answer: 1)

31. Can perform OPTIONAL file system rebalancing when adding new storage.

32. Protection from accidental, large scale deletions

33. Ability to transfer snapshots among hosts.

34. Ability to promote snapshot to read/write partition

35. Consideration given to number of metadata servers required to support overall service, and how that affects HA, i.e., must be able to support HA on a per namespace basis. (How many MD servers would we need to keep file service running?)

36. Consideration given to backup and restore capabilities and compatible hardware/software products. Look at timeframe requirements. (What backup solutions does it recommend?)

37. Need to specify how any given file system is not POSIX-compliant so we understand it. Make this info available to users. (What are its POSIX shortcomings?)

From j.buzzard at dundee.ac.uk Thu May 31 00:39:32 2012
From: j.buzzard at dundee.ac.uk (Jonathan Buzzard)
Date: Thu, 31 May 2012 00:39:32 +0100
Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments
In-Reply-To: <4FC671EB.7060104@slac.stanford.edu>
References: <4FC671EB.7060104@slac.stanford.edu>
Message-ID: <4FC6AFB4.6040300@dundee.ac.uk>

Grace Tsai wrote:

I am not sure who dreamed up this list, but I will use two of the points to illustrate why it is bizarre.

[SNIP]

> 5. Nesting groups within groups is permitted.
>

This has absolutely nothing whatsoever to do with any file system under Linux that I am aware of. Traditionally under Linux you don't have nested groups; however, if you are running against Active Directory with winbind you can. Either way, this is independent of any file system you are running.

[SNIP]

> 35. Consideration given to number of metadata servers required to
> support overall service, and how that affects HA, i.e.,
> must be able to support HA on a per namespace basis. (How many MD
> servers would we need to keep file service running?)
>

This for example would suggest that whoever drew up the list has a particular idea about how clustered file systems work that simply does not apply to GPFS; there are no metadata servers in GPFS.

There are lots of other points that just don't make sense to me as a storage administrator.

JAB.

--
Jonathan A. Buzzard  Tel: +441382-386998
Storage Administrator, College of Life Sciences
University of Dundee, DD1 5EH

From Jez.Tucker at rushes.co.uk Thu May 31 09:32:24 2012
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Thu, 31 May 2012 08:32:24 +0000
Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments
In-Reply-To: <4FC671EB.7060104@slac.stanford.edu>
References: <4FC671EB.7060104@slac.stanford.edu>
Message-ID: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com>

Hello Grace,

I've cribbed out the questions you've already answered. Though I think these would be best directed to IBM pre-sales tech to qualify them.

Regards,

Jez

> -----Original Message-----
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-
> bounces at gpfsug.org] On Behalf Of Grace Tsai
> Sent: 30 May 2012 20:16
> To: gpfsug-discuss at gpfsug.org
> Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments
>
> Hi,
>
> We are in the process of choosing a permanent file system for our
> institution, and GPFS is one of the three candidates. Could someone help me
> by giving comments or answers to the requests listed in the following?
> Basically, I need your help to mark 1 or 0 in the GPFS column if a feature
> either exists or doesn't exist, respectively. Please also add supporting
> comments if a feature has additional info, e.g., 100PB single namespace
> file system supported, etc.
> I have answered some of them myself, from what I have tested or from Google
> and the manuals.
>
>
> 9. Disk quotas based on directory.

= 1 (per directory, based on filesets - a fileset is a directory junction that can be tied to a storage pool via placement rules.) Max filesets is 10,000 in 3.5.
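Roughly, and only as a sketch - the file system (gpfs0), fileset (projects) and pool (nearline) names here are invented for the example - the moving parts are a fileset, a placement rule and a fileset quota:

mmcrfileset gpfs0 projects
mmlinkfileset gpfs0 projects -J /gpfs0/projects

then a one-line placement policy, installed with mmchpolicy gpfs0 placement.pol:

RULE 'projplace' SET POOL 'nearline' FOR FILESET ('projects')

and, with quotas enabled on the file system (mmchfs gpfs0 -Q yes), a block/inode limit for the fileset set via:

mmedquota -j gpfs0:projects

The placement rule only matters if you want the directory tied to a particular pool; the fileset quota works either way.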
> Groupadmin-Visible Features
> ---------------------------------------
>
> 5. Nesting groups within groups is permitted.
>
> 6. Groups are equal partners with users in terms of access control lists.

GPFS supports POSIX and NFSv4 ACLs (which are not quite the same as Windows ACLs).

> 7. Group managers can adjust disk quotas
>
> 8. Group managers can create/delete/modify user spaces.
>
> 9. Group managers can create/delete/modify group spaces.

To paraphrase: users with admin privileges (root / sudoers) can adjust these things. How you organise your user and group administration is up to you; it is external to GPFS.

> Sysadmin-Visible Features
> -----------------------------------
>
> 1. Namespace is expandable and shrinkable without file system downtime.
> (My answer: 1)
>
> 2. Supports storage tiering (e.g., SSD, SAS, SATA, tape, grid, cloud) via some
> type of filtering, without manual user intervention (Data life-cycle
> management)

= 1. You can do this with GPFS policies and THRESHOLD rules, or look at IBM's V7000 Easy Tier.
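For example - a rough sketch only, with made-up pool names ('system' as the fast pool, 'nearline' as the slow one) - a migration rule like this drains the fast pool down to 60% occupancy once it hits 80% full, least-recently-accessed files first:

RULE 'tierdown' MIGRATE FROM POOL 'system'
     THRESHOLD(80,60)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
     TO POOL 'nearline'

Install it with mmchpolicy gpfs0 tier.pol, then either run mmapplypolicy gpfs0 from cron or hang it off the lowDiskSpace event with mmaddcallback so it fires automatically.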
> 3. User can provide manual "hints" on where to place files based on usage
> requirements.

Do you mean the user is prompted when they write a file? If so, then no. There is an API, though, so you could integrate that functionality if required and have your application defer to your GPFS API program before writes. I suggest user education is far simpler and cheaper to maintain. If you need prompts, your workflow is inefficient; it should be transparent to the user.

> 4. Allows resource-configurable logical relocation or actual migration of data
> without user downtime (Hardware life-cycle
> management/patching/maintenance)

= 1

> 6. Product has at least two commercial companies providing support.

= 1. Many companies provide OEM GPFS support, though at some point this may be backed off to IBM if a problem requires the development teams.

> 9. Customized levels of data redundancy at the file/subdirectory/partition
> layer, based on user requirements.
> Replication. Load-balancing.

= 1

> 10. Management software fully supports command line interface (CLI)

= 1

> 10. Management software supports a graphical user interface (GUI)

= 1, if you buy IBM's SONAS. Presumably the V7000 has something also.

> 11. Must run on non-proprietary x86/x64 hardware (Note: this might
> eliminate some proprietary solutions that meet every other requirement.)

= 1

> 13. Robust and reliable: file system must recover gracefully from an
> unscheduled power outage, and not take forever for fsck.

= 1. I've been through this personally. All good. All cluster nodes can participate in fsck. (Actually, one of our Qlogic switches spat badness to two of our storage units, which caused both units to simultaneously soft-reboot. Apparently the Qlogic firmware couldn't handle the amount of data we transfer a day in an internal counter. Needless to say, new firmware was required.)

> 14. Client code must support RHEL.
> (My answer: 1)
>
> 18. Affordable
>
> 19. Value for the money.

Both of the above are arguable; nobody knows your budget. That said, it's cheaper to buy a GPFS system than an Isilon system of similar spec (I have both, and we're just about to switch off the Isilon due to running and expansion costs). StorNext is just too much management overhead and constant de-fragging.

> 20. Provides native accounting information to support a storage service
> model.

What does 'storage service model' mean? Chargeback per GB / user? If so, you can write a LIST policy to obtain this information or use fileset quota accounting.

> 21. Ability to change file owner throughout file system (generalized ability
> to implement metadata changes)

= 1. You'd run a policy to do this.

> 22. Allows discrete resource allocation in case groups want physical
> resource separation, yet still allows central management.
> Resource allocation might control bandwidth, LUNs, CPU,
> user/subdir/filesystem quotas, etc.

= 0.5. Maximum bandwidth you can control; you can't set a minimum. CPU is irrelevant.

> 23. Built-in file system compression option

No. Perhaps you could use TSM as an external storage pool and de-dupe to VTL? If you back-end that to tape, remember it will un-dupe as it writes to tape.

> 24. Built-in file-level replication option

= 1

> 25. Built-in file system deduplication option

= 0, I think.

> 26. Built-in file system encryption option

= 1, if you buy IBM storage with on-disk encryption, i.e. the disk is encrypted and is unreadable if removed, but the actual file system itself is not.

> 27. Support VM image movement among storage servers, including moving
> entire jobs (hypervisor requirement)

That's a huge scope. Check your choice of VM requirements. GPFS is just a file system.

> 28. Security/authentication of local user to allow access (something stronger
> than host-based access)

No. Unless you chkconfig the GPFS start scripts off and then have the user authenticate to be able to start the script which mounts GPFS.

> 29. WAN-based file system (e.g., for disaster recovery site)

= 1

> 31. Can perform OPTIONAL file system rebalancing when adding new
> storage.

= 1

> 32. Protection from accidental, large scale deletions

= 1, via snapshots, though that's retrospective. No system is idiot-proof.

> 33. Ability to transfer snapshots among hosts.

Unknown. All hosts in GPFS would see the snapshot. Transfer to a different GPFS cluster for DR, er, not quite sure.

> 34. Ability to promote snapshot to read/write partition

What does 'promote' mean in this context?

> 35. Consideration given to number of metadata servers required to support
> overall service, and how that affects HA, i.e.,
> must be able to support HA on a per namespace basis. (How many MD
> servers would we need to keep file service running?)

Two dedicated NSD servers for all namespaces is a good setup, though metadata is shared between all nodes.

> 36. Consideration given to backup and restore capabilities and compatible
> hardware/software products. Look at timeframe requirements.
> (What backup solutions does it recommend?)

I rather like TSM. Not tried HPSS.

> 37. Need to specify how any given file system is not POSIX-compliant so we
> understand it. Make this info available to users.
> (What are its POSIX shortcomings?)

GPFS is POSIX compliant. I'm personally unaware of any POSIX compatibility shortcomings.

> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From luke.raimbach at oerc.ox.ac.uk Thu May 31 12:41:49 2012
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Thu, 31 May 2012 11:41:49 +0000
Subject: [gpfsug-discuss] GPFS Evaluation List - Please give some comments
In-Reply-To: <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com>
References: <4FC671EB.7060104@slac.stanford.edu> <39571EA9316BE44899D59C7A640C13F53059E777@WARVWEXC1.uk.deluxe-eu.com>
Message-ID:

Hi Jez,

>> 27. Support VM image movement among storage servers, including moving
>> entire jobs (hypervisor requirement)

> That's a huge scope. Check your choice of VM requirements. GPFS is just a file system.

This works very nicely with VMware - we run our datastores from the cNFS exports of the file system. Putting the VM disks in a fileset allowed us to re-stripe the fileset, replicating it onto spare hardware in order to take down our main storage system for a firmware upgrade. The ESXi hosts didn't even flinch when we stopped the disks in the main file system!
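For anyone wanting to try something similar, one way of doing it - a sketch only, with invented device, path and NSD names, and assuming the spare hardware is presented as NSDs in a separate failure group and the file system allows two data replicas - is to raise the data replication on the files in the fileset and then stop the original disks once the second copy exists:

# request a second data copy for each VM image in the fileset
find /gpfs0/vmware -type f -name '*.vmdk' -exec mmchattr -R 2 {} \;
mmlsattr /gpfs0/vmware/guest01.vmdk    # check the replication factor took effect

# once replicated, the disks behind the original copy can be stopped
mmchdisk gpfs0 stop -d "nsd_array1_01;nsd_array1_02"

The point is only the mmchattr / mmchdisk combination; the exact procedure will depend on how the disks and failure groups are laid out.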