From crobson at ocf.co.uk Tue Mar 6 17:23:24 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Tue, 6 Mar 2012 17:23:24 +0000 Subject: [gpfsug-discuss] GPFS UG Agenda Message-ID: Dear All, Please find attached next Wednesday's agenda. The meeting is taking place in Walton Room A, The Cockcroft Institute, Daresbury Laboratory, Warrington. Directions can be found http://www.cockcroft.ac.uk/pages/location.htm If you are planning on driving to the meeting, please email me your vehicle registration number as security have requested the details so that you can park on-site. Please send this to me no later than Tuesday 13th at noon. Many thanks and I look forward to seeing you next week for some interesting presentations and discussions. Claire Robson GPFS UG Secretary OCF plc Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Agenda GPFS User Group Meeting March 2012.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 74732 bytes Desc: Agenda GPFS User Group Meeting March 2012.docx URL: From viccornell at gmail.com Thu Mar 8 16:44:22 2012 From: viccornell at gmail.com (Vic Cornell) Date: Thu, 8 Mar 2012 16:44:22 +0000 Subject: [gpfsug-discuss] GPFS UG Agenda In-Reply-To: References: Message-ID: <2F14515A-B6FE-4A9D-913A-BD4B411521BC@gmail.com> Hi Claire, I will be attending the group. I will be flying up so I don't have a car reg for you. Regards, Vic On 6 Mar 2012, at 17:23, Claire Robson wrote: > Dear All, > > Please find attached next Wednesday?s agenda. > > The meeting is taking place in Walton Room A, The Cockcroft Institute, Daresbury Laboratory, Warrington. Directions can be found http://www.cockcroft.ac.uk/pages/location.htm > > If you are planning on driving to the meeting, please email me your vehicle registration number as security have requested the details so that you can park on-site. Please send this to me no later than Tuesday 13th at noon. > > Many thanks and I look forward to seeing you next week for some interesting presentations and discussions. > > Claire Robson > GPFS UG Secretary > > OCF plc > Tel: 0114 257 2200 > Mob: 07508 033896 > Fax: 0114 257 0022 > > OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG > > This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Fri Mar 16 12:11:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 12:11:35 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> Hello everyone, Firstly, many thanks to those who attended on Wednesday. I hope UG #5 proved helpful to all. Please let me know your feedback, positive critique would be most useful. Things for your bookmarks: A quick nod to Martin Glassborow aka 'Storagebod' for representing the Media Broadcast sector. Blog: http://www.storagebod.com Colin Morey volunteered to advise the committee on behalf of the HPC sector. Many thanks. If any one feels that they could volunteer their ear to the committee now and again for their own sector that would be very helpful [Pharmaceuticals, Aerospace, Oil & Gas, Formula One, etc..) The Git repository for the GPFS UG is here: https://github.com/gpfsug/gpfsug-tools If you want to commit something, drop me a quick email and I'll give you write access. (Mine will be going up soon once I've ironed out a the last [I hope] 3-4 bugs..) If there's any outstanding queries that you feel would be helpful to the group, ping a quick email to the mailing list. I'll collate these and get a response from IBM for you. I had a long chat with Boaz from ScaleIO on the way back home. vSAN looks very, very, interesting. We're going to dip our toes in and use it to underpin our ESX cluster. No brainer. Lastly. I'm afraid I'm now out of IBM Linux fleeces... Cheers Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.buzzard at dundee.ac.uk Fri Mar 16 12:55:25 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Fri, 16 Mar 2012 12:55:25 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F63383D.4020000@dundee.ac.uk> Jez Tucker wrote: [SNIP] > > The Git repository for the GPFS UG is here: > > https://github.com/gpfsug/gpfsug-tools > > If you want to commit something, drop me a quick email and I?ll give you > write access. > Write access for my mmdfree command. I also have an mmattrib command, that allows you to set the DOS style file attributes from Linux. That needs a bit of tidying up. JAB. -- Jonathan A. 
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Fri Mar 16 15:21:31 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 15:21:31 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F63383D.4020000@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> What's your github user id ? If you don't have one go here: https://github.com/signup/free > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 16 March 2012 12:55 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > Jez Tucker wrote: > > [SNIP] > > > > > The Git repository for the GPFS UG is here: > > > > https://github.com/gpfsug/gpfsug-tools > > > > If you want to commit something, drop me a quick email and I'll give > > you write access. > > > > Write access for my mmdfree command. > > I also have an mmattrib command, that allows you to set the DOS style file > attributes from Linux. That needs a bit of tidying up. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Mon Mar 19 12:14:23 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:14:23 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda Message-ID: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Mon Mar 19 12:21:33 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 19 Mar 2012 12:21:33 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: > Hello all > > I feel I should ask, is there anything that anybody thinks we should all see at the next UG? > > Tell us and we?ll see if we can sort it out. > > Jez > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 12:31:03 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:31:03 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5018040@WARVWEXC2.uk.deluxe-eu.com> "...next formal meeting will take place in September/October time and will be kindly hosted by AWE in Reading." From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Vic Cornell Sent: 19 March 2012 12:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFSUG #6 - agenda Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. 
If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 23:36:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 23:36:17 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Message-ID: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. It has me wishing GPFS had a COPY FROM support as well as a MIGRATE FROM function for policies. Surely that would be possible...? Ways I can think of are: - Multiple 'scripted intelligent' rsync threads - Creating a policy to generate a file list to pass N batched files to N nodes to exec (again rsync?) - Barry Evans suggested via AFM. Though out file system needs to be upgraded before we could try this. Rsync handles UNICODE names well. tar, though faster for the first pass does not. Any ideas? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... 
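Of the options listed in the message above, the policy-driven one can be sketched roughly as follows. This is only an illustration of the mechanism: the file system path, pool name, batch size and the copy_batch.sh helper are invented for the example, and the exact record format that mmapplypolicy writes into the file lists it hands to an EXTERNAL LIST script should be checked against the Advanced Administration Guide for the GPFS release in use.

# copy.pol -- illustrative policy that hands batches of files to an external script
RULE EXTERNAL LIST 'tocopy' EXEC '/usr/local/bin/copy_batch.sh'
RULE 'pickall' LIST 'tocopy' FROM POOL 'datapool'

# invocation: scan across several nodes, at most 5000 files per script call
# (the -B and -m values are examples, not recommendations)
mmapplypolicy /mnt/gpfs/srcfldr -P copy.pol -N node1,node2,node3 -B 5000 -m 2

# /usr/local/bin/copy_batch.sh -- invoked as "copy_batch.sh <operation> <filelist>"
OP="$1"
LIST="$2"
if [ "$OP" = "LIST" ]; then
    # each file-list record ends in " -- <pathname>"; strip the prefix and copy
    # each file, preserving the directory structure under the destination
    sed 's/^.* -- //' "$LIST" | while IFS= read -r f; do
        cp --parents -pu "$f" /mnt/gpfs/destinationfldr/
    done
fi
exit 0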
URL: From Jez.Tucker at rushes.co.uk Tue Mar 20 00:47:14 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 20 Mar 2012 00:47:14 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> In answer to my own question ... http://www.gnu.org/software/parallel http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync Or http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). One for the bookmarks... hopefully you'll find it useful. Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 19 March 2012 23:36 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. r to use. Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Tue Mar 20 09:08:55 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 20 Mar 2012 09:08:55 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F684927.1060903@ed.ac.uk> We don't do this often, so when we do we tend to do a "noddy" parallelisation of rsync - manually divvy the folders up and spawn multiple threads. We might have a requirement soon(ish) to do this on a much larger scale though, with virtually no interruption to service - so I'm very keen to see what how the AFM solution looks, since this should allow us to present a continual and single view into the filesystem, whilst migrating it all in the background to the new filesystem, with just a brief wobble whilst we flip between the old and new views. On 20/03/12 00:47, Jez Tucker wrote: > In answer to my own question ? > > http://www.gnu.org/software/parallel > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync > > Or > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > One for the bookmarks? 
hopefully you?ll find it useful. > > Jez > > *From:*gpfsug-discuss-bounces at gpfsug.org > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > *Sent:* 19 March 2012 23:36 > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Hello > > Just wondering how other people go about copying loads of files in a > many, many deep directory path from one file system to another. Assume > filenames are full UNICODE and can contain almost any character. > > r to use. > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be > subject to legal privilege. If you are not the intended recipient, you > must not use, copy, distribute or disclose the e-mail or any part of its > contents or take any action in reliance on it. If you have received this > e-mail in error, please e-mail the sender by replying to this message. > All reasonable precautions have been taken to ensure no viruses are > present in this e-mail. Rushes Postproduction Limited cannot accept > responsibility for loss or damage arising from the use of this e-mail or > attachments and recommend that you subject these to your virus checking > procedures prior to use. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From Jez.Tucker at rushes.co.uk Wed Mar 21 16:37:02 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:37:02 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) Message-ID: <39571EA9316BE44899D59C7A640C13F501A9B5@WARVWEXC2.uk.deluxe-eu.com> --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 16:47:26 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:47:26 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Message-ID: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Is anyone using this configuration? 
If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed Mar 21 18:00:01 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 21 Mar 2012 11:00:01 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 18:28:34 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 18:28:34 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... 
try again In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Yup. We're running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent 5.4 x64 At the moment, we're running migration policies 'auto-manually' via a script which checks if it needs to be run as the THRESHOLDs are not working. We've noticed the behaviour/stability of thresholds change each release from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were told DMAPI for Windows was available [but undocumented], alas not. I had a previous PMR with support who told me to set: enableLowSpaceEvents=no -z=yes on our filesystem(s) Our tsm server has the correct callback setup: [root at tsm01 ~]# mmlscallback DISKSPACE command = /usr/lpp/mmfs/bin/mmstartpolicy event = lowDiskSpace,noDiskSpace node = tsm01.rushesfx.co.uk parms = %eventName %fsName N.B. I set the node just to be tsm01 as other nodes do not have HSM installed, hence if the callback occurred on those nodes, they'd run mmstartpolicy which would run dsmmigrate which is not installed on those nodes. tsm01 is currently setup as a manager-gateway node (very good for archiving up Isilons over NFS...) mmlscluster 3 tsm01.rushesfx.co.uk 10.100.106.50 tsm01.rushesfx.co.uk manager-gateway >From my testing: I can fill a test file system and receive the noDiskSpace callback, but not the lowDiskSpace. This is probably related to the enableLowSpaceEvents=no, but support told me to disable that... FYI. Follow PMRs #31788,999,866 and 67619,999,866 Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 21 March 2012 18:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list > Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
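For anyone comparing notes, the callback shown above would have been registered with mmaddcallback along the lines below; the identifier, command, events and node match the mmlscallback output quoted in the message. The remaining commands are simply the places to look when thresholds refuse to fire: whether enableLowSpaceEvents should be yes or no on a given code level is exactly the open question in this thread, and "fsname" is a placeholder for the file system being tested.

# how the DISKSPACE callback above is registered (on the HSM-capable node only)
mmaddcallback DISKSPACE --command /usr/lpp/mmfs/bin/mmstartpolicy \
    --event lowDiskSpace,noDiskSpace -N tsm01 --parms "%eventName %fsName"

# things to check when lowDiskSpace never arrives
mmlsconfig | grep -i enablelowspaceevents   # only listed if set away from the default
mmlsfs fsname -z                            # is DMAPI enabled on the file system?
mmlscallback                                # what is registered, and on which nodes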
Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Wed Mar 21 22:04:57 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 21 Mar 2012 22:04:57 +0000 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office until Monday 26th March Message-ID: I am out of the office until 26/03/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 3, Issue 6" sent on 21/3/2012 18:31:00. This is the only notification you will receive while this person is away. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:27:56 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:27:56 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <4F684927.1060903@ed.ac.uk> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Here's a quick script to use GNU parallel: We saturated 8 Gb FC with this. #!/bin/sh DATADIR=/mnt/gpfs/srcfldr DESTDIR=/mnt/gpfs/destinationfldr while read LINE; do PROJ=$(echo $LINE | awk '{ print $1; }'); DESTFLDR=$(echo $LINE | awk '{ print $2; }'); echo "$PROJ -> $DEST"; mkdir -p "$DESTDIR/$DESTFLDR"; find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > RESTORE_LOGS/$PROJ.restore.log; done < restore.my.projectlist This assumes restore.nsd01.projectlist contains something such as: ... > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Orlando Richards > Sent: 20 March 2012 09:09 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > We don't do this often, so when we do we tend to do a "noddy" > parallelisation of rsync - manually divvy the folders up and spawn > multiple threads. 
> > We might have a requirement soon(ish) to do this on a much larger scale > though, with virtually no interruption to service - so I'm very keen to > see what how the AFM solution looks, since this should allow us to > present a continual and single view into the filesystem, whilst > migrating it all in the background to the new filesystem, with just a > brief wobble whilst we flip between the old and new views. > > > On 20/03/12 00:47, Jez Tucker wrote: > > In answer to my own question ... > > > > http://www.gnu.org/software/parallel > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > sync > > > > Or > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > One for the bookmarks... hopefully you'll find it useful. > > > > Jez > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > *Sent:* 19 March 2012 23:36 > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > > stgpool to another- how do you do it? > > > > Hello > > > > Just wondering how other people go about copying loads of files in a > > many, many deep directory path from one file system to another. Assume > > filenames are full UNICODE and can contain almost any character. > > > > r to use. > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > > tel: +44 (0)20 7437 8676 > > web: http://www.rushes.co.uk > > The information contained in this e-mail is confidential and may be > > subject to legal privilege. If you are not the intended recipient, you > > must not use, copy, distribute or disclose the e-mail or any part of its > > contents or take any action in reliance on it. If you have received this > > e-mail in error, please e-mail the sender by replying to this message. > > All reasonable precautions have been taken to ensure no viruses are > > present in this e-mail. Rushes Postproduction Limited cannot accept > > responsibility for loss or damage arising from the use of this e-mail or > > attachments and recommend that you subject these to your virus checking > > procedures prior to use. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:35:00 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:35:00 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AEC2@WARVWEXC2.uk.deluxe-eu.com> Apologies should have read: 'This assumes restore.my.projectlist contains'.. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 22 March 2012 09:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Here's a quick script to use GNU parallel: > > We saturated 8 Gb FC with this. > > > #!/bin/sh > > DATADIR=/mnt/gpfs/srcfldr > DESTDIR=/mnt/gpfs/destinationfldr > > while read LINE; do > > PROJ=$(echo $LINE | awk '{ print $1; }'); > DESTFLDR=$(echo $LINE | awk '{ print $2; }'); > > echo "$PROJ -> $DEST"; > mkdir -p "$DESTDIR/$DESTFLDR"; > > find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; > rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > > RESTORE_LOGS/$PROJ.restore.log; > > done < restore.my.projectlist > > > This assumes restore.nsd01.projectlist contains something such as: > > > > ... > > > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Orlando Richards > > Sent: 20 March 2012 09:09 > > To: gpfsug-discuss at gpfsug.org > > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from > > one stgpool to another- how do you do it? > > > > We don't do this often, so when we do we tend to do a "noddy" > > parallelisation of rsync - manually divvy the folders up and spawn > > multiple threads. > > > > We might have a requirement soon(ish) to do this on a much larger > > scale though, with virtually no interruption to service - so I'm very > > keen to see what how the AFM solution looks, since this should allow > > us to present a continual and single view into the filesystem, whilst > > migrating it all in the background to the new filesystem, with just a > > brief wobble whilst we flip between the old and new views. > > > > > > On 20/03/12 00:47, Jez Tucker wrote: > > > In answer to my own question ... > > > > > > http://www.gnu.org/software/parallel > > > > > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > > sync > > > > > > Or > > > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > > > One for the bookmarks... hopefully you'll find it useful. 
> > > > > > Jez > > > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > > *Sent:* 19 March 2012 23:36 > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from > > > one stgpool to another- how do you do it? > > > > > > Hello > > > > > > Just wondering how other people go about copying loads of files in a > > > many, many deep directory path from one file system to another. > > > Assume filenames are full UNICODE and can contain almost any > character. > > > > > > r to use. > > > > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D > 4UH > > > tel: +44 (0)20 7437 8676 > > > web: http://www.rushes.co.uk > > > The information contained in this e-mail is confidential and may be > > > subject to legal privilege. If you are not the intended recipient, > > > you must not use, copy, distribute or disclose the e-mail or any > > > part of its contents or take any action in reliance on it. If you > > > have received this e-mail in error, please e-mail the sender by replying > to this message. > > > All reasonable precautions have been taken to ensure no viruses are > > > present in this e-mail. Rushes Postproduction Limited cannot accept > > > responsibility for loss or damage arising from the use of this > > > e-mail or attachments and recommend that you subject these to your > > > virus checking procedures prior to use. > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at gpfsug.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > -- > > -- > > Dr Orlando Richards > > Information Services > > IT Infrastructure Division > > Unix Section > > Tel: 0131 650 4994 > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject > to legal privilege. If you are not the intended recipient, you must not use, > copy, distribute or disclose the e-mail or any part of its contents or take any > action in reliance on it. If you have received this e-mail in error, please e- > mail the sender by replying to this message. All reasonable precautions > have been taken to ensure no viruses are present in this e-mail. Rushes > Postproduction Limited cannot accept responsibility for loss or damage > arising from the use of this e-mail or attachments and recommend that you > subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. 
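One caveat with newline-delimited file lists: the original question stressed filenames that can contain almost any character, and names with embedded newlines will break a plain find | parallel pipeline. A null-delimited variant of the same idea is sketched below; the paths and the job count are the illustrative ones used earlier in the thread, not recommendations.

#!/bin/sh
# first pass: parallel copy with null-delimited paths so unusual filenames survive
SRC=/mnt/gpfs/srcfldr
DEST=/mnt/gpfs/destinationfldr

cd "$SRC" && find . -type f -print0 | \
    parallel -0 -j 8 cp --parents -pu {} "$DEST/"

# second pass: rsync sweeps up symlinks, permissions and anything missed above
rsync -av "$SRC/" "$DEST/" > /tmp/copy-rsync.log 2>&1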
If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From j.buzzard at dundee.ac.uk Thu Mar 22 12:04:50 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:04:50 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1562.9020403@dundee.ac.uk> On 03/21/2012 06:28 PM, Jez Tucker wrote: > Yup. > > We?re running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent > 5.4 x64 > > At the moment, we?re running migration policies ?auto-manually? via a > script which checks if it needs to be run as the THRESHOLDs are not working. > > We?ve noticed the behaviour/stability of thresholds change each release > from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were > told DMAPI for Windows was available [but undocumented], alas not. > > I had a previous PMR with support who told me to set: > > enableLowSpaceEvents=no > > -z=yes on our filesystem(s) > > Our tsm server has the correct callback setup: > > [root at tsm01 ~]# mmlscallback > > DISKSPACE > > command = /usr/lpp/mmfs/bin/mmstartpolicy > > event = lowDiskSpace,noDiskSpace > > node = tsm01.rushesfx.co.uk > > parms = %eventName %fsName > > N.B. I set the node just to be tsm01 as other nodes do not have HSM > installed, hence if the callback occurred on those nodes, they?d run > mmstartpolicy which would run dsmmigrate which is not installed on those > nodes. Note that you can have more than one node with the hsm client installed. Gives some redundancy should a node fail. Apart from that your current setup is a *REALLY* bad idea. As I understand it when you hit the lowDiskSpace event every two minutes it will call the mmstartpolicy command. That's fine if your policy can run inside two minutes and cause the usage to fall below the threshold. As that is extremely unlikely you need to write a script with locking to prevent that happening, otherwise you will have multiple instances of the policy running all at once and bringing everything to it's knees. I would add that the GPFS documentation surrounding this is *very* poor, and complete with the utter failure in the release notes to mention the change of behaviour between 3.2 and 3.3 this whole area needs to be approached with caution as clearly IBM are happy to break things with out telling us. That said I run with the following on 3.4.0-6 DISKSPACE command = /usr/local/bin/run_ilm_cycle event = lowDiskSpace node = nsdnodes parms = %eventName %fsName And the run_ilm_cycle works just fine, and is included inline below. It is installed on all NSD nodes. This is not strict HSM as it is pushing from my fast to slow disk. However as my nearline pool is not full, I have not yet applied HSM to that pool. 
In fact, although I have HSM enabled and it works on the file system, it is all turned off: as we are still running 5.5 servers we cannot install the 6.3 client, and without the 6.3 client you cannot turn off dsmscoutd, which just tanks our file system when it starts. Note: anyone still reading, I urge you to read http://www-01.ibm.com/support/docview.wss?uid=swg1IC73091 and upgrade your TSM client if necessary. JAB.

#!/bin/bash
#
# Wrapper script to run an mmapplypolicy on a GPFS file system when a callback
# is triggered. Specifically it is intended to be triggered by a lowDiskSpace
# event registered with a call back like the following.
#
# mmaddcallback DISKSPACE --command /usr/local/bin/run_ilm_cycle --event
# lowDiskSpace -N nsdnodes --parms "%eventname %fsName"
#
# The script includes cluster wide quiescence locking so that it plays nicely
# with other automated scripts that need GPFS quiescence to run.
#

EVENT_NAME=$1
FS=$2

# determine the mount point for the file system
MOUNT_POINT=`/usr/lpp/mmfs/bin/mmlsfs ${FS} |grep "\-T" |awk '{print $2}'`
HOSTNAME=`/bin/hostname -s`

# lock file
LOCKDIR="${MOUNT_POINT}/ctdb/quiescence.lock"

# exit codes and text for them
ENO_SUCCESS=0;  ETXT[0]="ENO_SUCCESS"
ENO_GENERAL=1;  ETXT[1]="ENO_GENERAL"
ENO_LOCKFAIL=2; ETXT[2]="ENO_LOCKFAIL"
ENO_RECVSIG=3;  ETXT[3]="ENO_RECVSIG"

#
# Attempt to get a lock
#
trap 'ECODE=$?; echo "[${PROG}] Exit: ${ETXT[ECODE]}($ECODE)" >&2' 0
echo -n "[${PROG}] Locking: " >&2

if mkdir "${LOCKDIR}" &>/dev/null; then
    # lock succeeded, install signal handlers
    trap 'ECODE=$?; echo "[${PROG}] Removing lock. Exit: ${ETXT[ECODE]}($ECODE)" >&2
          rm -rf "${LOCKDIR}"' 0

    # the following handler will exit the script on receiving these signals
    # the trap on "0" (EXIT) from above will be triggered by this script's
    # "exit" command!
    trap 'echo "[${PROG}] Killed by a signal." >&2
          exit ${ENO_RECVSIG}' 1 2 3 15

    echo "success, installed signal handlers"
else
    # exit, we're locked!
    echo "lock failed other operation running" >&2
    exit ${ENO_LOCKFAIL}
fi

# note what we are doing and where we are doing it
/bin/touch $LOCKDIR/${EVENT_NAME}.${HOSTNAME}

# apply the policy
echo "running mmapplypolicy for the file system: ${FS}"
/usr/lpp/mmfs/bin/mmapplypolicy $FS -N nsdnodes -P $MOUNT_POINT/rules.txt

exit 0;

-- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From j.buzzard at dundee.ac.uk Fri Mar 22 12:41:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Fri, 22 Mar 2012 12:41:34 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1DFE.4020803@dundee.ac.uk> On 03/16/2012 03:21 PM, Jez Tucker wrote: > What's your github user id ? > If you don't have one go here: https://github.com/signup/free > Been having major GPFS woes this week. My id is jabuzzard. JAB. -- Jonathan A.
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu Mar 22 12:52:06 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 12:52:06 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F6B1DFE.4020803@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> <4F6B1DFE.4020803@dundee.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501B04D@WARVWEXC2.uk.deluxe-eu.com> Allo You should now have pull+push access. I've not setup a proper branch structure yet, but I suggest you do a pull and add something along the lines of trunk/master/scripts/ > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 22 March 2012 12:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > On 03/16/2012 03:21 PM, Jez Tucker wrote: > > What's your github user id ? > > If you don't have one go here: https://github.com/signup/free > > > > Been having major GPFS woes this week. My id is jabuzzard. > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences University of Dundee, DD1 > 5EH _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From sfadden at us.ibm.com Thu Mar 22 18:15:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 22 Mar 2012 11:15:05 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: Let me know if this helps http://www.ibm.com/developerworks/wikis/display/hpccentral/Threshold+based+migration+using+callbacks+example It is not specifically TSM but the model is the same. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) 
Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From geraint.north at uk.ibm.com Fri Mar 23 18:15:53 2012 From: geraint.north at uk.ibm.com (Geraint North) Date: Fri, 23 Mar 2012 18:15:53 +0000 Subject: [gpfsug-discuss] AUTO: Geraint North is prepared for DELETION (FREEZE) (returning 29/03/2012) Message-ID: I am out of the office until 29/03/2012. Note: This is an automated response to your message "[gpfsug-discuss] GPFSUG #6 - agenda" sent on 19/3/2012 12:14:23. This is the only notification you will receive while this person is away.
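To round the digest off: the rules.txt that the run_ilm_cycle script earlier in this digest applies is not included in the thread, and the developerWorks page Scott links to covers the same ground. A minimal threshold-driven migration policy of that kind might look like the sketch below; the device name, mount point, pool names, thresholds and weighting expression are all assumptions to adapt, not a copy of anyone's production policy.

# write an illustrative rules.txt for threshold-driven migration
cat > /mnt/gpfs/rules.txt <<'EOF'
/* when the fast pool passes 90% full, push the least recently accessed
   files to 'nearline' until occupancy drops back to 70% */
RULE 'defrag' MIGRATE FROM POOL 'system'
     THRESHOLD(90,70)
     WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
     TO POOL 'nearline'

/* new files land in the fast pool by default */
RULE 'default' SET POOL 'system'
EOF

# the lowDiskSpace callback then applies it via mmstartpolicy/mmapplypolicy, e.g.
/usr/lpp/mmfs/bin/mmapplypolicy gpfs01 -N nsdnodes -P /mnt/gpfs/rules.txt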
Regards, Vic On 6 Mar 2012, at 17:23, Claire Robson wrote: > Dear All, > > Please find attached next Wednesday?s agenda. > > The meeting is taking place in Walton Room A, The Cockcroft Institute, Daresbury Laboratory, Warrington. Directions can be found http://www.cockcroft.ac.uk/pages/location.htm > > If you are planning on driving to the meeting, please email me your vehicle registration number as security have requested the details so that you can park on-site. Please send this to me no later than Tuesday 13th at noon. > > Many thanks and I look forward to seeing you next week for some interesting presentations and discussions. > > Claire Robson > GPFS UG Secretary > > OCF plc > Tel: 0114 257 2200 > Mob: 07508 033896 > Fax: 0114 257 0022 > > OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG > > This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri Mar 16 12:11:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 12:11:35 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> Hello everyone, Firstly, many thanks to those who attended on Wednesday. I hope UG #5 proved helpful to all. Please let me know your feedback, positive critique would be most useful. Things for your bookmarks: A quick nod to Martin Glassborow aka 'Storagebod' for representing the Media Broadcast sector. Blog: http://www.storagebod.com Colin Morey volunteered to advise the committee on behalf of the HPC sector. Many thanks. If any one feels that they could volunteer their ear to the committee now and again for their own sector that would be very helpful [Pharmaceuticals, Aerospace, Oil & Gas, Formula One, etc..) The Git repository for the GPFS UG is here: https://github.com/gpfsug/gpfsug-tools If you want to commit something, drop me a quick email and I'll give you write access. (Mine will be going up soon once I've ironed out a the last [I hope] 3-4 bugs..) If there's any outstanding queries that you feel would be helpful to the group, ping a quick email to the mailing list. I'll collate these and get a response from IBM for you. I had a long chat with Boaz from ScaleIO on the way back home. vSAN looks very, very, interesting. We're going to dip our toes in and use it to underpin our ESX cluster. No brainer. Lastly. I'm afraid I'm now out of IBM Linux fleeces... Cheers Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. 
All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.buzzard at dundee.ac.uk Fri Mar 16 12:55:25 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Fri, 16 Mar 2012 12:55:25 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F63383D.4020000@dundee.ac.uk> Jez Tucker wrote: [SNIP] > > The Git repository for the GPFS UG is here: > > https://github.com/gpfsug/gpfsug-tools > > If you want to commit something, drop me a quick email and I?ll give you > write access. > Write access for my mmdfree command. I also have an mmattrib command, that allows you to set the DOS style file attributes from Linux. That needs a bit of tidying up. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Fri Mar 16 15:21:31 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 15:21:31 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F63383D.4020000@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> What's your github user id ? If you don't have one go here: https://github.com/signup/free > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 16 March 2012 12:55 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > Jez Tucker wrote: > > [SNIP] > > > > > The Git repository for the GPFS UG is here: > > > > https://github.com/gpfsug/gpfsug-tools > > > > If you want to commit something, drop me a quick email and I'll give > > you write access. > > > > Write access for my mmdfree command. > > I also have an mmattrib command, that allows you to set the DOS style file > attributes from Linux. That needs a bit of tidying up. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Mon Mar 19 12:14:23 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:14:23 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda Message-ID: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Mon Mar 19 12:21:33 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 19 Mar 2012 12:21:33 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: > Hello all > > I feel I should ask, is there anything that anybody thinks we should all see at the next UG? > > Tell us and we?ll see if we can sort it out. > > Jez > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 12:31:03 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:31:03 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5018040@WARVWEXC2.uk.deluxe-eu.com> "...next formal meeting will take place in September/October time and will be kindly hosted by AWE in Reading." From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Vic Cornell Sent: 19 March 2012 12:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFSUG #6 - agenda Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 23:36:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 23:36:17 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Message-ID: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. It has me wishing GPFS had a COPY FROM support as well as a MIGRATE FROM function for policies. 
Surely that would be possible...? Ways I can think of are: - Multiple 'scripted intelligent' rsync threads - Creating a policy to generate a file list to pass N batched files to N nodes to exec (again rsync?) - Barry Evans suggested via AFM. Though out file system needs to be upgraded before we could try this. Rsync handles UNICODE names well. tar, though faster for the first pass does not. Any ideas? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Tue Mar 20 00:47:14 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 20 Mar 2012 00:47:14 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> In answer to my own question ... http://www.gnu.org/software/parallel http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync Or http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). One for the bookmarks... hopefully you'll find it useful. Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 19 March 2012 23:36 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. r to use. Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From orlando.richards at ed.ac.uk Tue Mar 20 09:08:55 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 20 Mar 2012 09:08:55 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F684927.1060903@ed.ac.uk> We don't do this often, so when we do we tend to do a "noddy" parallelisation of rsync - manually divvy the folders up and spawn multiple threads. We might have a requirement soon(ish) to do this on a much larger scale though, with virtually no interruption to service - so I'm very keen to see what how the AFM solution looks, since this should allow us to present a continual and single view into the filesystem, whilst migrating it all in the background to the new filesystem, with just a brief wobble whilst we flip between the old and new views. On 20/03/12 00:47, Jez Tucker wrote: > In answer to my own question ? > > http://www.gnu.org/software/parallel > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync > > Or > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > One for the bookmarks? hopefully you?ll find it useful. > > Jez > > *From:*gpfsug-discuss-bounces at gpfsug.org > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > *Sent:* 19 March 2012 23:36 > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Hello > > Just wondering how other people go about copying loads of files in a > many, many deep directory path from one file system to another. Assume > filenames are full UNICODE and can contain almost any character. > > r to use. > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be > subject to legal privilege. If you are not the intended recipient, you > must not use, copy, distribute or disclose the e-mail or any part of its > contents or take any action in reliance on it. If you have received this > e-mail in error, please e-mail the sender by replying to this message. > All reasonable precautions have been taken to ensure no viruses are > present in this e-mail. Rushes Postproduction Limited cannot accept > responsibility for loss or damage arising from the use of this e-mail or > attachments and recommend that you subject these to your virus checking > procedures prior to use. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
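The "noddy" parallelisation of rsync described above - dividing the top-level folders up and spawning one rsync per folder - can be approximated with GNU parallel, which Jez links to earlier in the thread. The sketch below is illustrative only and not a command taken from the list: the mount points /mnt/oldfs and /mnt/newfs and the job count of 8 are assumptions to adapt to your own cluster.

#!/bin/sh
# Illustrative sketch only: one rsync per top-level directory, eight at a time.
# /mnt/oldfs, /mnt/newfs and -j8 are assumed values, not from the posts above.
SRC=/mnt/oldfs
DST=/mnt/newfs
cd "$SRC" || exit 1
find . -mindepth 1 -maxdepth 1 -type d -print0 | \
        parallel -0 -j8 rsync -a "$SRC/{}/" "$DST/{}/"
# Files sitting directly in the root of $SRC are not covered above;
# a final "rsync -a --exclude='*/' $SRC/ $DST/" pass would pick those up.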
From Jez.Tucker at rushes.co.uk Wed Mar 21 16:37:02 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:37:02 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) Message-ID: <39571EA9316BE44899D59C7A640C13F501A9B5@WARVWEXC2.uk.deluxe-eu.com> --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 16:47:26 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:47:26 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Message-ID: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed Mar 21 18:00:01 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 21 Mar 2012 11:00:01 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... 
try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 18:28:34 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 18:28:34 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Yup. We're running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent 5.4 x64 At the moment, we're running migration policies 'auto-manually' via a script which checks if it needs to be run as the THRESHOLDs are not working. We've noticed the behaviour/stability of thresholds change each release from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were told DMAPI for Windows was available [but undocumented], alas not. I had a previous PMR with support who told me to set: enableLowSpaceEvents=no -z=yes on our filesystem(s) Our tsm server has the correct callback setup: [root at tsm01 ~]# mmlscallback DISKSPACE command = /usr/lpp/mmfs/bin/mmstartpolicy event = lowDiskSpace,noDiskSpace node = tsm01.rushesfx.co.uk parms = %eventName %fsName N.B. I set the node just to be tsm01 as other nodes do not have HSM installed, hence if the callback occurred on those nodes, they'd run mmstartpolicy which would run dsmmigrate which is not installed on those nodes. tsm01 is currently setup as a manager-gateway node (very good for archiving up Isilons over NFS...) mmlscluster 3 tsm01.rushesfx.co.uk 10.100.106.50 tsm01.rushesfx.co.uk manager-gateway >From my testing: I can fill a test file system and receive the noDiskSpace callback, but not the lowDiskSpace. This is probably related to the enableLowSpaceEvents=no, but support told me to disable that... FYI. 
Follow PMRs #31788,999,866 and 67619,999,866 Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 21 March 2012 18:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list > Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Wed Mar 21 22:04:57 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 21 Mar 2012 22:04:57 +0000 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office until Monday 26th March Message-ID: I am out of the office until 26/03/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 3, Issue 6" sent on 21/3/2012 18:31:00. 
This is the only notification you will receive while this person is away. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:27:56 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:27:56 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <4F684927.1060903@ed.ac.uk> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Here's a quick script to use GNU parallel: We saturated 8 Gb FC with this. #!/bin/sh DATADIR=/mnt/gpfs/srcfldr DESTDIR=/mnt/gpfs/destinationfldr while read LINE; do PROJ=$(echo $LINE | awk '{ print $1; }'); DESTFLDR=$(echo $LINE | awk '{ print $2; }'); echo "$PROJ -> $DEST"; mkdir -p "$DESTDIR/$DESTFLDR"; find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > RESTORE_LOGS/$PROJ.restore.log; done < restore.my.projectlist This assumes restore.nsd01.projectlist contains something such as: ... > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Orlando Richards > Sent: 20 March 2012 09:09 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > We don't do this often, so when we do we tend to do a "noddy" > parallelisation of rsync - manually divvy the folders up and spawn > multiple threads. > > We might have a requirement soon(ish) to do this on a much larger scale > though, with virtually no interruption to service - so I'm very keen to > see what how the AFM solution looks, since this should allow us to > present a continual and single view into the filesystem, whilst > migrating it all in the background to the new filesystem, with just a > brief wobble whilst we flip between the old and new views. > > > On 20/03/12 00:47, Jez Tucker wrote: > > In answer to my own question ... > > > > http://www.gnu.org/software/parallel > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > sync > > > > Or > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > One for the bookmarks... hopefully you'll find it useful. > > > > Jez > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > *Sent:* 19 March 2012 23:36 > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > > stgpool to another- how do you do it? > > > > Hello > > > > Just wondering how other people go about copying loads of files in a > > many, many deep directory path from one file system to another. Assume > > filenames are full UNICODE and can contain almost any character. > > > > r to use. > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > > tel: +44 (0)20 7437 8676 > > web: http://www.rushes.co.uk > > The information contained in this e-mail is confidential and may be > > subject to legal privilege. If you are not the intended recipient, you > > must not use, copy, distribute or disclose the e-mail or any part of its > > contents or take any action in reliance on it. If you have received this > > e-mail in error, please e-mail the sender by replying to this message. 
> > All reasonable precautions have been taken to ensure no viruses are > > present in this e-mail. Rushes Postproduction Limited cannot accept > > responsibility for loss or damage arising from the use of this e-mail or > > attachments and recommend that you subject these to your virus checking > > procedures prior to use. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:35:00 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:35:00 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AEC2@WARVWEXC2.uk.deluxe-eu.com> Apologies should have read: 'This assumes restore.my.projectlist contains'.. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 22 March 2012 09:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Here's a quick script to use GNU parallel: > > We saturated 8 Gb FC with this. > > > #!/bin/sh > > DATADIR=/mnt/gpfs/srcfldr > DESTDIR=/mnt/gpfs/destinationfldr > > while read LINE; do > > PROJ=$(echo $LINE | awk '{ print $1; }'); > DESTFLDR=$(echo $LINE | awk '{ print $2; }'); > > echo "$PROJ -> $DEST"; > mkdir -p "$DESTDIR/$DESTFLDR"; > > find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; > rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > > RESTORE_LOGS/$PROJ.restore.log; > > done < restore.my.projectlist > > > This assumes restore.nsd01.projectlist contains something such as: > > > > ... 
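The example contents of the project list did not survive in the archived post above (only the "..." placeholder remains). Going by the script itself - awk takes field 1 as the source project directory and field 2 as the destination folder under $DESTDIR - a list of the right shape would look something like the following. The entries are purely hypothetical, not the original data:

# Hypothetical illustration of the two-column list the script reads.
# Field 1 = source project directory, field 2 = destination folder name.
cat > restore.my.projectlist <<'EOF'
projectA  projectA_restore
projectB  projectB_restore
EOF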
> > > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Orlando Richards > > Sent: 20 March 2012 09:09 > > To: gpfsug-discuss at gpfsug.org > > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from > > one stgpool to another- how do you do it? > > > > We don't do this often, so when we do we tend to do a "noddy" > > parallelisation of rsync - manually divvy the folders up and spawn > > multiple threads. > > > > We might have a requirement soon(ish) to do this on a much larger > > scale though, with virtually no interruption to service - so I'm very > > keen to see what how the AFM solution looks, since this should allow > > us to present a continual and single view into the filesystem, whilst > > migrating it all in the background to the new filesystem, with just a > > brief wobble whilst we flip between the old and new views. > > > > > > On 20/03/12 00:47, Jez Tucker wrote: > > > In answer to my own question ... > > > > > > http://www.gnu.org/software/parallel > > > > > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > > sync > > > > > > Or > > > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > > > One for the bookmarks... hopefully you'll find it useful. > > > > > > Jez > > > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > > *Sent:* 19 March 2012 23:36 > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from > > > one stgpool to another- how do you do it? > > > > > > Hello > > > > > > Just wondering how other people go about copying loads of files in a > > > many, many deep directory path from one file system to another. > > > Assume filenames are full UNICODE and can contain almost any > character. > > > > > > r to use. > > > > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D > 4UH > > > tel: +44 (0)20 7437 8676 > > > web: http://www.rushes.co.uk > > > The information contained in this e-mail is confidential and may be > > > subject to legal privilege. If you are not the intended recipient, > > > you must not use, copy, distribute or disclose the e-mail or any > > > part of its contents or take any action in reliance on it. If you > > > have received this e-mail in error, please e-mail the sender by replying > to this message. > > > All reasonable precautions have been taken to ensure no viruses are > > > present in this e-mail. Rushes Postproduction Limited cannot accept > > > responsibility for loss or damage arising from the use of this > > > e-mail or attachments and recommend that you subject these to your > > > virus checking procedures prior to use. > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at gpfsug.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > -- > > -- > > Dr Orlando Richards > > Information Services > > IT Infrastructure Division > > Unix Section > > Tel: 0131 650 4994 > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. 
> > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject > to legal privilege. If you are not the intended recipient, you must not use, > copy, distribute or disclose the e-mail or any part of its contents or take any > action in reliance on it. If you have received this e-mail in error, please e- > mail the sender by replying to this message. All reasonable precautions > have been taken to ensure no viruses are present in this e-mail. Rushes > Postproduction Limited cannot accept responsibility for loss or damage > arising from the use of this e-mail or attachments and recommend that you > subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From j.buzzard at dundee.ac.uk Thu Mar 22 12:04:50 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:04:50 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1562.9020403@dundee.ac.uk> On 03/21/2012 06:28 PM, Jez Tucker wrote: > Yup. > > We?re running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent > 5.4 x64 > > At the moment, we?re running migration policies ?auto-manually? via a > script which checks if it needs to be run as the THRESHOLDs are not working. > > We?ve noticed the behaviour/stability of thresholds change each release > from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were > told DMAPI for Windows was available [but undocumented], alas not. > > I had a previous PMR with support who told me to set: > > enableLowSpaceEvents=no > > -z=yes on our filesystem(s) > > Our tsm server has the correct callback setup: > > [root at tsm01 ~]# mmlscallback > > DISKSPACE > > command = /usr/lpp/mmfs/bin/mmstartpolicy > > event = lowDiskSpace,noDiskSpace > > node = tsm01.rushesfx.co.uk > > parms = %eventName %fsName > > N.B. I set the node just to be tsm01 as other nodes do not have HSM > installed, hence if the callback occurred on those nodes, they?d run > mmstartpolicy which would run dsmmigrate which is not installed on those > nodes. 
Note that you can have more than one node with the hsm client installed. Gives some redundancy should a node fail. Apart from that your current setup is a *REALLY* bad idea. As I understand it when you hit the lowDiskSpace event every two minutes it will call the mmstartpolicy command. That's fine if your policy can run inside two minutes and cause the usage to fall below the threshold. As that is extremely unlikely you need to write a script with locking to prevent that happening, otherwise you will have multiple instances of the policy running all at once and bringing everything to it's knees. I would add that the GPFS documentation surrounding this is *very* poor, and complete with the utter failure in the release notes to mention the change of behaviour between 3.2 and 3.3 this whole area needs to be approached with caution as clearly IBM are happy to break things with out telling us. That said I run with the following on 3.4.0-6 DISKSPACE command = /usr/local/bin/run_ilm_cycle event = lowDiskSpace node = nsdnodes parms = %eventName %fsName And the run_ilm_cycle works just fine, and is included inline below. It is installed on all NSD nodes. This is not strict HSM as it is pushing from my fast to slow disk. However as my nearline pool is not full, I have not yet applied HSM to that pool. In fact although I have HSM enabled and it works on the file system it is all turned off as we are still running with 5.5 servers we cannot install the 6.3 client, and without the 6.3 client you cannot turn of dsmscoutd and that just tanks our file system when it starts. Note anyone still reading I urge you to read http://www-01.ibm.com/support/docview.wss?uid=swg1IC73091 and upgrade your TSM client if necessary. JAB. #!/bin/bash # # Wrapper script to run an mmapplypolicy on a GPFS file system when a callback # is triggered. Specifically it is intended to be triggered by a lowDiskSpace # event registered with a call back like the following. # # mmaddcallback DISKSPACE --command /usr/local/bin/run_ilm_cycle --event # lowDiskSpace -N nsdnodes --parms "%eventname %fsName" # # The script includes cluster wide quiescence locking so that it plays nicely # with other automated scripts that need GPFS quiescence to run. # EVENT_NAME=$1 FS=$2 # determine the mount point for the file system MOUNT_POINT=`/usr/lpp/mmfs/bin/mmlsfs ${FS} |grep "\-T" |awk '{print $2}'` HOSTNAME=`/bin/hostname -s` # lock file LOCKDIR="${MOUNT_POINT}/ctdb/quiescence.lock" # exit codes and text for them ENO_SUCCESS=0; ETXT[0]="ENO_SUCCESS" ENO_GENERAL=1; ETXT[1]="ENO_GENERAL" ENO_LOCKFAIL=2; ETXT[2]="ENO_LOCKFAIL" ENO_RECVSIG=3; ETXT[3]="ENO_RECVSIG" # # Attempt to get a lock # trap 'ECODE=$?; echo "[${PROG}] Exit: ${ETXT[ECODE]}($ECODE)" >&2' 0 echo -n "[${PROG}] Locking: " >&2 if mkdir "${LOCKDIR}" &>/dev/null; then # lock succeeded, install signal handlers trap 'ECODE=$?; echo "[${PROG}] Removing lock. Exit: ${ETXT[ECODE]}($ECODE)" >&2 rm -rf "${LOCKDIR}"' 0 # the following handler will exit the script on receiving these signals # the trap on "0" (EXIT) from above will be triggered by this scripts # "exit" command! trap 'echo "[${PROG}] Killed by a signal." >&2 exit ${ENO_RECVSIG}' 1 2 3 15 echo "success, installed signal handlers" else # exit, we're locked! 
echo "lock failed other operation running" >&2 exit ${ENO_LOCKFAIL} fi # note what we are doing and where we are doing it /bin/touch $LOCKDIR/${EVENT_NAME}.${HOSTNAME} # apply the policy echo "running mmapplypolicy for the file system: ${FS}" /usr/lpp/mmfs/bin/mmapplypolicy $FS -N nsdnodes -P $MOUNT_POINT/rules.txt exit 0; -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From j.buzzard at dundee.ac.uk Thu Mar 22 12:41:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:41:34 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1DFE.4020803@dundee.ac.uk> On 03/16/2012 03:21 PM, Jez Tucker wrote: > What's your github user id ? > If you don't have one go here: https://github.com/signup/free > Been having major GPFS woes this week. My id is jabuzzard. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu Mar 22 12:52:06 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 12:52:06 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F6B1DFE.4020803@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> <4F6B1DFE.4020803@dundee.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501B04D@WARVWEXC2.uk.deluxe-eu.com> Allo You should now have pull+push access. I've not setup a proper branch structure yet, but I suggest you do a pull and add something along the lines of trunk/master/scripts/ > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 22 March 2012 12:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > On 03/16/2012 03:21 PM, Jez Tucker wrote: > > What's your github user id ? > > If you don't have one go here: https://github.com/signup/free > > > > Been having major GPFS woes this week. My id is jabuzzard. > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences University of Dundee, DD1 > 5EH _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. 
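The run_ilm_cycle wrapper a couple of messages above hands the real work off to a policy file ($MOUNT_POINT/rules.txt) that is not reproduced in the post. As a rough sketch of the kind of rule such a file might hold for the fast-to-slow-disk push described there - the pool names, the 90/70 thresholds and the path are assumptions, not taken from that setup:

# Hypothetical rules.txt for "mmapplypolicy ... -P rules.txt". Pool names
# 'fast' and 'nearline' and the 90/70 thresholds are assumed, not from the post.
# When the 'fast' pool passes 90% occupancy, migrate the least recently
# accessed files to 'nearline' until occupancy drops back to 70%.
cat > /gpfs/fs0/rules.txt <<'EOF'
RULE 'to_nearline'
  MIGRATE FROM POOL 'fast'
  THRESHOLD(90,70)
  WEIGHT(CURRENT_TIMESTAMP - ACCESS_TIME)
  TO POOL 'nearline'
EOF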
From sfadden at us.ibm.com Thu Mar 22 18:15:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 22 Mar 2012 11:15:05 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: Let me know if this helps http://www.ibm.com/developerworks/wikis/display/hpccentral/Threshold+based+migration+using+callbacks+example It is not specifically TSM but the model is the same. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From geraint.north at uk.ibm.com Fri Mar 23 18:15:53 2012 From: geraint.north at uk.ibm.com (Geraint North) Date: Fri, 23 Mar 2012 18:15:53 +0000 Subject: [gpfsug-discuss] AUTO: Geraint North is prepared for DELETION (FREEZE) (returning 29/03/2012) Message-ID: I am out of the office until 29/03/2012. Note: This is an automated response to your message "[gpfsug-discuss] GPFSUG #6 - agenda" sent on 19/3/2012 12:14:23. This is the only notification you will receive while this person is away.
Claire Robson GPFS UG Secretary OCF plc Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Agenda GPFS User Group Meeting March 2012.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 74732 bytes Desc: Agenda GPFS User Group Meeting March 2012.docx URL: From viccornell at gmail.com Thu Mar 8 16:44:22 2012 From: viccornell at gmail.com (Vic Cornell) Date: Thu, 8 Mar 2012 16:44:22 +0000 Subject: [gpfsug-discuss] GPFS UG Agenda In-Reply-To: References: Message-ID: <2F14515A-B6FE-4A9D-913A-BD4B411521BC@gmail.com> Hi Claire, I will be attending the group. I will be flying up so I don't have a car reg for you. Regards, Vic On 6 Mar 2012, at 17:23, Claire Robson wrote: > Dear All, > > Please find attached next Wednesday?s agenda. > > The meeting is taking place in Walton Room A, The Cockcroft Institute, Daresbury Laboratory, Warrington. Directions can be found http://www.cockcroft.ac.uk/pages/location.htm > > If you are planning on driving to the meeting, please email me your vehicle registration number as security have requested the details so that you can park on-site. Please send this to me no later than Tuesday 13th at noon. > > Many thanks and I look forward to seeing you next week for some interesting presentations and discussions. > > Claire Robson > GPFS UG Secretary > > OCF plc > Tel: 0114 257 2200 > Mob: 07508 033896 > Fax: 0114 257 0022 > > OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG > > This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Fri Mar 16 12:11:35 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 12:11:35 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> Hello everyone, Firstly, many thanks to those who attended on Wednesday. I hope UG #5 proved helpful to all. Please let me know your feedback, positive critique would be most useful. Things for your bookmarks: A quick nod to Martin Glassborow aka 'Storagebod' for representing the Media Broadcast sector. Blog: http://www.storagebod.com Colin Morey volunteered to advise the committee on behalf of the HPC sector. Many thanks. If any one feels that they could volunteer their ear to the committee now and again for their own sector that would be very helpful [Pharmaceuticals, Aerospace, Oil & Gas, Formula One, etc..) 
From Jez.Tucker at rushes.co.uk Fri Mar 16 15:21:31 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 15:21:31 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F63383D.4020000@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> What's your github user id ? If you don't have one go here: https://github.com/signup/free > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 16 March 2012 12:55 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > Jez Tucker wrote: > > [SNIP] > > > > > The Git repository for the GPFS UG is here: > > > > https://github.com/gpfsug/gpfsug-tools > > > > If you want to commit something, drop me a quick email and I'll give > > you write access.
> > > > Write access for my mmdfree command. > > I also have an mmattrib command, that allows you to set the DOS style file > attributes from Linux. That needs a bit of tidying up. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Mon Mar 19 12:14:23 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:14:23 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda Message-ID: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Mon Mar 19 12:21:33 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 19 Mar 2012 12:21:33 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: > Hello all > > I feel I should ask, is there anything that anybody thinks we should all see at the next UG? > > Tell us and we?ll see if we can sort it out. 
> > Jez > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 12:31:03 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:31:03 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5018040@WARVWEXC2.uk.deluxe-eu.com> "...next formal meeting will take place in September/October time and will be kindly hosted by AWE in Reading." From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Vic Cornell Sent: 19 March 2012 12:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFSUG #6 - agenda Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. 
If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 23:36:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 23:36:17 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Message-ID: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. It has me wishing GPFS had a COPY FROM support as well as a MIGRATE FROM function for policies. Surely that would be possible...? Ways I can think of are: - Multiple 'scripted intelligent' rsync threads - Creating a policy to generate a file list to pass N batched files to N nodes to exec (again rsync?) - Barry Evans suggested via AFM. Though out file system needs to be upgraded before we could try this. Rsync handles UNICODE names well. tar, though faster for the first pass does not. Any ideas? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Tue Mar 20 00:47:14 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 20 Mar 2012 00:47:14 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> In answer to my own question ... http://www.gnu.org/software/parallel http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync Or http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). One for the bookmarks... hopefully you'll find it useful. 
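For anyone who wants to try it, a rough sketch of the idea -- untested as written here, and the paths and job count are made up purely for illustration:

# Hypothetical example: one rsync per top-level project directory, with GNU
# parallel keeping 8 copies in flight. -print0/-0 keeps awkward UNICODE
# filenames intact.
SRC=/mnt/gpfs01/projects         # made-up source path
DEST=/mnt/gpfs02/projects        # made-up destination path

find "$SRC" -mindepth 1 -maxdepth 1 -type d -print0 | \
    parallel -0 -j 8 rsync -a "{}" "$DEST/"

Obviously how well this balances depends on how evenly the data is spread across the top-level directories.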
Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 19 March 2012 23:36 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. r to use. Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Tue Mar 20 09:08:55 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 20 Mar 2012 09:08:55 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F684927.1060903@ed.ac.uk> We don't do this often, so when we do we tend to do a "noddy" parallelisation of rsync - manually divvy the folders up and spawn multiple threads. We might have a requirement soon(ish) to do this on a much larger scale though, with virtually no interruption to service - so I'm very keen to see what how the AFM solution looks, since this should allow us to present a continual and single view into the filesystem, whilst migrating it all in the background to the new filesystem, with just a brief wobble whilst we flip between the old and new views. On 20/03/12 00:47, Jez Tucker wrote: > In answer to my own question ? > > http://www.gnu.org/software/parallel > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync > > Or > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > One for the bookmarks? hopefully you?ll find it useful. > > Jez > > *From:*gpfsug-discuss-bounces at gpfsug.org > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > *Sent:* 19 March 2012 23:36 > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Hello > > Just wondering how other people go about copying loads of files in a > many, many deep directory path from one file system to another. Assume > filenames are full UNICODE and can contain almost any character. > > r to use. 
> > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be > subject to legal privilege. If you are not the intended recipient, you > must not use, copy, distribute or disclose the e-mail or any part of its > contents or take any action in reliance on it. If you have received this > e-mail in error, please e-mail the sender by replying to this message. > All reasonable precautions have been taken to ensure no viruses are > present in this e-mail. Rushes Postproduction Limited cannot accept > responsibility for loss or damage arising from the use of this e-mail or > attachments and recommend that you subject these to your virus checking > procedures prior to use. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From Jez.Tucker at rushes.co.uk Wed Mar 21 16:37:02 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:37:02 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) Message-ID: <39571EA9316BE44899D59C7A640C13F501A9B5@WARVWEXC2.uk.deluxe-eu.com> --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 16:47:26 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:47:26 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Message-ID: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. 
All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed Mar 21 18:00:01 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 21 Mar 2012 11:00:01 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 18:28:34 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 18:28:34 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Yup. We're running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent 5.4 x64 At the moment, we're running migration policies 'auto-manually' via a script which checks if it needs to be run as the THRESHOLDs are not working. We've noticed the behaviour/stability of thresholds change each release from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were told DMAPI for Windows was available [but undocumented], alas not. 
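(For reference, the 'auto-manual' check is nothing clever -- roughly along the lines of the sketch below, run regularly; the filesystem name, threshold and policy file are made up for illustration and the real thing has a bit more error handling:)

FS=gpfs01                                   # hypothetical device name
MOUNTPOINT=/mnt/gpfs01                      # hypothetical mount point
THRESHOLD=80                                # made-up occupancy percentage
POLICY=/usr/local/etc/migrate_policy.txt    # made-up policy file

# how full is the filesystem right now?
USED=$(df -P "$MOUNTPOINT" | awk 'NR==2 {print $5}' | tr -d '%')

# only kick off a migration run if we are over the threshold
if [ "$USED" -ge "$THRESHOLD" ]; then
    /usr/lpp/mmfs/bin/mmapplypolicy "$FS" -P "$POLICY"
fi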
I had a previous PMR with support who told me to set: enableLowSpaceEvents=no -z=yes on our filesystem(s) Our tsm server has the correct callback setup: [root at tsm01 ~]# mmlscallback DISKSPACE command = /usr/lpp/mmfs/bin/mmstartpolicy event = lowDiskSpace,noDiskSpace node = tsm01.rushesfx.co.uk parms = %eventName %fsName N.B. I set the node just to be tsm01 as other nodes do not have HSM installed, hence if the callback occurred on those nodes, they'd run mmstartpolicy which would run dsmmigrate which is not installed on those nodes. tsm01 is currently setup as a manager-gateway node (very good for archiving up Isilons over NFS...) mmlscluster 3 tsm01.rushesfx.co.uk 10.100.106.50 tsm01.rushesfx.co.uk manager-gateway >From my testing: I can fill a test file system and receive the noDiskSpace callback, but not the lowDiskSpace. This is probably related to the enableLowSpaceEvents=no, but support told me to disable that... FYI. Follow PMRs #31788,999,866 and 67619,999,866 Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 21 March 2012 18:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list > Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. 
All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Wed Mar 21 22:04:57 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 21 Mar 2012 22:04:57 +0000 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office until Monday 26th March Message-ID: I am out of the office until 26/03/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 3, Issue 6" sent on 21/3/2012 18:31:00. This is the only notification you will receive while this person is away. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:27:56 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:27:56 +0000 Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <4F684927.1060903@ed.ac.uk> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Here's a quick script to use GNU parallel: We saturated 8 Gb FC with this.

#!/bin/sh

DATADIR=/mnt/gpfs/srcfldr
DESTDIR=/mnt/gpfs/destinationfldr

while read LINE; do
        # each line of the project list is: <source project dir> <destination folder>
        PROJ=$(echo $LINE | awk '{ print $1; }');
        DESTFLDR=$(echo $LINE | awk '{ print $2; }');

        echo "$PROJ -> $DESTFLDR";
        mkdir -p "$DESTDIR/$DESTFLDR";

        find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/";
        rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > RESTORE_LOGS/$PROJ.restore.log;

done < restore.my.projectlist

This assumes restore.nsd01.projectlist contains something such as: ... > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Orlando Richards > Sent: 20 March 2012 09:09 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > We don't do this often, so when we do we tend to do a "noddy" > parallelisation of rsync - manually divvy the folders up and spawn > multiple threads. > > We might have a requirement soon(ish) to do this on a much larger scale > though, with virtually no interruption to service - so I'm very keen to > see what how the AFM solution looks, since this should allow us to > present a continual and single view into the filesystem, whilst > migrating it all in the background to the new filesystem, with just a > brief wobble whilst we flip between the old and new views. > > > On 20/03/12 00:47, Jez Tucker wrote: > > In answer to my own question ... > > > > http://www.gnu.org/software/parallel > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > sync > > > > Or > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > One for the bookmarks... hopefully you'll find it useful.
> > > > Jez > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > *Sent:* 19 March 2012 23:36 > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > > stgpool to another- how do you do it? > > > > Hello > > > > Just wondering how other people go about copying loads of files in a > > many, many deep directory path from one file system to another. Assume > > filenames are full UNICODE and can contain almost any character. > > > > r to use. > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > > tel: +44 (0)20 7437 8676 > > web: http://www.rushes.co.uk > > The information contained in this e-mail is confidential and may be > > subject to legal privilege. If you are not the intended recipient, you > > must not use, copy, distribute or disclose the e-mail or any part of its > > contents or take any action in reliance on it. If you have received this > > e-mail in error, please e-mail the sender by replying to this message. > > All reasonable precautions have been taken to ensure no viruses are > > present in this e-mail. Rushes Postproduction Limited cannot accept > > responsibility for loss or damage arising from the use of this e-mail or > > attachments and recommend that you subject these to your virus checking > > procedures prior to use. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:35:00 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:35:00 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AEC2@WARVWEXC2.uk.deluxe-eu.com> Apologies should have read: 'This assumes restore.my.projectlist contains'.. 
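(While I'm here: the policy-driven variant I floated in the original post would look roughly like the sketch below -- untested, and the list name, batch script and policy file are invented. The idea is that mmapplypolicy builds the file lists and fans the batches out across the nodes rather than doing it by hand.)

# write a throwaway policy file (all names here are hypothetical)
cat > /tmp/copy_batches.pol <<'EOF'
/* hand batches of matching files to an external script */
RULE 'ext'  EXTERNAL LIST 'tocopy' EXEC '/usr/local/bin/copy_batch.sh'
RULE 'pick' LIST 'tocopy' WHERE PATH_NAME LIKE '/mnt/gpfs/srcfldr/%'
EOF

# -B caps the number of files handed to each copy_batch.sh invocation,
# -N/-m spread the work over nodes and threads (values made up)
/usr/lpp/mmfs/bin/mmapplypolicy /mnt/gpfs -P /tmp/copy_batches.pol \
    -N nsdnodes -B 1000 -m 2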
> -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 22 March 2012 09:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Here's a quick script to use GNU parallel: > > We saturated 8 Gb FC with this. > > > #!/bin/sh > > DATADIR=/mnt/gpfs/srcfldr > DESTDIR=/mnt/gpfs/destinationfldr > > while read LINE; do > > PROJ=$(echo $LINE | awk '{ print $1; }'); > DESTFLDR=$(echo $LINE | awk '{ print $2; }'); > > echo "$PROJ -> $DEST"; > mkdir -p "$DESTDIR/$DESTFLDR"; > > find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; > rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > > RESTORE_LOGS/$PROJ.restore.log; > > done < restore.my.projectlist > > > This assumes restore.nsd01.projectlist contains something such as: > > > > ... > > > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Orlando Richards > > Sent: 20 March 2012 09:09 > > To: gpfsug-discuss at gpfsug.org > > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from > > one stgpool to another- how do you do it? > > > > We don't do this often, so when we do we tend to do a "noddy" > > parallelisation of rsync - manually divvy the folders up and spawn > > multiple threads. > > > > We might have a requirement soon(ish) to do this on a much larger > > scale though, with virtually no interruption to service - so I'm very > > keen to see what how the AFM solution looks, since this should allow > > us to present a continual and single view into the filesystem, whilst > > migrating it all in the background to the new filesystem, with just a > > brief wobble whilst we flip between the old and new views. > > > > > > On 20/03/12 00:47, Jez Tucker wrote: > > > In answer to my own question ... > > > > > > http://www.gnu.org/software/parallel > > > > > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > > sync > > > > > > Or > > > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > > > One for the bookmarks... hopefully you'll find it useful. > > > > > > Jez > > > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > > *Sent:* 19 March 2012 23:36 > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from > > > one stgpool to another- how do you do it? > > > > > > Hello > > > > > > Just wondering how other people go about copying loads of files in a > > > many, many deep directory path from one file system to another. > > > Assume filenames are full UNICODE and can contain almost any > character. > > > > > > r to use. > > > > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D > 4UH > > > tel: +44 (0)20 7437 8676 > > > web: http://www.rushes.co.uk > > > The information contained in this e-mail is confidential and may be > > > subject to legal privilege. If you are not the intended recipient, > > > you must not use, copy, distribute or disclose the e-mail or any > > > part of its contents or take any action in reliance on it. If you > > > have received this e-mail in error, please e-mail the sender by replying > to this message. 
> > > All reasonable precautions have been taken to ensure no viruses are > > > present in this e-mail. Rushes Postproduction Limited cannot accept > > > responsibility for loss or damage arising from the use of this > > > e-mail or attachments and recommend that you subject these to your > > > virus checking procedures prior to use. > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at gpfsug.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > -- > > -- > > Dr Orlando Richards > > Information Services > > IT Infrastructure Division > > Unix Section > > Tel: 0131 650 4994 > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject > to legal privilege. If you are not the intended recipient, you must not use, > copy, distribute or disclose the e-mail or any part of its contents or take any > action in reliance on it. If you have received this e-mail in error, please e- > mail the sender by replying to this message. All reasonable precautions > have been taken to ensure no viruses are present in this e-mail. Rushes > Postproduction Limited cannot accept responsibility for loss or damage > arising from the use of this e-mail or attachments and recommend that you > subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From j.buzzard at dundee.ac.uk Thu Mar 22 12:04:50 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:04:50 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1562.9020403@dundee.ac.uk> On 03/21/2012 06:28 PM, Jez Tucker wrote: > Yup. > > We?re running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent > 5.4 x64 > > At the moment, we?re running migration policies ?auto-manually? via a > script which checks if it needs to be run as the THRESHOLDs are not working. 
> > We've noticed the behaviour/stability of thresholds change each release > from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were > told DMAPI for Windows was available [but undocumented], alas not. > > I had a previous PMR with support who told me to set: > > enableLowSpaceEvents=no > > -z=yes on our filesystem(s) > > Our tsm server has the correct callback setup: > > [root at tsm01 ~]# mmlscallback > > DISKSPACE > > command = /usr/lpp/mmfs/bin/mmstartpolicy > > event = lowDiskSpace,noDiskSpace > > node = tsm01.rushesfx.co.uk > > parms = %eventName %fsName > > N.B. I set the node just to be tsm01 as other nodes do not have HSM > installed, hence if the callback occurred on those nodes, they'd run > mmstartpolicy which would run dsmmigrate which is not installed on those > nodes.

Note that you can have more than one node with the hsm client installed. Gives some redundancy should a node fail.

Apart from that your current setup is a *REALLY* bad idea. As I understand it when you hit the lowDiskSpace event every two minutes it will call the mmstartpolicy command. That's fine if your policy can run inside two minutes and cause the usage to fall below the threshold. As that is extremely unlikely you need to write a script with locking to prevent that happening, otherwise you will have multiple instances of the policy running all at once and bringing everything to its knees.

I would add that the GPFS documentation surrounding this is *very* poor, and coupled with the utter failure in the release notes to mention the change of behaviour between 3.2 and 3.3, this whole area needs to be approached with caution, as clearly IBM are happy to break things without telling us.

That said, I run with the following on 3.4.0-6:

DISKSPACE
    command = /usr/local/bin/run_ilm_cycle
    event = lowDiskSpace
    node = nsdnodes
    parms = %eventName %fsName

And the run_ilm_cycle works just fine, and is included inline below. It is installed on all NSD nodes. This is not strict HSM as it is pushing from my fast to slow disk. However as my nearline pool is not full, I have not yet applied HSM to that pool.

In fact, although I have HSM enabled and it works on the file system, it is all turned off: as we are still running with 5.5 servers we cannot install the 6.3 client, and without the 6.3 client you cannot turn off dsmscoutd, and that just tanks our file system when it starts. Note, for anyone still reading: I urge you to read http://www-01.ibm.com/support/docview.wss?uid=swg1IC73091 and upgrade your TSM client if necessary.

JAB.

#!/bin/bash
#
# Wrapper script to run an mmapplypolicy on a GPFS file system when a callback
# is triggered. Specifically it is intended to be triggered by a lowDiskSpace
# event registered with a call back like the following.
#
# mmaddcallback DISKSPACE --command /usr/local/bin/run_ilm_cycle --event
# lowDiskSpace -N nsdnodes --parms "%eventname %fsName"
#
# The script includes cluster wide quiescence locking so that it plays nicely
# with other automated scripts that need GPFS quiescence to run.
#
EVENT_NAME=$1
FS=$2

# determine the mount point for the file system
MOUNT_POINT=`/usr/lpp/mmfs/bin/mmlsfs ${FS} |grep "\-T" |awk '{print $2}'`
HOSTNAME=`/bin/hostname -s`

# lock file
LOCKDIR="${MOUNT_POINT}/ctdb/quiescence.lock"

# exit codes and text for them
ENO_SUCCESS=0; ETXT[0]="ENO_SUCCESS"
ENO_GENERAL=1; ETXT[1]="ENO_GENERAL"
ENO_LOCKFAIL=2; ETXT[2]="ENO_LOCKFAIL"
ENO_RECVSIG=3; ETXT[3]="ENO_RECVSIG"

#
# Attempt to get a lock
#
trap 'ECODE=$?; echo "[${PROG}] Exit: ${ETXT[ECODE]}($ECODE)" >&2' 0
echo -n "[${PROG}] Locking: " >&2

if mkdir "${LOCKDIR}" &>/dev/null; then
    # lock succeeded, install signal handlers
    trap 'ECODE=$?; echo "[${PROG}] Removing lock. Exit: ${ETXT[ECODE]}($ECODE)" >&2
          rm -rf "${LOCKDIR}"' 0
    # the following handler will exit the script on receiving these signals
    # the trap on "0" (EXIT) from above will be triggered by this scripts
    # "exit" command!
    trap 'echo "[${PROG}] Killed by a signal." >&2
          exit ${ENO_RECVSIG}' 1 2 3 15
    echo "success, installed signal handlers"
else
    # exit, we're locked!
    echo "lock failed other operation running" >&2
    exit ${ENO_LOCKFAIL}
fi

# note what we are doing and where we are doing it
/bin/touch $LOCKDIR/${EVENT_NAME}.${HOSTNAME}

# apply the policy
echo "running mmapplypolicy for the file system: ${FS}"
/usr/lpp/mmfs/bin/mmapplypolicy $FS -N nsdnodes -P $MOUNT_POINT/rules.txt

exit 0;

-- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From j.buzzard at dundee.ac.uk Thu Mar 22 12:41:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:41:34 +0000 Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1DFE.4020803@dundee.ac.uk> On 03/16/2012 03:21 PM, Jez Tucker wrote: > What's your github user id ? > If you don't have one go here: https://github.com/signup/free > Been having major GPFS woes this week. My id is jabuzzard. JAB. -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu Mar 22 12:52:06 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 12:52:06 +0000 Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F6B1DFE.4020803@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> <4F6B1DFE.4020803@dundee.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501B04D@WARVWEXC2.uk.deluxe-eu.com> Allo You should now have pull+push access. I've not setup a proper branch structure yet, but I suggest you do a pull and add something along the lines of trunk/master/scripts/ > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 22 March 2012 12:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > On 03/16/2012 03:21 PM, Jez Tucker wrote: > > What's your github user id ? > > If you don't have one go here: https://github.com/signup/free > > > > Been having major GPFS woes this week. My id is jabuzzard. > > JAB. > > -- > Jonathan A.
Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences University of Dundee, DD1 > 5EH _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From sfadden at us.ibm.com Thu Mar 22 18:15:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 22 Mar 2012 11:15:05 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: Let me know if this helps http://www.ibm.com/developerworks/wikis/display/hpccentral/Threshold+based+migration+using+callbacks+example It is not specifically TSM but the model is the same. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From geraint.north at uk.ibm.com Fri Mar 23 18:15:53 2012 From: geraint.north at uk.ibm.com (Geraint North) Date: Fri, 23 Mar 2012 18:15:53 +0000 Subject: [gpfsug-discuss] AUTO: Geraint North is prepared for DELETION (FREEZE) (returning 29/03/2012) Message-ID: I am out of the office until 29/03/2012. Note: This is an automated response to your message "[gpfsug-discuss] GPFSUG #6 - agenda" sent on 19/3/2012 12:14:23. This is the only notification you will receive while this person is away.
From Jez.Tucker at rushes.co.uk Fri Mar 16 15:21:31 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 16 Mar 2012 15:21:31 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F63383D.4020000@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> Message-ID: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> What's your github user id ? If you don't have one go here: https://github.com/signup/free > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 16 March 2012 12:55 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > Jez Tucker wrote: > > [SNIP] > > > > > The Git repository for the GPFS UG is here: > > > > https://github.com/gpfsug/gpfsug-tools > > > > If you want to commit something, drop me a quick email and I'll give > > you write access. > > > > Write access for my mmdfree command. > > I also have an mmattrib command, that allows you to set the DOS style file > attributes from Linux. That needs a bit of tidying up. > > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences > University of Dundee, DD1 5EH > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Mon Mar 19 12:14:23 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:14:23 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda Message-ID: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message.
All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From viccornell at gmail.com Mon Mar 19 12:21:33 2012 From: viccornell at gmail.com (Vic Cornell) Date: Mon, 19 Mar 2012 12:21:33 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: > Hello all > > I feel I should ask, is there anything that anybody thinks we should all see at the next UG? > > Tell us and we?ll see if we can sort it out. > > Jez > --- > Jez Tucker > Senior Sysadmin > Rushes > > GPFSUG Chairman (chair at gpfsug.org) > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 12:31:03 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 12:31:03 +0000 Subject: [gpfsug-discuss] GPFSUG #6 - agenda In-Reply-To: <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> References: <39571EA9316BE44899D59C7A640C13F5017E1A@WARVWEXC2.uk.deluxe-eu.com> <500D7FAA-3C2D-4115-AC03-CC721A9E62AD@gmail.com> Message-ID: <39571EA9316BE44899D59C7A640C13F5018040@WARVWEXC2.uk.deluxe-eu.com> "...next formal meeting will take place in September/October time and will be kindly hosted by AWE in Reading." From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Vic Cornell Sent: 19 March 2012 12:22 To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] GPFSUG #6 - agenda Hi Jez, when is it likely to be? Vic On 19 Mar 2012, at 12:14, Jez Tucker wrote: Hello all I feel I should ask, is there anything that anybody thinks we should all see at the next UG? Tell us and we'll see if we can sort it out. Jez --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. 
If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Mon Mar 19 23:36:17 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Mon, 19 Mar 2012 23:36:17 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Message-ID: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. It has me wishing GPFS had a COPY FROM support as well as a MIGRATE FROM function for policies. Surely that would be possible...? Ways I can think of are: - Multiple 'scripted intelligent' rsync threads - Creating a policy to generate a file list to pass N batched files to N nodes to exec (again rsync?) - Barry Evans suggested via AFM. Though out file system needs to be upgraded before we could try this. Rsync handles UNICODE names well. tar, though faster for the first pass does not. Any ideas? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. 
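As a rough illustration of the policy-generated file list option mentioned above: the GPFS policy engine can emit a flat list of candidate files, which can then be split into batches and handed to parallel rsync workers on one or more nodes. The sketch below is illustrative only. The mount points (/mnt/source, /mnt/dest), the pool name 'pool1', the batch size and the worker count are invented placeholders, and the exact mmapplypolicy flags and list-file format should be checked against the documentation for your GPFS release.

#!/bin/bash
# Sketch: build a file list with the GPFS policy engine, then fan the copies
# out across several rsync workers using GNU parallel.
# Assumptions: source file system mounted at /mnt/source, data in pool 'pool1',
# destination at /mnt/dest, 8 concurrent workers.

cat > /tmp/listrule.pol <<'EOF'
RULE EXTERNAL LIST 'copylist' EXEC ''
RULE 'gen' LIST 'copylist' FROM POOL 'pool1'
EOF

# With -I defer and -f, the candidate list is written to /tmp/copy.list.copylist
# rather than being acted upon.
/usr/lpp/mmfs/bin/mmapplypolicy /mnt/source -P /tmp/listrule.pol -I defer -f /tmp/copy

# The list normally prefixes each path with inode/generation fields up to a
# "-- " separator; strip that and the source prefix to leave relative paths.
sed -e 's/^.*-- //' -e 's|^/mnt/source/||' /tmp/copy.list.copylist > /tmp/copy.paths

# Split into batches of 10000 paths and run one rsync per batch, 8 at a time.
split -l 10000 /tmp/copy.paths /tmp/copy.batch.
ls /tmp/copy.batch.* | parallel -j 8 rsync -a --files-from={} /mnt/source/ /mnt/dest/

Splitting a flat list rather than parallelising per directory keeps the batches roughly even in size however lopsided the tree is, and rsync's --files-from copes with UNICODE names as long as each path sits on its own line.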
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Tue Mar 20 00:47:14 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Tue, 20 Mar 2012 00:47:14 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> In answer to my own question ... http://www.gnu.org/software/parallel http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync Or http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). One for the bookmarks... hopefully you'll find it useful. Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 19 March 2012 23:36 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? Hello Just wondering how other people go about copying loads of files in a many, many deep directory path from one file system to another. Assume filenames are full UNICODE and can contain almost any character. r to use. Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Tue Mar 20 09:08:55 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 20 Mar 2012 09:08:55 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F684927.1060903@ed.ac.uk> We don't do this often, so when we do we tend to do a "noddy" parallelisation of rsync - manually divvy the folders up and spawn multiple threads. We might have a requirement soon(ish) to do this on a much larger scale though, with virtually no interruption to service - so I'm very keen to see what how the AFM solution looks, since this should allow us to present a continual and single view into the filesystem, whilst migrating it all in the background to the new filesystem, with just a brief wobble whilst we flip between the old and new views. On 20/03/12 00:47, Jez Tucker wrote: > In answer to my own question ? 
> > http://www.gnu.org/software/parallel > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_rsync > > Or > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > One for the bookmarks? hopefully you?ll find it useful. > > Jez > > *From:*gpfsug-discuss-bounces at gpfsug.org > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > *Sent:* 19 March 2012 23:36 > *To:* gpfsug main discussion list > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Hello > > Just wondering how other people go about copying loads of files in a > many, many deep directory path from one file system to another. Assume > filenames are full UNICODE and can contain almost any character. > > r to use. > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be > subject to legal privilege. If you are not the intended recipient, you > must not use, copy, distribute or disclose the e-mail or any part of its > contents or take any action in reliance on it. If you have received this > e-mail in error, please e-mail the sender by replying to this message. > All reasonable precautions have been taken to ensure no viruses are > present in this e-mail. Rushes Postproduction Limited cannot accept > responsibility for loss or damage arising from the use of this e-mail or > attachments and recommend that you subject these to your virus checking > procedures prior to use. > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From Jez.Tucker at rushes.co.uk Wed Mar 21 16:37:02 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:37:02 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) Message-ID: <39571EA9316BE44899D59C7A640C13F501A9B5@WARVWEXC2.uk.deluxe-eu.com> --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 16:47:26 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 16:47:26 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... 
try again Message-ID: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfadden at us.ibm.com Wed Mar 21 18:00:01 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Wed, 21 Mar 2012 11:00:01 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
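For anyone following the thread, the settings being asked about can be inspected, and a callback registered, roughly as follows. This is a generic sketch rather than a recommendation: 'gpfs0' and 'tsm01' are placeholder names, and whether enableLowSpaceEvents should be set at all depends on the GPFS level in use (see the PMRs referenced in the replies below).

# Sketch only: 'gpfs0' and 'tsm01' are placeholders for a file system and an HSM node.

# Cluster-wide low-space event setting
/usr/lpp/mmfs/bin/mmlsconfig | grep -i lowspace

# Is DMAPI enabled on the file system (the -z setting HSM needs)?
/usr/lpp/mmfs/bin/mmlsfs gpfs0 -z

# Which callbacks are currently registered?
/usr/lpp/mmfs/bin/mmlscallback

# Register a low/no disk space callback that starts a policy run on one node,
# along the lines of the examples later in this thread.
/usr/lpp/mmfs/bin/mmaddcallback DISKSPACE \
    --command /usr/lpp/mmfs/bin/mmstartpolicy \
    --event lowDiskSpace,noDiskSpace \
    -N tsm01 --parms "%eventName %fsName"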
URL: From Jez.Tucker at rushes.co.uk Wed Mar 21 18:28:34 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Wed, 21 Mar 2012 18:28:34 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Yup. We're running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent 5.4 x64 At the moment, we're running migration policies 'auto-manually' via a script which checks if it needs to be run as the THRESHOLDs are not working. We've noticed the behaviour/stability of thresholds change each release from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were told DMAPI for Windows was available [but undocumented], alas not. I had a previous PMR with support who told me to set: enableLowSpaceEvents=no -z=yes on our filesystem(s) Our tsm server has the correct callback setup: [root at tsm01 ~]# mmlscallback DISKSPACE command = /usr/lpp/mmfs/bin/mmstartpolicy event = lowDiskSpace,noDiskSpace node = tsm01.rushesfx.co.uk parms = %eventName %fsName N.B. I set the node just to be tsm01 as other nodes do not have HSM installed, hence if the callback occurred on those nodes, they'd run mmstartpolicy which would run dsmmigrate which is not installed on those nodes. tsm01 is currently setup as a manager-gateway node (very good for archiving up Isilons over NFS...) mmlscluster 3 tsm01.rushesfx.co.uk 10.100.106.50 tsm01.rushesfx.co.uk manager-gateway >From my testing: I can fill a test file system and receive the noDiskSpace callback, but not the lowDiskSpace. This is probably related to the enableLowSpaceEvents=no, but support told me to disable that... FYI. Follow PMRs #31788,999,866 and 67619,999,866 Jez From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Scott Fadden Sent: 21 March 2012 18:00 To: gpfsug main discussion list Cc: gpfsug main discussion list; gpfsug-discuss-bounces at gpfsug.org Subject: Re: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again What do you want to achieve? Threshold based migration? Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker > To: gpfsug main discussion list > Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Wed Mar 21 22:04:57 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Wed, 21 Mar 2012 22:04:57 +0000 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office until Monday 26th March Message-ID: I am out of the office until 26/03/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 3, Issue 6" sent on 21/3/2012 18:31:00. This is the only notification you will receive while this person is away. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:27:56 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:27:56 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <4F684927.1060903@ed.ac.uk> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Here's a quick script to use GNU parallel: We saturated 8 Gb FC with this. #!/bin/sh DATADIR=/mnt/gpfs/srcfldr DESTDIR=/mnt/gpfs/destinationfldr while read LINE; do PROJ=$(echo $LINE | awk '{ print $1; }'); DESTFLDR=$(echo $LINE | awk '{ print $2; }'); echo "$PROJ -> $DEST"; mkdir -p "$DESTDIR/$DESTFLDR"; find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > RESTORE_LOGS/$PROJ.restore.log; done < restore.my.projectlist This assumes restore.nsd01.projectlist contains something such as: ... > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Orlando Richards > Sent: 20 March 2012 09:09 > To: gpfsug-discuss at gpfsug.org > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > We don't do this often, so when we do we tend to do a "noddy" > parallelisation of rsync - manually divvy the folders up and spawn > multiple threads. 
> > We might have a requirement soon(ish) to do this on a much larger scale > though, with virtually no interruption to service - so I'm very keen to > see what how the AFM solution looks, since this should allow us to > present a continual and single view into the filesystem, whilst > migrating it all in the background to the new filesystem, with just a > brief wobble whilst we flip between the old and new views. > > > On 20/03/12 00:47, Jez Tucker wrote: > > In answer to my own question ... > > > > http://www.gnu.org/software/parallel > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > sync > > > > Or > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > One for the bookmarks... hopefully you'll find it useful. > > > > Jez > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > *Sent:* 19 March 2012 23:36 > > *To:* gpfsug main discussion list > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from one > > stgpool to another- how do you do it? > > > > Hello > > > > Just wondering how other people go about copying loads of files in a > > many, many deep directory path from one file system to another. Assume > > filenames are full UNICODE and can contain almost any character. > > > > r to use. > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > > tel: +44 (0)20 7437 8676 > > web: http://www.rushes.co.uk > > The information contained in this e-mail is confidential and may be > > subject to legal privilege. If you are not the intended recipient, you > > must not use, copy, distribute or disclose the e-mail or any part of its > > contents or take any action in reliance on it. If you have received this > > e-mail in error, please e-mail the sender by replying to this message. > > All reasonable precautions have been taken to ensure no viruses are > > present in this e-mail. Rushes Postproduction Limited cannot accept > > responsibility for loss or damage arising from the use of this e-mail or > > attachments and recommend that you subject these to your virus checking > > procedures prior to use. > > > > > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. 
Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From Jez.Tucker at rushes.co.uk Thu Mar 22 09:35:00 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 09:35:00 +0000 Subject: [gpfsug-discuss] GPFS cpoying huge batches of files from one stgpool to another- how do you do it? In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F50194BE@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F50194E0@WARVWEXC2.uk.deluxe-eu.com> <4F684927.1060903@ed.ac.uk> <39571EA9316BE44899D59C7A640C13F501AE99@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <39571EA9316BE44899D59C7A640C13F501AEC2@WARVWEXC2.uk.deluxe-eu.com> Apologies should have read: 'This assumes restore.my.projectlist contains'.. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jez Tucker > Sent: 22 March 2012 09:28 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from one > stgpool to another- how do you do it? > > Here's a quick script to use GNU parallel: > > We saturated 8 Gb FC with this. > > > #!/bin/sh > > DATADIR=/mnt/gpfs/srcfldr > DESTDIR=/mnt/gpfs/destinationfldr > > while read LINE; do > > PROJ=$(echo $LINE | awk '{ print $1; }'); > DESTFLDR=$(echo $LINE | awk '{ print $2; }'); > > echo "$PROJ -> $DEST"; > mkdir -p "$DESTDIR/$DESTFLDR"; > > find $PROJ/ | parallel cp --parents -puv "{}" "$DESTDIR/$DESTFLDR/"; > rsync -av $PROJ "$DESTDIR/$DESTFLDR/" 2>&1 > > RESTORE_LOGS/$PROJ.restore.log; > > done < restore.my.projectlist > > > This assumes restore.nsd01.projectlist contains something such as: > > > > ... > > > > > -----Original Message----- > > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > > bounces at gpfsug.org] On Behalf Of Orlando Richards > > Sent: 20 March 2012 09:09 > > To: gpfsug-discuss at gpfsug.org > > Subject: Re: [gpfsug-discuss] GPFS cpoying huge batches of files from > > one stgpool to another- how do you do it? > > > > We don't do this often, so when we do we tend to do a "noddy" > > parallelisation of rsync - manually divvy the folders up and spawn > > multiple threads. > > > > We might have a requirement soon(ish) to do this on a much larger > > scale though, with virtually no interruption to service - so I'm very > > keen to see what how the AFM solution looks, since this should allow > > us to present a continual and single view into the filesystem, whilst > > migrating it all in the background to the new filesystem, with just a > > brief wobble whilst we flip between the old and new views. > > > > > > On 20/03/12 00:47, Jez Tucker wrote: > > > In answer to my own question ... > > > > > > http://www.gnu.org/software/parallel > > > > > > > > > http://www.gnu.org/software/parallel/man.html#example__parallelizing_r > > sync > > > > > > Or > > > > > > http://code.google.com/p/parallel-ssh/ (parallel versions of rsync etc). > > > > > > One for the bookmarks... hopefully you'll find it useful. 
> > > > > > Jez > > > > > > *From:*gpfsug-discuss-bounces at gpfsug.org > > > [mailto:gpfsug-discuss-bounces at gpfsug.org] *On Behalf Of *Jez Tucker > > > *Sent:* 19 March 2012 23:36 > > > *To:* gpfsug main discussion list > > > *Subject:* [gpfsug-discuss] GPFS cpoying huge batches of files from > > > one stgpool to another- how do you do it? > > > > > > Hello > > > > > > Just wondering how other people go about copying loads of files in a > > > many, many deep directory path from one file system to another. > > > Assume filenames are full UNICODE and can contain almost any > character. > > > > > > r to use. > > > > > > > > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D > 4UH > > > tel: +44 (0)20 7437 8676 > > > web: http://www.rushes.co.uk > > > The information contained in this e-mail is confidential and may be > > > subject to legal privilege. If you are not the intended recipient, > > > you must not use, copy, distribute or disclose the e-mail or any > > > part of its contents or take any action in reliance on it. If you > > > have received this e-mail in error, please e-mail the sender by replying > to this message. > > > All reasonable precautions have been taken to ensure no viruses are > > > present in this e-mail. Rushes Postproduction Limited cannot accept > > > responsibility for loss or damage arising from the use of this > > > e-mail or attachments and recommend that you subject these to your > > > virus checking procedures prior to use. > > > > > > > > > _______________________________________________ > > > gpfsug-discuss mailing list > > > gpfsug-discuss at gpfsug.org > > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > > > -- > > -- > > Dr Orlando Richards > > Information Services > > IT Infrastructure Division > > Unix Section > > Tel: 0131 650 4994 > > > > The University of Edinburgh is a charitable body, registered in > > Scotland, with registration number SC005336. > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH > tel: +44 (0)20 7437 8676 > web: http://www.rushes.co.uk > The information contained in this e-mail is confidential and may be subject > to legal privilege. If you are not the intended recipient, you must not use, > copy, distribute or disclose the e-mail or any part of its contents or take any > action in reliance on it. If you have received this e-mail in error, please e- > mail the sender by replying to this message. All reasonable precautions > have been taken to ensure no viruses are present in this e-mail. Rushes > Postproduction Limited cannot accept responsibility for loss or damage > arising from the use of this e-mail or attachments and recommend that you > subject these to your virus checking procedures prior to use. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. 
If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From j.buzzard at dundee.ac.uk Thu Mar 22 12:04:50 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:04:50 +0000 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> <39571EA9316BE44899D59C7A640C13F501AB28@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1562.9020403@dundee.ac.uk> On 03/21/2012 06:28 PM, Jez Tucker wrote: > Yup. > > We?re running svr 6.2.2-30 and BA/HSM 6.2.4-1, GPFS 3.4.0-10 on RH Ent > 5.4 x64 > > At the moment, we?re running migration policies ?auto-manually? via a > script which checks if it needs to be run as the THRESHOLDs are not working. > > We?ve noticed the behaviour/stability of thresholds change each release > from 3.4.0-8 onwards. 3.4.0-5 worked, but we jumped to .8 as we were > told DMAPI for Windows was available [but undocumented], alas not. > > I had a previous PMR with support who told me to set: > > enableLowSpaceEvents=no > > -z=yes on our filesystem(s) > > Our tsm server has the correct callback setup: > > [root at tsm01 ~]# mmlscallback > > DISKSPACE > > command = /usr/lpp/mmfs/bin/mmstartpolicy > > event = lowDiskSpace,noDiskSpace > > node = tsm01.rushesfx.co.uk > > parms = %eventName %fsName > > N.B. I set the node just to be tsm01 as other nodes do not have HSM > installed, hence if the callback occurred on those nodes, they?d run > mmstartpolicy which would run dsmmigrate which is not installed on those > nodes. Note that you can have more than one node with the hsm client installed. Gives some redundancy should a node fail. Apart from that your current setup is a *REALLY* bad idea. As I understand it when you hit the lowDiskSpace event every two minutes it will call the mmstartpolicy command. That's fine if your policy can run inside two minutes and cause the usage to fall below the threshold. As that is extremely unlikely you need to write a script with locking to prevent that happening, otherwise you will have multiple instances of the policy running all at once and bringing everything to it's knees. I would add that the GPFS documentation surrounding this is *very* poor, and complete with the utter failure in the release notes to mention the change of behaviour between 3.2 and 3.3 this whole area needs to be approached with caution as clearly IBM are happy to break things with out telling us. That said I run with the following on 3.4.0-6 DISKSPACE command = /usr/local/bin/run_ilm_cycle event = lowDiskSpace node = nsdnodes parms = %eventName %fsName And the run_ilm_cycle works just fine, and is included inline below. It is installed on all NSD nodes. This is not strict HSM as it is pushing from my fast to slow disk. However as my nearline pool is not full, I have not yet applied HSM to that pool. 
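The wrapper script included further down applies a rules.txt whose contents are not shown. As a rough starting point, a minimal threshold-driven migration policy for this kind of fast-to-nearline setup might look like the sketch below; the pool names and the 85/70 occupancy figures are invented placeholders rather than anyone's actual rules.

/* Hypothetical rules.txt: once the 'fast' pool passes 85% full, migrate the
   largest-allocation files to 'nearline' until it is back down to 70%. */
RULE 'spill' MIGRATE FROM POOL 'fast'
     THRESHOLD(85,70)
     WEIGHT(KB_ALLOCATED)
     TO POOL 'nearline'

Note that for the lowDiskSpace event to fire in the first place, a policy containing a THRESHOLD rule generally has to be installed with mmchpolicy; the callback then runs mmapplypolicy to do the actual data movement.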
In fact although I have HSM enabled and it works on the file system it is all turned off as we are still running with 5.5 servers we cannot install the 6.3 client, and without the 6.3 client you cannot turn of dsmscoutd and that just tanks our file system when it starts. Note anyone still reading I urge you to read http://www-01.ibm.com/support/docview.wss?uid=swg1IC73091 and upgrade your TSM client if necessary. JAB. #!/bin/bash # # Wrapper script to run an mmapplypolicy on a GPFS file system when a callback # is triggered. Specifically it is intended to be triggered by a lowDiskSpace # event registered with a call back like the following. # # mmaddcallback DISKSPACE --command /usr/local/bin/run_ilm_cycle --event # lowDiskSpace -N nsdnodes --parms "%eventname %fsName" # # The script includes cluster wide quiescence locking so that it plays nicely # with other automated scripts that need GPFS quiescence to run. # EVENT_NAME=$1 FS=$2 # determine the mount point for the file system MOUNT_POINT=`/usr/lpp/mmfs/bin/mmlsfs ${FS} |grep "\-T" |awk '{print $2}'` HOSTNAME=`/bin/hostname -s` # lock file LOCKDIR="${MOUNT_POINT}/ctdb/quiescence.lock" # exit codes and text for them ENO_SUCCESS=0; ETXT[0]="ENO_SUCCESS" ENO_GENERAL=1; ETXT[1]="ENO_GENERAL" ENO_LOCKFAIL=2; ETXT[2]="ENO_LOCKFAIL" ENO_RECVSIG=3; ETXT[3]="ENO_RECVSIG" # # Attempt to get a lock # trap 'ECODE=$?; echo "[${PROG}] Exit: ${ETXT[ECODE]}($ECODE)" >&2' 0 echo -n "[${PROG}] Locking: " >&2 if mkdir "${LOCKDIR}" &>/dev/null; then # lock succeeded, install signal handlers trap 'ECODE=$?; echo "[${PROG}] Removing lock. Exit: ${ETXT[ECODE]}($ECODE)" >&2 rm -rf "${LOCKDIR}"' 0 # the following handler will exit the script on receiving these signals # the trap on "0" (EXIT) from above will be triggered by this scripts # "exit" command! trap 'echo "[${PROG}] Killed by a signal." >&2 exit ${ENO_RECVSIG}' 1 2 3 15 echo "success, installed signal handlers" else # exit, we're locked! echo "lock failed other operation running" >&2 exit ${ENO_LOCKFAIL} fi # note what we are doing and where we are doing it /bin/touch $LOCKDIR/${EVENT_NAME}.${HOSTNAME} # apply the policy echo "running mmapplypolicy for the file system: ${FS}" /usr/lpp/mmfs/bin/mmapplypolicy $FS -N nsdnodes -P $MOUNT_POINT/rules.txt exit 0; -- Jonathan A. Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From j.buzzard at dundee.ac.uk Thu Mar 22 12:41:34 2012 From: j.buzzard at dundee.ac.uk (Jonathan Buzzard) Date: Thu, 22 Mar 2012 12:41:34 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> Message-ID: <4F6B1DFE.4020803@dundee.ac.uk> On 03/16/2012 03:21 PM, Jez Tucker wrote: > What's your github user id ? > If you don't have one go here: https://github.com/signup/free > Been having major GPFS woes this week. My id is jabuzzard. JAB. -- Jonathan A. 
Buzzard Tel: +441382-386998 Storage Administrator, College of Life Sciences University of Dundee, DD1 5EH From Jez.Tucker at rushes.co.uk Thu Mar 22 12:52:06 2012 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Thu, 22 Mar 2012 12:52:06 +0000 Subject: [gpfsug-discuss] GPFS UG #5 follow up In-Reply-To: <4F6B1DFE.4020803@dundee.ac.uk> References: <3147C311DEF9304ABFB764A6D5C1B6AE8140C9A0@WARVWEXC2.uk.deluxe-eu.com> <4F63383D.4020000@dundee.ac.uk> <3147C311DEF9304ABFB764A6D5C1B6AE8140CAB1@WARVWEXC2.uk.deluxe-eu.com> <4F6B1DFE.4020803@dundee.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F501B04D@WARVWEXC2.uk.deluxe-eu.com> Allo You should now have pull+push access. I've not setup a proper branch structure yet, but I suggest you do a pull and add something along the lines of trunk/master/scripts/ > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 22 March 2012 12:42 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] GPFS UG #5 follow up > > On 03/16/2012 03:21 PM, Jez Tucker wrote: > > What's your github user id ? > > If you don't have one go here: https://github.com/signup/free > > > > Been having major GPFS woes this week. My id is jabuzzard. > > JAB. > > -- > Jonathan A. Buzzard Tel: +441382-386998 > Storage Administrator, College of Life Sciences University of Dundee, DD1 > 5EH _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use. From sfadden at us.ibm.com Thu Mar 22 18:15:05 2012 From: sfadden at us.ibm.com (Scott Fadden) Date: Thu, 22 Mar 2012 11:15:05 -0700 Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again In-Reply-To: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> References: <39571EA9316BE44899D59C7A640C13F501A9EC@WARVWEXC2.uk.deluxe-eu.com> Message-ID: Let me know if this helps http://www.ibm.com/developerworks/wikis/display/hpccentral/Threshold+based+migration+using+callbacks+example It is not specifically TSM but the model is the same. Scott Fadden GPFS Technical Marketing Desk: (503) 578-5630 Cell: (503) 880-5833 sfadden at us.ibm.com http://www.ibm.com/systems/gpfs From: Jez Tucker To: gpfsug main discussion list Date: 03/21/2012 09:50 AM Subject: [gpfsug-discuss] TSM 6.x + Space Management + GPFS 3.4.xx (all linux) ... try again Sent by: gpfsug-discuss-bounces at gpfsug.org Is anyone using this configuration? If so, can you confirm your settings of: enableLowSpaceEvents mmlscallback (re: lowSpace/noDiskSpace) mmlscluster | grep It seems support and the developers cannot decide how this should be setup (!!) 
Ta --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) Rushes Postproduction Limited, 66 Old Compton Street, London W1D 4UH tel: +44 (0)20 7437 8676 web: http://www.rushes.co.uk The information contained in this e-mail is confidential and may be subject to legal privilege. If you are not the intended recipient, you must not use, copy, distribute or disclose the e-mail or any part of its contents or take any action in reliance on it. If you have received this e-mail in error, please e-mail the sender by replying to this message. All reasonable precautions have been taken to ensure no viruses are present in this e-mail. Rushes Postproduction Limited cannot accept responsibility for loss or damage arising from the use of this e-mail or attachments and recommend that you subject these to your virus checking procedures prior to use._______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From geraint.north at uk.ibm.com Fri Mar 23 18:15:53 2012 From: geraint.north at uk.ibm.com (Geraint North) Date: Fri, 23 Mar 2012 18:15:53 +0000 Subject: [gpfsug-discuss] AUTO: Geraint North is prepared for DELETION (FREEZE) (returning 29/03/2012) Message-ID: I am out of the office until 29/03/2012. Note: This is an automated response to your message "[gpfsug-discuss] GPFSUG #6 - agenda" sent on 19/3/2012 12:14:23. This is the only notification you will receive while this person is away.