From luke.raimbach at oerc.ox.ac.uk  Fri Mar 1 09:13:35 2013
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Fri, 1 Mar 2013 09:13:35 +0000
Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?
In-Reply-To: <39571EA9316BE44899D59C7A640C13F5306EED70@WARVWEXC1.uk.deluxe-eu.com>
References: <39571EA9316BE44899D59C7A640C13F5306EED70@WARVWEXC1.uk.deluxe-eu.com>
Message-ID: 

I really hope this isn't a problem as I will want to end up doing this.

Does it do in-line copy when you backup TSM HSMd data using TSM? Surely it does?

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker
Sent: 28 February 2013 17:25
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?

Hello all,

I have to ask

Does anyone else do this?

We have a problem and I'm told that "it's so rare that anyone would archive data which is HSMd".

I.E. to create an archive whereby a project is entirely or partially HSMd to LTO
- online data is archived to tape
- offline data is copied from HSM tape to archive tape 'inline'

Surely nobody pulls back all their data to disk before re-archiving back to tape?

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Jez.Tucker at rushes.co.uk  Fri Mar 1 09:56:39 2013
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Fri, 1 Mar 2013 09:56:39 +0000
Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?
In-Reply-To: 
Message-ID: <39571EA9316BE44899D59C7A640C13F5306EEFDE@WARVWEXC1.uk.deluxe-eu.com>

AFAIK it does not do inline for backups. I may be entirely wrong; it might depend on our setup. It definitely does for archive, which is where we are seeing our issue.

That said, it looks at present like a memory allocation bug which the dev team are working on fixing. We're limited to filelists no bigger than 4000 files at present as a workaround. I was looking to archive 195K files, so you can imagine how inefficient that is.

Let me drag out the actual reference doc when I get into work.

From: Luke Raimbach [mailto:luke.raimbach at oerc.ox.ac.uk]
Sent: Friday, March 01, 2013 09:13 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?

I really hope this isn't a problem as I will want to end up doing this.

Does it do in-line copy when you backup TSM HSMd data using TSM? Surely it does?

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker
Sent: 28 February 2013 17:25
To: gpfsug main discussion list
Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?

Hello all,

I have to ask

Does anyone else do this?

We have a problem and I'm told that "it's so rare that anyone would archive data which is HSMd".

I.E. to create an archive whereby a project is entirely or partially HSMd to LTO
- online data is archived to tape
- offline data is copied from HSM tape to archive tape 'inline'

Surely nobody pulls back all their data to disk before re-archiving back to tape?

---
Jez Tucker
Senior Sysadmin
Rushes
GPFSUG Chairman (chair at gpfsug.org)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jonathan at buzzard.me.uk  Fri Mar 1 10:16:08 2013
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Fri, 01 Mar 2013 10:16:08 +0000
Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?
In-Reply-To: 
References: <39571EA9316BE44899D59C7A640C13F5306EED70@WARVWEXC1.uk.deluxe-eu.com>
Message-ID: <1362132968.23736.11.camel@buzzard.phy.strath.ac.uk>

On Fri, 2013-03-01 at 09:13 +0000, Luke Raimbach wrote:
> I really hope this isn't a problem as I will want to end up doing
> this.

I imagine the notion is that if you are using HSM what do you gain from archiving so why do it...

The traditional answer would be to reduce the number of files in the file system, but with faster backup clients and now policy based reconciliation that requirement should be much reduced.

>
> Does it do in-line copy when you backup TSM HSMd data using TSM?
> Surely it does?
>

That is not as useful as you might imagine. With the smart recalls that TSM 6.3 can do, if you have the space you are probably better recalling them before the backup.

JAB.

-- 
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.

From Jez.Tucker at rushes.co.uk  Fri Mar 1 12:43:54 2013
From: Jez.Tucker at rushes.co.uk (Jez Tucker)
Date: Fri, 1 Mar 2013 12:43:54 +0000
Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ?
In-Reply-To: <1362132968.23736.11.camel@buzzard.phy.strath.ac.uk>
References: <39571EA9316BE44899D59C7A640C13F5306EED70@WARVWEXC1.uk.deluxe-eu.com> <1362132968.23736.11.camel@buzzard.phy.strath.ac.uk>
Message-ID: <39571EA9316BE44899D59C7A640C13F5306EF0E3@WARVWEXC1.uk.deluxe-eu.com>

Here's the relevant section of the manual regarding in-line archiving:
http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.hsmul.doc/t_arc_mig_premigs.html

It looks like inline backup may be possible if you backup files after they have been migrated. However, for obviously sensible reasons, our mgmt. classes specify 'must be backed up before migration'.
http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.hsmul.doc/c_bck_before.html

We're using archiving and deleting of finalised projects as a means to reclaim valuable metadata space.

Clearly if you're close to your threshold levels and you're recalling to archive again, you'll end up migrating other data. You can't worry about this too much - it should be 'auto-magical' but it will highly utilise your tape drives for some time.

> -----Original Message-----
> From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-
> bounces at gpfsug.org] On Behalf Of Jonathan Buzzard
> Sent: 01 March 2013 10:16
> To: gpfsug main discussion list
> Subject: Re: [gpfsug-discuss] Who uses TSM to archive HSMd data
> (inline) ?
>
> On Fri, 2013-03-01 at 09:13 +0000, Luke Raimbach wrote:
> > I really hope this isn't a problem as I will want to end up doing
> > this.
>
> I imagine the notion is that if you are using HSM what do you gain
> from archiving so why do it...
>
> The traditional answer would be to reduce the number of files in the
> file system, but with faster backup clients and now policy based
> reconciliation that requirement should be much reduced.
>
> >
> > Does it do in-line copy when you backup TSM HSMd data using
> TSM?
> > Surely it does?
> >
>
> That is not as useful as you might imagine. With the smart recalls
> that TSM 6.3 can do if you have the space you are probably better
> recalling them before the backup.
>
> JAB.
>
> --
> Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
> Fife, United Kingdom.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
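As a rough, hedged illustration of the batching workaround Jez describes above: a list of 195K files can be fed to the backup-archive client in chunks that stay under the 4000-file limit. The sketch below is not from the thread; the input list path, batch size and description text are assumptions, and it simply drives the standard dsmc archive -filelist= option once per chunk. Check the behaviour against your own TSM client level.

#!/usr/bin/env python
# Minimal sketch (assumptions only): split a large file list into batches of
# at most 4000 entries and run one "dsmc archive -filelist=..." per batch.
# The 4000-file limit, paths and description are assumptions taken from the
# thread, not verified values for any particular TSM client level.
import subprocess
import tempfile

BATCH_SIZE = 4000                       # workaround limit mentioned above
INPUT_LIST = "/tmp/project_files.txt"   # hypothetical list, one path per line

with open(INPUT_LIST) as f:
    paths = [line.strip() for line in f if line.strip()]

for start in range(0, len(paths), BATCH_SIZE):
    batch = paths[start:start + BATCH_SIZE]
    # Write this batch out as its own temporary filelist.
    with tempfile.NamedTemporaryFile("w", suffix=".list", delete=False) as tmp:
        tmp.write("\n".join(batch) + "\n")
        listfile = tmp.name
    # -filelist and -description are standard dsmc archive options.
    subprocess.check_call(["dsmc", "archive",
                           "-filelist=%s" % listfile,
                           "-description=project archive batch %d"
                           % (start // BATCH_SIZE + 1)])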
From mark.bergman at uphs.upenn.edu  Mon Mar 11 19:26:58 2013
From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu)
Date: Mon, 11 Mar 2013 15:26:58 -0400
Subject: [gpfsug-discuss] GPFS architecture choice: large servers or directly-attached clients?
Message-ID: <11099.1363030018@localhost>

I'm in the process of planning a new HPC cluster, and I'd appreciate getting some feedback on different approaches to the GPFS architecture.

The cluster will have about 25~50 nodes initially (up to 1000 CPU-cores), expected to grow to about 50~80 nodes. The jobs are primarily independent, single-threaded, with a mixture of small- to medium-sized IO, and a lot of random access. It is very common to have 100s or 1000s of jobs on different cores and nodes each accessing the same directories, often with an overlap of the same data files. For example, many jobs on different nodes will use the same executable and the same baseline data models, but will differ in individual data files to compare to the model.

My goal is to ensure reasonable performance, particularly when there's a lot of contention from multiple jobs accessing the same meta-data and some of the same data.

My question here is in a choice between two GPFS architecture designs (the storage array configurations, drive types, RAID types, etc. are also being examined separately). I'd really like to hear any suggestions about these (or other) configurations:

[1] Large GPFS servers

About 5 GPFS servers with significant RAM. Each GPFS server would be connected to storage via an 8Gb/s fibre SAN (multiple paths) to storage arrays.

Each GPFS server would provide NSDs via 10Gb/s and 1Gb/s (for legacy servers) ethernet to GPFS clients (computational compute nodes).

Questions:

Since the GPFS clients would not be SAN attached with direct access to block storage, and many clients (~50) will access similar data (and the same directories) for many jobs, it seems like it would make sense to do a lot of caching on the GPFS servers. Multiple clients would benefit by reading from the same cached data on the servers.

I'm thinking of sizing caches to handle 1~2GB per core in the compute nodes, divided by the number of GPFS servers. This would mean caching (maxFilesToCache, pagepool, maxStatCache) on the GPFS servers of about 200GB+ on each GPFS server.

Is there any way to configure GPFS so that the GPFS servers can do a large amount of caching without requiring the same resources on the GPFS clients?

Is there any way to configure the GPFS clients so that their RAM can be used primarily for computational jobs?

[2] Direct-attached GPFS clients

About 3~5 GPFS servers with modest resources (8 CPU-cores, ~60GB RAM). Each GPFS server and client (HPC compute node) would be directly connected to the SAN (8Gb/s fibre, iSCSI over 10Gb/s ethernet, FCoE over 10Gb/s ethernet). Either 10Gb/s or 1Gb/s ethernet for communication between GPFS nodes.

Since this is a relatively small cluster in terms of the total node count, the increased cost in terms of HBAs, switches, and cabling for direct-connecting all nodes to the storage shouldn't be excessive.

Ideas? Suggestions? Things I'm overlooking?
Thanks,

Mark

From erich at uw.edu  Mon Mar 11 20:18:55 2013
From: erich at uw.edu (Eric Horst)
Date: Mon, 11 Mar 2013 13:18:55 -0700
Subject: [gpfsug-discuss] GPFS architecture choice: large servers or directly-attached clients?
In-Reply-To: <11099.1363030018@localhost>
References: <11099.1363030018@localhost>
Message-ID: 

GPFS NSD servers (the ones with the disks attached) do not do any caching. There is no benefit to configuring the NSD servers with significant amounts of memory and increasing pagepool will not provide caching. NSD servers with pagepool in the single digit GB is plenty. The NSD servers for our 4000 core cluster have 12GB RAM and pagepool of 4GB. The 500 clients have pagepool of 2GB.

This is some info from the GPFS wiki regarding NSD servers:

"Assuming no applications or Filesystem Manager services are running on the NSD servers, the pagepool is only used transiently by the NSD worker threads to gather data from client nodes and write the data to disk. The NSD server does not cache any of the data. Each NSD worker just needs one pagepool buffer per operation, and the buffer can be potentially as large as the largest filesystem blocksize that the disks belong to. With the default NSD configuration, there will be 3 NSD worker threads per LUN (nsdThreadsPerDisk) that the node services. So the amount of memory needed in the pagepool will be 3*#LUNS*maxBlockSize. The target amount of space in the pagepool for NSD workers is controlled by nsdBufSpace which defaults to 30%. So the pagepool should be large enough so that 30% of it has enough buffers."

-Eric

On Mon, Mar 11, 2013 at 12:26 PM, wrote:
> [1] Large GPFS servers
> About 5 GPFS servers with significant RAM. Each GPFS server would
> be connected to storage via an 8Gb/s fibre SAN (multiple paths)
> to storage arrays.
>
> Each GPFS server would provide NSDs via 10Gb/s and 1Gb/s (for legacy
> servers) ethernet to GPFS clients (computational compute nodes).
>
> Questions:
>
> Since the GPFS clients would not be SAN attached
> with direct access to block storage, and many
> clients (~50) will access similar data (and the
> same directories) for many jobs, it seems like it
> would make sense to do a lot of caching on the
> GPFS servers. Multiple clients would benefit by
> reading from the same cached data on the servers.
>
> I'm thinking of sizing caches to handle 1~2GB
> per core in the compute nodes, divided by the
> number of GPFS servers. This would mean caching
> (maxFilesToCache, pagepool, maxStatCache) on the
> GPFS servers of about 200GB+ on each GPFS server.
>
> Is there any way to configure GPFS so that the
> GPFS servers can do a large amount of caching
> without requiring the same resources on the
> GPFS clients?
>
> Is there any way to configure the GPFS clients
> so that their RAM can be used primarily for
> computational jobs?
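To put rough numbers on the sizing rule Eric quotes, here is a small worked example. It only restates the wiki formula above (3 NSD worker threads per LUN, one buffer up to the largest filesystem blocksize each, nsdBufSpace defaulting to 30% of the pagepool); the LUN count and blocksize are illustrative assumptions, not recommendations for any particular cluster.

# Worked example of the NSD server pagepool rule quoted above:
#   buffer space needed = 3 * number_of_LUNs * maxBlockSize
#   pagepool            = buffer space / nsdBufSpace fraction (default 30%)
# The LUN count and blocksize below are assumptions for illustration only.

NUM_LUNS = 24            # LUNs served by this NSD server (assumed)
MAX_BLOCKSIZE_MB = 4     # largest filesystem blocksize in MiB (assumed)
NSD_BUF_SPACE = 0.30     # nsdBufSpace default of 30%

buffer_space_mb = 3 * NUM_LUNS * MAX_BLOCKSIZE_MB       # 288 MiB
pagepool_mb = buffer_space_mb / NSD_BUF_SPACE           # 960 MiB

print("NSD worker buffer space: %d MiB" % buffer_space_mb)
print("Minimum pagepool: %d MiB (~%.2f GiB)" % (pagepool_mb, pagepool_mb / 1024.0))

Even with fairly generous assumptions this lands well inside the single-digit-GB pagepool range described above for NSD-only servers.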
From ZEYNEP at de.ibm.com  Mon Mar 25 11:12:25 2013
From: ZEYNEP at de.ibm.com (Zeynep Oeztuerk)
Date: Mon, 25 Mar 2013 12:12:25 +0100
Subject: [gpfsug-discuss] Hello
Message-ID: 

Hello together,

I'm Zeynep Oeztuerk and I'm a computer science student at the University of Stuttgart/Germany. Now I'm writing my diploma thesis at IBM. My diploma thesis is about GPFS encryption and key management. It would be great if I could get more information about GPFS encryption.

Thanks :-)

Regards,
Zeynep Oeztuerk

Student Diplom Informatik
Software Group
E-mail: ZEYNEP at de.ibm.com
Find me on:

Schoenaicher Str. 220
Boeblingen, 71032
Germany

IBM Deutschland Research & Development GmbH
Vorsitzende des Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/jpeg
Size: 6398 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/jpeg
Size: 453 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: image/gif
Size: 2022 bytes
Desc: not available
URL: 

From AHMADYH at sa.ibm.com  Mon Mar 25 12:02:59 2013
From: AHMADYH at sa.ibm.com (Ahmad Y Hussein)
Date: Mon, 25 Mar 2013 16:02:59 +0400
Subject: [gpfsug-discuss] AUTO: Ahmad Y Hussein is out of the office (returning 03/30/2013)
Message-ID: 

I am out of the office until 03/30/2013.

Dear Sender;

I am in a customer engagement with extremely limited email access; I will respond to your emails as soon as I can. For urgent cases please call me on my mobile (+966542001289).

Thank you for understanding.

Regards;
Ahmad Y Hussein

Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 15, Issue 5" sent on 25/03/2013 16:00:03.

This is the only notification you will receive while this person is away.

From Tobias.Kuebler at sva.de  Mon Mar 25 12:45:57 2013
From: Tobias.Kuebler at sva.de (Tobias.Kuebler at sva.de)
Date: Mon, 25 Mar 2013 13:45:57 +0100
Subject: [gpfsug-discuss] AUTO: Tobias Kuebler is out of the office (returning Tue, 04/02/2013)
Message-ID: 

I am out of the office from Mon, 03/25/2013 until Tue, 04/02/2013.

Thank you for your message. Incoming e-mails will not be forwarded during my absence, but I will try to answer them as promptly as possible after my return.

In urgent cases, please contact your responsible sales representative.

Note: This is an automated response to your message "[gpfsug-discuss] AUTO: Ahmad Y Hussein is out of the office (returning 03/30/2013)" sent on 25.03.2013 13:02:59.

This is the only notification you will receive while this person is away.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From crobson at ocf.co.uk  Mon Mar 25 14:38:45 2013
From: crobson at ocf.co.uk (Claire Robson)
Date: Mon, 25 Mar 2013 14:38:45 +0000
Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged
Message-ID: 

Dear All,

The next meeting date is set for Wednesday 24th April and will be taking place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6 Soho Square, London W1D 3PZ).

Getting to Dolby Europe Limited, Soho Square, London
Leave the Tottenham Court Road tube station by the South Oxford Street exit [Exit 1].
Turn left onto Oxford Street.
After about 50m turn left into Soho Street.
Turn right into Soho Square.
4-6 Soho Square is directly in front of you.
Our tentative agenda is as follows:

10:30 Arrivals and refreshments
11:00 Introductions and committee updates
      Jez Tucker, Group Chair & Claire Robson, Group Secretary
11:05 GPFS OpenStack Integration
      Prasenhit Sarkar, IBM Almaden Research Labs
      GPFS FPO
      Dinesh Subhraveti, IBM Almaden Research Labs
11:45 SAMBA 4.0 & CTDB 2.0
      Michael Adams, SAMBA Development Team
12:15 SAMBA & GPFS Integration
      Volker Lendecke, SAMBA Development Team
13:00 Lunch (Buffet provided)
14:00 GPFS Native RAID & LTFS
      Jim Roche, IBM
14:45 User Stories
15:45 Group discussion: Challenges, experiences and questions & Committee matters
      Led by Jez Tucker, Group Chairperson
16:00 Close

We will be starting at 11:00am and concluding at 4pm but some of the speaker timings may alter slightly. I will be posting further details on what the presentations cover over the coming week or so.

We hope you can make it for what will be a really interesting day of GPFS discussions. Please register with me if you would like to attend - registrations are based on a first come first served basis.

Best regards,

Claire Robson
GPFS User Group Secretary

Tel: 0114 257 2200
Mob: 07508 033896
Fax: 0114 257 0022
Web: www.gpfsug.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From luke.raimbach at oerc.ox.ac.uk  Mon Mar 25 15:15:16 2013
From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach)
Date: Mon, 25 Mar 2013 15:15:16 +0000
Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged
In-Reply-To: 
References: 
Message-ID: 

Hi Claire,

Please register me!

Cheers,
Luke.

From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Claire Robson
Sent: 25 March 2013 14:39
To: gpfsug-discuss at gpfsug.org
Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged

Dear All,

The next meeting date is set for Wednesday 24th April and will be taking place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6 Soho Square, London W1D 3PZ).

Getting to Dolby Europe Limited, Soho Square, London
Leave the Tottenham Court Road tube station by the South Oxford Street exit [Exit 1].
Turn left onto Oxford Street.
After about 50m turn left into Soho Street.
Turn right into Soho Square.
4-6 Soho Square is directly in front of you.

Our tentative agenda is as follows:

10:30 Arrivals and refreshments
11:00 Introductions and committee updates
      Jez Tucker, Group Chair & Claire Robson, Group Secretary
11:05 GPFS OpenStack Integration
      Prasenhit Sarkar, IBM Almaden Research Labs
      GPFS FPO
      Dinesh Subhraveti, IBM Almaden Research Labs
11:45 SAMBA 4.0 & CTDB 2.0
      Michael Adams, SAMBA Development Team
12:15 SAMBA & GPFS Integration
      Volker Lendecke, SAMBA Development Team
13:00 Lunch (Buffet provided)
14:00 GPFS Native RAID & LTFS
      Jim Roche, IBM
14:45 User Stories
15:45 Group discussion: Challenges, experiences and questions & Committee matters
      Led by Jez Tucker, Group Chairperson
16:00 Close

We will be starting at 11:00am and concluding at 4pm but some of the speaker timings may alter slightly. I will be posting further details on what the presentations cover over the coming week or so.

We hope you can make it for what will be a really interesting day of GPFS discussions. Please register with me if you would like to attend - registrations are based on a first come first served basis.

Best regards,

Claire Robson
GPFS User Group Secretary

Tel: 0114 257 2200
Mob: 07508 033896
Fax: 0114 257 0022
Web: www.gpfsug.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From orlando.richards at ed.ac.uk  Mon Mar 25 15:19:22 2013
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Mon, 25 Mar 2013 15:19:22 +0000
Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged
In-Reply-To: 
References: 
Message-ID: <51506AFA.5040507@ed.ac.uk>

Hi Claire,

Please add my name to the list!

See you then,
Orlando

On 25/03/13 14:38, Claire Robson wrote:
> Dear All,
>
> The next meeting date is set for *Wednesday 24th April* and will be
> taking place at the fantastic Dolby Studios in London (Dolby Europe
> Limited, 4-6 Soho Square, London W1D 3PZ).
>
> *Getting to Dolby Europe Limited, Soho Square, London*
>
> Leave the Tottenham Court Road tube station by the South Oxford Street
> exit [Exit 1].
>
> Turn left onto Oxford Street.
>
> After about 50m turn left into Soho Street.
>
> Turn right into Soho Square.
>
> 4-6 Soho Square is directly in front of you.
>
> Our tentative agenda is as follows:
>
> 10:30 Arrivals and refreshments
>
> 11:00 Introductions and committee updates
>
> Jez Tucker, Group Chair & Claire Robson, Group Secretary
>
> 11:05 GPFS OpenStack Integration
>
> Prasenhit Sarkar, IBM Almaden Research Labs
>
> GPFS FPO
>
> Dinesh Subhraveti, IBM Almaden Research Labs
>
> 11:45 SAMBA 4.0 & CTDB 2.0
>
> Michael Adams, SAMBA Development Team
>
> 12:15 SAMBA & GPFS Integration
>
> Volker Lendecke, SAMBA Development Team
>
> 13:00 Lunch (Buffet provided)
>
> 14:00 GPFS Native RAID & LTFS
>
> Jim Roche, IBM
>
> 14:45 User Stories
>
> 15:45 Group discussion: Challenges, experiences and questions &
> Committee matters
>
> Led by Jez Tucker, Group Chairperson
>
> 16:00 Close
>
> We will be starting at 11:00am and concluding at 4pm but some of the
> speaker timings may alter slightly. I will be posting further details on
> what the presentations cover over the coming week or so.
>
> We hope you can make it for what will be a really interesting day of
> GPFS discussions. *Please register with me if you would like to attend*
> - registrations are based on a first come first served basis.
>
> Best regards,
>
> *Claire Robson*
>
> GPFS User Group Secretary
>
> Tel: 0114 257 2200
>
> Mob: 07508 033896
>
> Fax: 0114 257 0022
>
> Web: www.gpfsug.org
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

-- 
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

From bdeluca at gmail.com  Mon Mar 25 16:40:48 2013
From: bdeluca at gmail.com (Ben De Luca)
Date: Mon, 25 Mar 2013 16:40:48 +0000
Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged
In-Reply-To: 
References: 
Message-ID: 

Hi Claire,

Please add my name to the list!

On Mon, Mar 25, 2013 at 2:38 PM, Claire Robson wrote:
> Dear All,
>
> The next meeting date is set for Wednesday 24th April and will be taking
> place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6
> Soho Square, London W1D 3PZ).
>
> Getting to Dolby Europe Limited, Soho Square, London
>
> Leave the Tottenham Court Road tube station by the South Oxford Street exit
> [Exit 1].
>
> Turn left onto Oxford Street.
>
> After about 50m turn left into Soho Street.
>
> Turn right into Soho Square.
>
> 4-6 Soho Square is directly in front of you.
>
> Our tentative agenda is as follows:
>
> 10:30 Arrivals and refreshments
>
> 11:00 Introductions and committee updates
>
> Jez Tucker, Group Chair & Claire Robson, Group Secretary
>
> 11:05 GPFS OpenStack Integration
>
> Prasenhit Sarkar, IBM Almaden Research Labs
>
> GPFS FPO
>
> Dinesh Subhraveti, IBM Almaden Research Labs
>
> 11:45 SAMBA 4.0 & CTDB 2.0
>
> Michael Adams, SAMBA Development Team
>
> 12:15 SAMBA & GPFS Integration
>
> Volker Lendecke, SAMBA Development Team
>
> 13:00 Lunch (Buffet provided)
>
> 14:00 GPFS Native RAID & LTFS
>
> Jim Roche, IBM
>
> 14:45 User Stories
>
> 15:45 Group discussion: Challenges, experiences and questions &
> Committee matters
>
> Led by Jez Tucker, Group Chairperson
>
> 16:00 Close
>
> We will be starting at 11:00am and concluding at 4pm but some of the speaker
> timings may alter slightly. I will be posting further details on what the
> presentations cover over the coming week or so.
>
> We hope you can make it for what will be a really interesting day of GPFS
> discussions. Please register with me if you would like to attend -
> registrations are based on a first come first served basis.
>
> Best regards,
>
> Claire Robson
>
> GPFS User Group Secretary
>
> Tel: 0114 257 2200
>
> Mob: 07508 033896
>
> Fax: 0114 257 0022
>
> Web: www.gpfsug.org
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

From robert at strubi.ox.ac.uk  Wed Mar 27 09:55:19 2013
From: robert at strubi.ox.ac.uk (Robert Esnouf)
Date: Wed, 27 Mar 2013 09:55:19 +0000 (GMT)
Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged
In-Reply-To: 
References: 
Message-ID: <201303270955.064911@mail.strubi.ox.ac.uk>

Dear Claire,

Please sign me up too. Sounds a great venue.

Regards,
Robert Esnouf

-- 
Dr. Robert Esnouf,
University Research Lecturer
and Head of Research Computing,
Wellcome Trust Centre for Human Genetics,
Old Road Campus, Roosevelt Drive,
Oxford OX3 7BN, UK

Emails: robert at strubi.ox.ac.uk   Tel: (+44) - 1865 - 287783
and robert at well.ox.ac.uk         Fax: (+44) - 1865 - 287547

---- Original message ----
>Date: Mon, 25 Mar 2013 14:38:45 +0000
>From: gpfsug-discuss-bounces at gpfsug.org (on behalf of Claire Robson )
>Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged
>To: "gpfsug-discuss at gpfsug.org"
>
>   Dear All,
>
>   The next meeting date is set for Wednesday 24th
>   April and will be taking place at the fantastic
>   Dolby Studios in London (Dolby Europe Limited, 4-6
>   Soho Square, London W1D 3PZ).
>
>   Getting to Dolby Europe Limited, Soho Square, London
>
>   Leave the Tottenham Court Road tube station by the
>   South Oxford Street exit [Exit 1].
>
>   Turn left onto Oxford Street.
>
>   After about 50m turn left into Soho Street.
>
>   Turn right into Soho Square.
>
>   4-6 Soho Square is directly in front of you.
>
>   Our tentative agenda is as follows:
>
>   10:30 Arrivals and refreshments
>
>   11:00 Introductions and committee updates
>
>   Jez Tucker, Group Chair & Claire Robson, Group
>   Secretary
>
>   11:05 GPFS OpenStack Integration
>
>   Prasenhit Sarkar, IBM Almaden Research Labs
>
>   GPFS FPO
>
>   Dinesh Subhraveti, IBM Almaden
>   Research Labs
>
>   11:45 SAMBA 4.0 & CTDB 2.0
>
>   Michael Adams, SAMBA Development Team
>
>   12:15 SAMBA & GPFS Integration
>
>   Volker Lendecke, SAMBA Development
>   Team
>
>   13:00 Lunch (Buffet provided)
>
>   14:00 GPFS Native RAID & LTFS
>
>   Jim Roche, IBM
>
>   14:45 User Stories
>
>   15:45 Group discussion: Challenges, experiences
>   and questions & Committee matters
>
>   Led by Jez Tucker, Group Chairperson
>
>   16:00 Close
>
>   We will be starting at 11:00am and concluding at 4pm
>   but some of the speaker timings may alter slightly.
>   I will be posting further details on what the
>   presentations cover over the coming week or so.
>
>   We hope you can make it for what will be a really
>   interesting day of GPFS discussions. Please register
>   with me if you would like to attend - registrations
>   are based on a first come first served basis.
>
>   Best regards,
>
>   Claire Robson
>
>   GPFS User Group Secretary
>
>   Tel: 0114 257 2200
>
>   Mob: 07508 033896
>
>   Fax: 0114 257 0022
>
>   Web: www.gpfsug.org
>
>_______________________________________________
>gpfsug-discuss mailing list
>gpfsug-discuss at gpfsug.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss
I was looking to archive 195K files so you can imagine how inefficient that is. Let me drag out the actual reference doc when I get into work. From: Luke Raimbach [mailto:luke.raimbach at oerc.ox.ac.uk] Sent: Friday, March 01, 2013 09:13 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ? I really hope this isn?t a problem as I will want to end up doing this. Does it do in-line copy when you backup TSM HSMd data using TSM? Surely it does? From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Jez Tucker Sent: 28 February 2013 17:25 To: gpfsug main discussion list Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ? Hello all, I have to ask Does anyone else do this? We have a problem and I?m told that ?it?s so rare that anyone would archive data which is HSMd?. I.E. to create an archive whereby a project is entirely or partially HSMd to LTO - online data is archived to tape - offline data is copied from HSM tape to archive tape ?inline? Surely nobody pulls back all their data to disk before re-archiving back to tape? --- Jez Tucker Senior Sysadmin Rushes GPFSUG Chairman (chair at gpfsug.org) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan at buzzard.me.uk Fri Mar 1 10:16:08 2013 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Fri, 01 Mar 2013 10:16:08 +0000 Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ? In-Reply-To: References: <39571EA9316BE44899D59C7A640C13F5306EED70@WARVWEXC1.uk.deluxe-eu.com> Message-ID: <1362132968.23736.11.camel@buzzard.phy.strath.ac.uk> On Fri, 2013-03-01 at 09:13 +0000, Luke Raimbach wrote: > I really hope this isn?t a problem as I will want to end up doing > this. I imagine the notion is that if you are using HSM what do you gain from archiving so why do it... The traditional answer would be to reduce the number of files in the file system, but with faster backup clients and now policy based reconciliation that requirement should be much reduced. > > Does it do in-line copy when you backup TSM HSMd data using TSM? > Surely it does? > That is not as useful as you might imagine. With the smart recalls that TSM 6.3 can do if you have the space you are probably better recalling them before the backup. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From Jez.Tucker at rushes.co.uk Fri Mar 1 12:43:54 2013 From: Jez.Tucker at rushes.co.uk (Jez Tucker) Date: Fri, 1 Mar 2013 12:43:54 +0000 Subject: [gpfsug-discuss] Who uses TSM to archive HSMd data (inline) ? In-Reply-To: <1362132968.23736.11.camel@buzzard.phy.strath.ac.uk> References: <39571EA9316BE44899D59C7A640C13F5306EED70@WARVWEXC1.uk.deluxe-eu.com> <1362132968.23736.11.camel@buzzard.phy.strath.ac.uk> Message-ID: <39571EA9316BE44899D59C7A640C13F5306EF0E3@WARVWEXC1.uk.deluxe-eu.com> Here's the relevant section of the manual regarding in-line archiving: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.hsmul.doc/t_arc_mig_premigs.html It looks like inline backup may be possible if you backup files after they have been migrated. However, for obviously sensible reasons, our mgmt. classes specify 'must be backed up before migration'. http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.hsmul.doc/c_bck_before.html We're using archiving and deleting of finalised projects as a means to reclaim valuable metadata space. 
Clearly if you're close to your threshold levels and you're recalling to archive again, you'll end up migrating other data. You can't worry about this too much - it should be 'auto-magical' but it will highly utilise your tape drives for some time. > -----Original Message----- > From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss- > bounces at gpfsug.org] On Behalf Of Jonathan Buzzard > Sent: 01 March 2013 10:16 > To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Who uses TSM to archive HSMd data > (inline) ? > > On Fri, 2013-03-01 at 09:13 +0000, Luke Raimbach wrote: > > I really hope this isn?t a problem as I will want to end up doing > > this. > > I imagine the notion is that if you are using HSM what do you gain > from > archiving so why do it... > > The traditional answer would be to reduce the number of files in the > file system, but with faster backup clients and now policy based > reconciliation that requirement should be much reduced. > > > > > Does it do in-line copy when you backup TSM HSMd data using > TSM? > > Surely it does? > > > > That is not as useful as you might imagine. With the smart recalls > that > TSM 6.3 can do if you have the space you are probably better > recalling > them before the backup. > > JAB. > > -- > Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk > Fife, United Kingdom. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From mark.bergman at uphs.upenn.edu Mon Mar 11 19:26:58 2013 From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu) Date: Mon, 11 Mar 2013 15:26:58 -0400 Subject: [gpfsug-discuss] GPFS architecture choice: large servers or directly-attached clients? Message-ID: <11099.1363030018@localhost> I'm in the process of planning a new HPC cluster, and I'd appreciate getting some feedback on different approaches to the GPFS architecture. The cluster will have about 25~50 nodes initially (up to 1000 CPU-cores), expected to grow to about 50~80 nodes. The jobs are primarily independent, single-threaded, with a mixture of small- to medium-sized IO, and a lot of random access. It is very common to have 100s or 1000s of jobs on different cores and nodes each accessing the same directories, often with an overlap of the same data files. For example, many jobs on different nodes will use the same executable and the same baseline data models, but will differ in individual data files to compare to the model. My goal is to ensure reasonable performance, particularly when there's a lot of contention from multiple jobs accessing the same meta-data and some of the same data. My question here is in a choice between two GPFS archicture designs (the storage array configurations, drive types, RAID types, etc. are also being examined separately). I'd really like to hear any suggestions about these (or other) configurations: [1] Large GPFS servers About 5 GPFS servers with significant RAM. Each GPFS server would be connected to storage via an 8Gb/s fibre SAN (multiple paths) to storage arrays. Each GPFS server would provide NSDs via 10Gb/s and 1Gb/s (for legacy servers) ethernet to GPFS clients (computational compute nodes). Questions: Since the GPFS clients would not be SAN attached with direct access to block storage, and many clients (~50) will access similar data (and the same directories) for many jobs, it seems like it would make sense to do a lot of caching on the GPFS servers. 
Multiple clients would benefit by reading from the same cached data on the servers. I'm thinking of sizing caches to handle 1~2GB per core in the compute nodes, divided by the number of GPFS servers. This would mean caching (maxFilesToCache, pagepool, maxStatCache) on the GPFS servers of about 200GB+ on each GPFS server. Is there any way to configure GPFS so that the GPFS servers can do a large amount of caching without requiring the same resources on the GPFS clients? Is there any way to configure the GPFS clients so that their RAM can be used primarily for computational jobs? [2] Direct-attached GPFS clients About 3~5 GPFS servers with modest resources (8CPU-cores, ~60GB RAM). Each GPFS server and client (HPC compute node) would be directly connected to the SAN (8Gb/s fibre, iSCSI over 10Gb/s ethernet, FCoE over 10Gb/s ethernet). Either 10Gb/s or 1Gb/s ethernet for communication between GPFS nodes. Since this is a relatively small cluster in terms of the total node count, the increased cost in terms of HBAs, switches, and cabling for direct-connecting all nodes to the storage shouldn't be excessive. Ideas? Suggestions? Things I'm overlooking? Thanks, Mark From erich at uw.edu Mon Mar 11 20:18:55 2013 From: erich at uw.edu (Eric Horst) Date: Mon, 11 Mar 2013 13:18:55 -0700 Subject: [gpfsug-discuss] GPFS architecture choice: large servers or directly-attached clients? In-Reply-To: <11099.1363030018@localhost> References: <11099.1363030018@localhost> Message-ID: GPFS NSD servers (the ones with the disks attached) do not do any caching. There is no benefit to configuring the NSD servers with significant amounts of memory and increasing pagepool will not provide caching. NSD servers with pagepool in the single digit GB is plenty. The NSD servers for our 4000 core cluster have 12GB RAM and pagepool of 4GB. The 500 clients have pagepool of 2GB. This is some info from the GPFS wiki regarding NSD servers: "Assuming no applications or Filesystem Manager services are running on the NSD servers, the pagepool is only used transiently by the NSD worker threads to gather data from client nodes and write the data to disk. The NSD server does not cache any of the data. Each NSD worker just needs one pagepool buffer per operation, and the buffer can be potentially as large as the largest filesystem blocksize that the disks belong to. With the default NSD configuration, there will be 3 NSD worker threads per LUN (nsdThreadsPerDisk) that the node services. So the amount of memory needed in the pagepool will be 3*#LUNS*maxBlockSize. The target amount of space in the pagepool for NSD workers is controlled by nsdBufSpace which defaults to 30%. So the pagepool should be large enough so that 30% of it has enough buffers." -Eric On Mon, Mar 11, 2013 at 12:26 PM, wrote: > [1] Large GPFS servers > About 5 GPFS servers with significant RAM. Each GPFS server would > be connected to storage via an 8Gb/s fibre SAN (multiple paths) > to storage arrays. > > Each GPFS server would provide NSDs via 10Gb/s and 1Gb/s (for legacy > servers) ethernet to GPFS clients (computational compute nodes). > > Questions: > > Since the GPFS clients would not be SAN attached > with direct access to block storage, and many > clients (~50) will access similar data (and the > same directories) for many jobs, it seems like it > would make sense to do a lot of caching on the > GPFS servers. Multiple clients would benefit by > reading from the same cached data on the servers. 
> > I'm thinking of sizing caches to handle 1~2GB > per core in the compute nodes, divided by the > number of GPFS servers. This would mean caching > (maxFilesToCache, pagepool, maxStatCache) on the > GPFS servers of about 200GB+ on each GPFS server. > > Is there any way to configure GPFS so that the > GPFS servers can do a large amount of caching > without requiring the same resources on the > GPFS clients? > > Is there any way to configure the GPFS clients > so that their RAM can be used primarily for > computational jobs? From ZEYNEP at de.ibm.com Mon Mar 25 11:12:25 2013 From: ZEYNEP at de.ibm.com (Zeynep Oeztuerk) Date: Mon, 25 Mar 2013 12:12:25 +0100 Subject: [gpfsug-discuss] Hello Message-ID: Hello together, I'm Zeynep Oeztuerk and I'm an computer science student at the University of Stuttgart/Germany. Now I'm writing my diploma thesis at IBM. My diploma thesis is about GPFS encryption and key management. It would be great, if I get more information about GPFS encryption. Thanks :-) Regards, Zeynep Oeztuerk Student Diplom Informatik Software Group E-mail: ZEYNEP at de.ibm.com Find me on: Schoenaicher Str. 220 Boeblingen, 71032 Germany IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Gesch?ftsf?hrung: Dirk Wittkopp Sitz der Gesellschaft: B?blingen Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 6398 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 453 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 2022 bytes Desc: not available URL: From AHMADYH at sa.ibm.com Mon Mar 25 12:02:59 2013 From: AHMADYH at sa.ibm.com (Ahmad Y Hussein) Date: Mon, 25 Mar 2013 16:02:59 +0400 Subject: [gpfsug-discuss] AUTO: Ahmad Y Hussein is out of the office (returning 03/30/2013) Message-ID: I am out of the office until 03/30/2013. Dear Sender; I am in a customer engagement with extremely limited email access, I will respond to your emails as soon as i can. For Urjent cases please call me on my mobile (+966542001289). Thank you for understanding. Regards; Ahmad Y Hussein Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 15, Issue 5" sent on 25/03/2013 16:00:03. This is the only notification you will receive while this person is away. From Tobias.Kuebler at sva.de Mon Mar 25 12:45:57 2013 From: Tobias.Kuebler at sva.de (Tobias.Kuebler at sva.de) Date: Mon, 25 Mar 2013 13:45:57 +0100 Subject: [gpfsug-discuss] =?iso-8859-1?q?AUTO=3A_Tobias_Kuebler_ist_au=DFe?= =?iso-8859-1?q?r_Haus_=28R=FCckkehr_am_Di=2C_04/02/2013=29?= Message-ID: Ich bin von Mo, 03/25/2013 bis Di, 04/02/2013 abwesend. Vielen Dank f?r Ihre Nachricht. Ankommende E-Mails werden w?hrend meiner Abwesenheit nicht weitergeleitet, ich versuche Sie jedoch m?glichst rasch nach meiner R?ckkehr zu beantworten. In dringenden F?llen wenden Sie sich bitte an Ihren zust?ndigen Vertriebsbeauftragten. Hinweis: Dies ist eine automatische Antwort auf Ihre Nachricht "[gpfsug-discuss] AUTO: Ahmad Y Hussein is out of the office (returning 03/30/2013)" gesendet am 25.03.2013 13:02:59. Diese ist die einzige Benachrichtigung, die Sie empfangen werden, w?hrend diese Person abwesend ist. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From crobson at ocf.co.uk Mon Mar 25 14:38:45 2013 From: crobson at ocf.co.uk (Claire Robson) Date: Mon, 25 Mar 2013 14:38:45 +0000 Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged Message-ID: Dear All, The next meeting date is set for Wednesday 24th April and will be taking place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6 Soho Square, London W1D 3PZ). Getting to Dolby Europe Limited, Soho Square, London Leave the Tottenham Court Road tube station by the South Oxford Street exit [Exit 1]. Turn left onto Oxford Street. After about 50m turn left into Soho Street. Turn right into Soho Square. 4-6 Soho Square is directly in front of you. Our tentative agenda is as follows: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 GPFS OpenStack Integration Prasenhit Sarkar, IBM Almaden Research Labs GPFS FPO Dinesh Subhraveti, IBM Almaden Research Labs 11:45 SAMBA 4.0 & CTDB 2.0 Michael Adams, SAMBA Development Team 12:15 SAMBA & GPFS Integration Volker Lendecke, SAMBA Development Team 13:00 Lunch (Buffet provided) 14:00 GPFS Native RAID & LTFS Jim Roche, IBM 14:45 User Stories 15:45 Group discussion: Challenges, experiences and questions & Committee matters Led by Jez Tucker, Group Chairperson 16:00 Close We will be starting at 11:00am and concluding at 4pm but some of the speaker timings may alter slightly. I will be posting further details on what the presentations cover over the coming week or so. We hope you can make it for what will be a really interesting day of GPFS discussions. Please register with me if you would like to attend - registrations are based on a first come first served basis. Best regards, Claire Robson GPFS User Group Secreatry Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 Web: www.gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.raimbach at oerc.ox.ac.uk Mon Mar 25 15:15:16 2013 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Mon, 25 Mar 2013 15:15:16 +0000 Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged In-Reply-To: References: Message-ID: Hi Claire, Please register me! Cheers, Luke. From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Claire Robson Sent: 25 March 2013 14:39 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged Dear All, The next meeting date is set for Wednesday 24th April and will be taking place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6 Soho Square, London W1D 3PZ). Getting to Dolby Europe Limited, Soho Square, London Leave the Tottenham Court Road tube station by the South Oxford Street exit [Exit 1]. Turn left onto Oxford Street. After about 50m turn left into Soho Street. Turn right into Soho Square. 4-6 Soho Square is directly in front of you. 
Our tentative agenda is as follows: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 GPFS OpenStack Integration Prasenhit Sarkar, IBM Almaden Research Labs GPFS FPO Dinesh Subhraveti, IBM Almaden Research Labs 11:45 SAMBA 4.0 & CTDB 2.0 Michael Adams, SAMBA Development Team 12:15 SAMBA & GPFS Integration Volker Lendecke, SAMBA Development Team 13:00 Lunch (Buffet provided) 14:00 GPFS Native RAID & LTFS Jim Roche, IBM 14:45 User Stories 15:45 Group discussion: Challenges, experiences and questions & Committee matters Led by Jez Tucker, Group Chairperson 16:00 Close We will be starting at 11:00am and concluding at 4pm but some of the speaker timings may alter slightly. I will be posting further details on what the presentations cover over the coming week or so. We hope you can make it for what will be a really interesting day of GPFS discussions. Please register with me if you would like to attend - registrations are based on a first come first served basis. Best regards, Claire Robson GPFS User Group Secreatry Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 Web: www.gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Mon Mar 25 15:19:22 2013 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Mon, 25 Mar 2013 15:19:22 +0000 Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged In-Reply-To: References: Message-ID: <51506AFA.5040507@ed.ac.uk> Hi Claire, Please add my name to the list! See you then, Orlando On 25/03/13 14:38, Claire Robson wrote: > Dear All, > > The next meeting date is set for *Wednesday 24^th April* and will be > taking place at the fantastic Dolby Studios in London (Dolby Europe > Limited, 4?6 Soho Square, London W1D 3PZ). > > *Getting to Dolby Europe Limited, Soho Square, London* > > Leave the Tottenham Court Road tube station by the South Oxford Street > exit [Exit 1]. > > Turn left onto Oxford Street. > > After about 50m turn left into Soho Street. > > Turn right into Soho Square. > > 4-6 Soho Square is directly in front of you. > > Our tentative agenda is as follows: > > 10:30 Arrivals and refreshments > > 11:00 Introductions and committee updates > > Jez Tucker, Group Chair & Claire Robson, Group Secretary > > 11:05 GPFS OpenStack Integration > > Prasenhit Sarkar, IBM Almaden Research Labs > > GPFS FPO > > Dinesh Subhraveti, IBM Almaden Research Labs > > 11:45 SAMBA 4.0 & CTDB 2.0 > > Michael Adams, SAMBA Development Team > > 12:15 SAMBA & GPFS Integration > > Volker Lendecke, SAMBA Development Team > > 13:00 Lunch (Buffet provided) > > 14:00 GPFS Native RAID & LTFS > > Jim Roche, IBM > > 14:45 User Stories > > 15:45 Group discussion: Challenges, experiences and questions & > Committee matters > > Led by Jez Tucker, Group Chairperson > > 16:00 Close > > We will be starting at 11:00am and concluding at 4pm but some of the > speaker timings may alter slightly. I will be posting further details on > what the presentations cover over the coming week or so. > > We hope you can make it for what will be a really interesting day of > GPFS discussions. *Please register with me if you would like to attend* > ? registrations are based on a first come first served basis. 
> > Best regards, > > *Claire Robson* > > GPFS User Group Secreatry > > Tel: 0114 257 2200 > > Mob: 07508 033896 > > Fax: 0114 257 0022 > > Web: _www.gpfsug.org _ > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From bdeluca at gmail.com Mon Mar 25 16:40:48 2013 From: bdeluca at gmail.com (Ben De Luca) Date: Mon, 25 Mar 2013 16:40:48 +0000 Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged In-Reply-To: References: Message-ID: Hi Claire, Please add my name to the list! On Mon, Mar 25, 2013 at 2:38 PM, Claire Robson wrote: > Dear All, > > > > The next meeting date is set for Wednesday 24th April and will be taking > place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4?6 > Soho Square, London W1D 3PZ). > > > > Getting to Dolby Europe Limited, Soho Square, London > > Leave the Tottenham Court Road tube station by the South Oxford Street exit > [Exit 1]. > > Turn left onto Oxford Street. > > After about 50m turn left into Soho Street. > > Turn right into Soho Square. > > 4-6 Soho Square is directly in front of you. > > > > Our tentative agenda is as follows: > > > > 10:30 Arrivals and refreshments > > 11:00 Introductions and committee updates > > Jez Tucker, Group Chair & Claire Robson, Group Secretary > > 11:05 GPFS OpenStack Integration > > Prasenhit Sarkar, IBM Almaden Research Labs > > GPFS FPO > > Dinesh Subhraveti, IBM Almaden Research Labs > > 11:45 SAMBA 4.0 & CTDB 2.0 > > Michael Adams, SAMBA Development Team > > 12:15 SAMBA & GPFS Integration > > Volker Lendecke, SAMBA Development Team > > 13:00 Lunch (Buffet provided) > > 14:00 GPFS Native RAID & LTFS > > Jim Roche, IBM > > 14:45 User Stories > > 15:45 Group discussion: Challenges, experiences and questions & > Committee matters > > Led by Jez Tucker, Group Chairperson > > 16:00 Close > > > > We will be starting at 11:00am and concluding at 4pm but some of the speaker > timings may alter slightly. I will be posting further details on what the > presentations cover over the coming week or so. > > > > We hope you can make it for what will be a really interesting day of GPFS > discussions. Please register with me if you would like to attend ? > registrations are based on a first come first served basis. > > > > Best regards, > > > > Claire Robson > > GPFS User Group Secreatry > > > > Tel: 0114 257 2200 > > Mob: 07508 033896 > > Fax: 0114 257 0022 > > Web: www.gpfsug.org > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > From robert at strubi.ox.ac.uk Wed Mar 27 09:55:19 2013 From: robert at strubi.ox.ac.uk (Robert Esnouf) Date: Wed, 27 Mar 2013 09:55:19 +0000 (GMT) Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged In-Reply-To: References: Message-ID: <201303270955.064911@mail.strubi.ox.ac.uk> Dear Claire, Please sign me up to. Sounds a great venue. Regards, Robert Esnouf -- Dr. 
Robert Esnouf, University Research Lecturer and Head of Research Computing, Wellcome Trust Centre for Human Genetics, Old Road Campus, Roosevelt Drive, Oxford OX3 7BN, UK Emails: robert at strubi.ox.ac.uk Tel: (+44) - 1865 - 287783 and robert at well.ox.ac.uk Fax: (+44) - 1865 - 287547 ---- Original message ---- >Date: Mon, 25 Mar 2013 14:38:45 +0000 >From: gpfsug-discuss-bounces at gpfsug.org (on behalf of Claire Robson ) >Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged >To: "gpfsug-discuss at gpfsug.org" > > Dear All, > > > > The next meeting date is set for Wednesday 24^th > April and will be taking place at the fantastic > Dolby Studios in London (Dolby Europe Limited, 4-6 > Soho Square, London W1D 3PZ). > > > > Getting to Dolby Europe Limited, Soho Square, London > > Leave the Tottenham Court Road tube station by the > South Oxford Street exit [Exit 1]. > > Turn left onto Oxford Street. > > After about 50m turn left into Soho Street. > > Turn right into Soho Square. > > 4-6 Soho Square is directly in front of you. > > > > Our tentative agenda is as follows: > > > > 10:30 Arrivals and refreshments > > 11:00 Introductions and committee updates > > Jez Tucker, Group Chair & Claire Robson, Group > Secretary > > 11:05 GPFS OpenStack Integration > > Prasenhit Sarkar, IBM Almaden Research Labs > > GPFS FPO > > Dinesh Subhraveti, IBM Almaden > Research Labs > > 11:45 SAMBA 4.0 & CTDB 2.0 > > Michael Adams, SAMBA Development Team > > 12:15 SAMBA & GPFS Integration > > Volker Lendecke, SAMBA Development > Team > > 13:00 Lunch (Buffet provided) > > 14:00 GPFS Native RAID & LTFS > > Jim Roche, IBM > > 14:45 User Stories > > 15:45 Group discussion: Challenges, experiences > and questions & Committee matters > > Led by Jez Tucker, Group Chairperson > > 16:00 Close > > > > We will be starting at 11:00am and concluding at 4pm > but some of the speaker timings may alter slightly. > I will be posting further details on what the > presentations cover over the coming week or so. > > > > We hope you can make it for what will be a really > interesting day of GPFS discussions. Please register > with me if you would like to attend - registrations > are based on a first come first served basis. > > > > Best regards, > > > > Claire Robson > > GPFS User Group Secretary > > > > Tel: 0114 257 2200 > > Mob: 07508 033896 > > Fax: 0114 257 0022 > > Web: www.gpfsug.org > > >________________ >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From mark.bergman at uphs.upenn.edu Mon Mar 11 19:26:58 2013 From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu) Date: Mon, 11 Mar 2013 15:26:58 -0400 Subject: [gpfsug-discuss] GPFS architecture choice: large servers or directly-attached clients? Message-ID: <11099.1363030018@localhost> I'm in the process of planning a new HPC cluster, and I'd appreciate getting some feedback on different approaches to the GPFS architecture. The cluster will have about 25~50 nodes initially (up to 1000 CPU-cores), expected to grow to about 50~80 nodes. The jobs are primarily independent, single-threaded, with a mixture of small- to medium-sized IO, and a lot of random access. It is very common to have 100s or 1000s of jobs on different cores and nodes each accessing the same directories, often with an overlap of the same data files. For example, many jobs on different nodes will use the same executable and the same baseline data models, but will differ in individual data files to compare to the model. My goal is to ensure reasonable performance, particularly when there's a lot of contention from multiple jobs accessing the same meta-data and some of the same data. My question here is in a choice between two GPFS architecture designs (the storage array configurations, drive types, RAID types, etc. are also being examined separately). I'd really like to hear any suggestions about these (or other) configurations:
[1] Large GPFS servers About 5 GPFS servers with significant RAM. Each GPFS server would be connected to storage via an 8Gb/s fibre SAN (multiple paths) to storage arrays. Each GPFS server would provide NSDs via 10Gb/s and 1Gb/s (for legacy servers) ethernet to GPFS clients (computational compute nodes).
Questions: Since the GPFS clients would not be SAN attached with direct access to block storage, and many clients (~50) will access similar data (and the same directories) for many jobs, it seems like it would make sense to do a lot of caching on the GPFS servers. Multiple clients would benefit by reading from the same cached data on the servers. I'm thinking of sizing caches to handle 1~2GB per core in the compute nodes, divided by the number of GPFS servers. This would mean caching (maxFilesToCache, pagepool, maxStatCache) on the GPFS servers of about 200GB+ on each GPFS server. Is there any way to configure GPFS so that the GPFS servers can do a large amount of caching without requiring the same resources on the GPFS clients? Is there any way to configure the GPFS clients so that their RAM can be used primarily for computational jobs?
[2] Direct-attached GPFS clients About 3~5 GPFS servers with modest resources (8CPU-cores, ~60GB RAM). Each GPFS server and client (HPC compute node) would be directly connected to the SAN (8Gb/s fibre, iSCSI over 10Gb/s ethernet, FCoE over 10Gb/s ethernet). Either 10Gb/s or 1Gb/s ethernet for communication between GPFS nodes. Since this is a relatively small cluster in terms of the total node count, the increased cost in terms of HBAs, switches, and cabling for direct-connecting all nodes to the storage shouldn't be excessive.
Ideas? Suggestions? Things I'm overlooking? Thanks, Mark
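As a rough illustration of the per-node split being asked about here: GPFS lets the cache-related settings be applied to one set of nodes at a time with mmchconfig -N, so the compute nodes can be given a larger pagepool and file cache while the NSD servers are left modest. This is only a sketch; the node names and sizes below are assumptions for illustration, not values taken from this thread:

  # Sketch: bigger data/metadata caches on the compute (client) nodes that re-read shared files
  mmchconfig pagepool=4G,maxFilesToCache=20000,maxStatCache=40000 -N compute001,compute002,compute003
  # Modest settings on the NSD servers, which do not cache file data on behalf of clients
  mmchconfig pagepool=2G -N nsd001,nsd002,nsd003,nsd004,nsd005
  # Most of these settings only take effect after GPFS is restarted on the affected nodes.

Whether the larger client-side cache is worth the RAM taken away from compute jobs is exactly the trade-off discussed in the reply below.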
From erich at uw.edu Mon Mar 11 20:18:55 2013 From: erich at uw.edu (Eric Horst) Date: Mon, 11 Mar 2013 13:18:55 -0700 Subject: [gpfsug-discuss] GPFS architecture choice: large servers or directly-attached clients? In-Reply-To: <11099.1363030018@localhost> References: <11099.1363030018@localhost> Message-ID: GPFS NSD servers (the ones with the disks attached) do not do any caching. There is no benefit to configuring the NSD servers with significant amounts of memory and increasing pagepool will not provide caching. NSD servers with pagepool in the single digit GB is plenty. The NSD servers for our 4000 core cluster have 12GB RAM and pagepool of 4GB. The 500 clients have pagepool of 2GB. This is some info from the GPFS wiki regarding NSD servers: "Assuming no applications or Filesystem Manager services are running on the NSD servers, the pagepool is only used transiently by the NSD worker threads to gather data from client nodes and write the data to disk. The NSD server does not cache any of the data. Each NSD worker just needs one pagepool buffer per operation, and the buffer can be potentially as large as the largest filesystem blocksize that the disks belong to. With the default NSD configuration, there will be 3 NSD worker threads per LUN (nsdThreadsPerDisk) that the node services. So the amount of memory needed in the pagepool will be 3*#LUNS*maxBlockSize. The target amount of space in the pagepool for NSD workers is controlled by nsdBufSpace which defaults to 30%. So the pagepool should be large enough so that 30% of it has enough buffers." -Eric On Mon, Mar 11, 2013 at 12:26 PM, wrote: > [1] Large GPFS servers > About 5 GPFS servers with significant RAM. Each GPFS server would > be connected to storage via an 8Gb/s fibre SAN (multiple paths) > to storage arrays. > > Each GPFS server would provide NSDs via 10Gb/s and 1Gb/s (for legacy > servers) ethernet to GPFS clients (computational compute nodes). > > Questions: > > Since the GPFS clients would not be SAN attached > with direct access to block storage, and many > clients (~50) will access similar data (and the > same directories) for many jobs, it seems like it > would make sense to do a lot of caching on the > GPFS servers. Multiple clients would benefit by > reading from the same cached data on the servers. > > I'm thinking of sizing caches to handle 1~2GB > per core in the compute nodes, divided by the > number of GPFS servers. This would mean caching > (maxFilesToCache, pagepool, maxStatCache) on the > GPFS servers of about 200GB+ on each GPFS server. > > Is there any way to configure GPFS so that the > GPFS servers can do a large amount of caching > without requiring the same resources on the > GPFS clients? > > Is there any way to configure the GPFS clients > so that their RAM can be used primarily for > computational jobs?
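To put rough numbers on the sizing rule quoted above, here is a worked example; the LUN count, blocksize and server name are assumptions for illustration only, not figures from this thread:

  # An NSD server serving 24 LUNs from filesystems with an 8 MiB maximum blocksize:
  #   buffer space needed  = 3 x 24 x 8 MiB = 576 MiB
  #   nsdBufSpace defaults to 30% of the pagepool
  #   so pagepool >= 576 MiB / 0.30, roughly 1.9 GiB
  mmchconfig pagepool=2G -N nsdserver01   # a 2G pagepool would cover the NSD worker buffers

This is consistent with the observation above that single-digit-GB pagepools are plenty on NSD servers.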
From ZEYNEP at de.ibm.com Mon Mar 25 11:12:25 2013 From: ZEYNEP at de.ibm.com (Zeynep Oeztuerk) Date: Mon, 25 Mar 2013 12:12:25 +0100 Subject: [gpfsug-discuss] Hello Message-ID: Hello together, I'm Zeynep Oeztuerk and I'm a computer science student at the University of Stuttgart/Germany. Now I'm writing my diploma thesis at IBM. My diploma thesis is about GPFS encryption and key management. It would be great if I get more information about GPFS encryption. Thanks :-) Regards, Zeynep Oeztuerk Student Diplom Informatik Software Group E-mail: ZEYNEP at de.ibm.com Find me on: Schoenaicher Str. 220 Boeblingen, 71032 Germany IBM Deutschland Research & Development GmbH Vorsitzende des Aufsichtsrats: Martina Koederitz Geschäftsführung: Dirk Wittkopp Sitz der Gesellschaft: Böblingen Registergericht: Amtsgericht Stuttgart, HRB 243294 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 6398 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/jpeg Size: 453 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/gif Size: 2022 bytes Desc: not available URL:

From AHMADYH at sa.ibm.com Mon Mar 25 12:02:59 2013 From: AHMADYH at sa.ibm.com (Ahmad Y Hussein) Date: Mon, 25 Mar 2013 16:02:59 +0400 Subject: [gpfsug-discuss] AUTO: Ahmad Y Hussein is out of the office (returning 03/30/2013) Message-ID: I am out of the office until 03/30/2013. Dear Sender; I am in a customer engagement with extremely limited email access, I will respond to your emails as soon as I can. For urgent cases please call me on my mobile (+966542001289). Thank you for understanding. Regards; Ahmad Y Hussein Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 15, Issue 5" sent on 25/03/2013 16:00:03. This is the only notification you will receive while this person is away.

From Tobias.Kuebler at sva.de Mon Mar 25 12:45:57 2013 From: Tobias.Kuebler at sva.de (Tobias.Kuebler at sva.de) Date: Mon, 25 Mar 2013 13:45:57 +0100 Subject: [gpfsug-discuss] AUTO: Tobias Kuebler is out of the office (returning Tue, 04/02/2013) Message-ID: I am out of the office from Mon, 03/25/2013 until Tue, 04/02/2013. Thank you for your message. Incoming e-mails will not be forwarded during my absence, but I will try to answer them as soon as possible after my return. In urgent cases, please contact your responsible sales representative. Note: This is an automated response to your message "[gpfsug-discuss] AUTO: Ahmad Y Hussein is out of the office (returning 03/30/2013)" sent on 25.03.2013 13:02:59. This is the only notification you will receive while this person is away. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From crobson at ocf.co.uk Mon Mar 25 14:38:45 2013 From: crobson at ocf.co.uk (Claire Robson) Date: Mon, 25 Mar 2013 14:38:45 +0000 Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged Message-ID: Dear All, The next meeting date is set for Wednesday 24th April and will be taking place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6 Soho Square, London W1D 3PZ). Getting to Dolby Europe Limited, Soho Square, London Leave the Tottenham Court Road tube station by the South Oxford Street exit [Exit 1]. Turn left onto Oxford Street. After about 50m turn left into Soho Street. Turn right into Soho Square. 4-6 Soho Square is directly in front of you. Our tentative agenda is as follows: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 GPFS OpenStack Integration Prasenhit Sarkar, IBM Almaden Research Labs GPFS FPO Dinesh Subhraveti, IBM Almaden Research Labs 11:45 SAMBA 4.0 & CTDB 2.0 Michael Adams, SAMBA Development Team 12:15 SAMBA & GPFS Integration Volker Lendecke, SAMBA Development Team 13:00 Lunch (Buffet provided) 14:00 GPFS Native RAID & LTFS Jim Roche, IBM 14:45 User Stories 15:45 Group discussion: Challenges, experiences and questions & Committee matters Led by Jez Tucker, Group Chairperson 16:00 Close We will be starting at 11:00am and concluding at 4pm but some of the speaker timings may alter slightly. I will be posting further details on what the presentations cover over the coming week or so. We hope you can make it for what will be a really interesting day of GPFS discussions. Please register with me if you would like to attend - registrations are based on a first come first served basis. Best regards, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 Web: www.gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL:

From luke.raimbach at oerc.ox.ac.uk Mon Mar 25 15:15:16 2013 From: luke.raimbach at oerc.ox.ac.uk (Luke Raimbach) Date: Mon, 25 Mar 2013 15:15:16 +0000 Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged In-Reply-To: References: Message-ID: Hi Claire, Please register me! Cheers, Luke.
From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Claire Robson Sent: 25 March 2013 14:39 To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Register now: Spring GPFS User Group arranged Dear All, The next meeting date is set for Wednesday 24th April and will be taking place at the fantastic Dolby Studios in London (Dolby Europe Limited, 4-6 Soho Square, London W1D 3PZ). Getting to Dolby Europe Limited, Soho Square, London Leave the Tottenham Court Road tube station by the South Oxford Street exit [Exit 1]. Turn left onto Oxford Street. After about 50m turn left into Soho Street. Turn right into Soho Square. 4-6 Soho Square is directly in front of you. Our tentative agenda is as follows: 10:30 Arrivals and refreshments 11:00 Introductions and committee updates Jez Tucker, Group Chair & Claire Robson, Group Secretary 11:05 GPFS OpenStack Integration Prasenhit Sarkar, IBM Almaden Research Labs GPFS FPO Dinesh Subhraveti, IBM Almaden Research Labs 11:45 SAMBA 4.0 & CTDB 2.0 Michael Adams, SAMBA Development Team 12:15 SAMBA & GPFS Integration Volker Lendecke, SAMBA Development Team 13:00 Lunch (Buffet provided) 14:00 GPFS Native RAID & LTFS Jim Roche, IBM 14:45 User Stories 15:45 Group discussion: Challenges, experiences and questions & Committee matters Led by Jez Tucker, Group Chairperson 16:00 Close We will be starting at 11:00am and concluding at 4pm but some of the speaker timings may alter slightly. I will be posting further details on what the presentations cover over the coming week or so. We hope you can make it for what will be a really interesting day of GPFS discussions. Please register with me if you would like to attend - registrations are based on a first come first served basis. Best regards, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 Web: www.gpfsug.org -------------- next part -------------- An HTML attachment was scrubbed... URL: