From ckrafft at de.ibm.com Wed Sep 2 09:24:37 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Wed, 2 Sep 2015 10:24:37 +0200 Subject: [gpfsug-discuss] Any experiences with GSS/ESS and DB2 Message-ID: <201509020825.t828Ppho005861@d06av09.portsmouth.uk.ibm.com> Hi there, out of curiosity :-): Is anyone running a solution with DB2 and GPFS GNR-based GSS/ESS? Mit freundlichen Grüßen / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0E481995.gif Type: image/gif Size: 1851 bytes Desc: not available URL: From viccornell at gmail.com Wed Sep 2 13:36:08 2015 From: viccornell at gmail.com (Vic Cornell) Date: Wed, 2 Sep 2015 13:36:08 +0100 Subject: [gpfsug-discuss] 4K drives and Multi-cluster Message-ID: <14DC8ADF-F1C2-40AB-B7B6-78791917B0C7@gmail.com> Hi All, Here's one I can't find in the documentation - I understand that you need GPFS 4.1 to support 4K disk sectors. Can I mount a 4.1 filesystem with 4K drives onto a GPFS 3.5 cluster via multi-cluster? Any experience or insight would be useful. Regards, Vic From zgiles at gmail.com Thu Sep 3 15:59:44 2015 From: zgiles at gmail.com (Zachary Giles) Date: Thu, 3 Sep 2015 10:59:44 -0400 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Message-ID: Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guidelines for running different DBs on different file systems, but there seems to be a lack of best practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, caching, etc.. You can tune those out with sync, hard, etc, but still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a local file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotchas, and "don't do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"?
Thanks, -Zach -- Zach Giles zgiles at gmail.com From ewahl at osc.edu Thu Sep 3 18:59:42 2015 From: ewahl at osc.edu (Wahl, Edward) Date: Thu, 3 Sep 2015 17:59:42 +0000 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955ABC785@CIO-KRC-D1MBX02.osuad.osu.edu> Can't say I've tried this in so many years it's not relevant. But the IBM/TSM storage folks have a number of interesting reports over at their blog posted using Tivoli with DB2. I recall it says right out there that the DB was on the ESS/GSS but it's been a few months. Search for "storageneers tivoli" and/or "scale out tsm gss". I think there were two or three of them last year. Or poke Sven. ;) Something tells me he'll know. Ed Wahl OSC ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] Sent: Thursday, September 03, 2015 10:59 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Thu Sep 3 19:52:35 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 3 Sep 2015 18:52:35 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Just to follow up, I've been sent an efix today which hopefully will resolve this (and also the other LROC bugs), so I'm guessing this fix will make it out generally in 4.1.1-02 Will be testing the fix out over the next few days. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 20:24 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, This appears to be a mistake, as using clients for the System.log pool should not require a server license (should be similar to lroc).... thanks for opening the PMR... 
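[Editorial aside on the HAWC thread above: a minimal sketch of the client-side setup being discussed, assuming the efix Simon mentions allows system.log NSDs to be created on client (non-server-licensed) nodes. The device and node names mirror the stanza quoted later in this thread; the file system name gpfs1 and the 64K threshold are assumptions, not values confirmed here.]

# Sketch only: HAWC recovery-log NSDs on client SSDs.
cat > /tmp/hawc_stanzas <<'EOF'
%nsd: device=sdb2
  nsd=cl0901u17_hawc_sdb2
  servers=cl0901u17
  usage=metadataOnly
  pool=system.log
  failureGroup=90117
%nsd: device=sdb2
  nsd=cl0903u29_hawc_sdb2
  servers=cl0903u29
  usage=metadataOnly
  pool=system.log
  failureGroup=90329
EOF
# One failure group per client so the replicated recovery log lands on
# different nodes (the question Simon raises below).
mmcrnsd -F /tmp/hawc_stanzas              # create the NSDs from the stanza file
mmadddisk gpfs1 -F /tmp/hawc_stanzas      # add them to the file system's system.log pool
mmchfs gpfs1 --write-cache-threshold 64K  # enable HAWC for small synchronous writes (threshold value assumed)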
Dean Hildebrand IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/27/2015 12:42:47 AM---Hi Dean, Thanks. I wa]"Simon Thompson (Research Computing - IT Services)" ---08/27/2015 12:42:47 AM---Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "va From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/27/2015 12:42 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},${backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques]"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" > wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. 
> >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [attachment "graycol.gif" deleted by Dean Hildebrand/Almaden/IBM] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From kraemerf at de.ibm.com Thu Sep 3 20:16:31 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Thu, 3 Sep 2015 21:16:31 +0200 Subject: [gpfsug-discuss] GPFS for DBs & more In-Reply-To: References: Message-ID: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> Have a look here: > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? IBM Spectrum Scale 4.1 is certified with Oracle Database 12cR1 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10853 IBM Spectrum Scale tuning guidelines for deploying SAS http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106348 IBM System Storage Architecture and Configuration Guide for SAP HANA TDI (tailored datacenter integration) V2.2 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102347 Microsoft SharePoint data management solution using IBM Spectrum Scale and AvePoint DocAve http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102580 Consolidated hardware for video solutions http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102576 On Premise File Sync and Share Solution Using IBM Spectrum Scale for Object Storage and ownCloud http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102581 For DB2 pureScale GPFS is a *must* http://www.ibm.com/software/data/db2/linux-unix-windows/purescale/ -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany **** Hello, a new whitepaper was published that describes the configuration of Spectrum Protect (and Spectrum Archive implicitly) in a Spectrum Scale Active File Management (AFM) environment. Beside an introduction to AFM three major user scenarios (disaster recovery, branch office, system migration) are explained. 
For each of the scenarios, the combination of AFM functions with Spectrum Protect backup functions and Spectrum Protect and Spectrum Archive HSM functions is described in detail, including challenges and recommendations for the specified setup. The paper was written to help technical sales teams and system architects/administrators understand the mechanics behind the combination of these Spectrum Storage products. Please share this information. Find the paper here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Configuring%20IBM%20Spectrum%20Scale%20Active%20File%20Management Greetings, Dominic. ______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From DCR at ch.ibm.com Thu Sep 3 21:08:33 2015 From: DCR at ch.ibm.com (David Cremese) Date: Thu, 3 Sep 2015 22:08:33 +0200 Subject: [gpfsug-discuss] DBs over GPFS Message-ID: An HTML attachment was scrubbed... URL: From chair at gpfsug.org Thu Sep 3 21:42:30 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 03 Sep 2015 21:42:30 +0100 Subject: [gpfsug-discuss] GPFS UG Meeting at Computing Insight UK Message-ID: Hi, Our next UK-based group meeting will be part of the agenda for Computing Insight UK, which will be held on 8th/9th December at the Ricoh Arena, Coventry. The meeting will be a short (2 hour) breakout session at CIUK. More details on CIUK are at: http://www.stfc.ac.uk/news-events-and-publications/events/computing-insight-uk-2015/ Please note that you must be registered to attend CIUK in order to attend the GPFS UG meeting; during the registration process you will get the option to register for the workshops, which include the GPFS UG. I'm also looking for someone to give a short user presentation on your use of GPFS in your environment, so if this is something you are interested in, please let me know. We're hoping to have a few devs available at the group, and will be looking at some of the 4.2 features; we'll also be including the opportunity to discuss GPFS with any comments or areas for development you'd like to look at. Finally, we're already planning the May 2016 event, and I hope to be able to send out a save-the-date in the next few weeks. Simon (GPFS UG Chair) From dhildeb at us.ibm.com Thu Sep 3 22:32:20 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 3 Sep 2015 14:32:20 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: Hi Zachary, VMware via NFS to GPFS is a great option, as several new features have been added to GPFS to support VM workloads over the last couple of years, including fine-grained dirty bits (FGDB) for tracking updates at 4KB granularity and HAWC for buffering small synchronous writes in fast storage. Dean Hildebrand IBM Almaden Research Center From: Zachary Giles To: gpfsug main discussion list Date: 09/03/2015 08:00 AM Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS.
Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:43:09 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:43:09 +0200 Subject: [gpfsug-discuss] GPFS for DBs & more In-Reply-To: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> References: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> Message-ID: <201509040644.t846iCFa007119@d06av05.portsmouth.uk.ibm.com> ... and what about classic DB2? Mit freundlichen Grüßen / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: Frank Kraemer/Germany/IBM at IBMDE To: gpfsug-discuss at gpfsug.org Date: 03.09.2015 21:19 Subject: [gpfsug-discuss] GPFS for DBs & more Sent by: gpfsug-discuss-bounces at gpfsug.org Have a look here: > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it?
IBM Spectrum Scale 4.1 is certified with Oracle Database 12cR1 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10853 IBM Spectrum Scale tuning guidelines for deploying SAS http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106348 IBM System Storage Architecture and Configuration Guide for SAP HANA TDI (tailored datacenter integration) V2.2 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102347 Microsoft SharePoint data management solution using IBM Spectrum Scale and AvePoint DocAve http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102580 Consolidated hardware for video solutions http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102576 On Premise File Sync and Share Solution Using IBM Spectrum Scale for Object Storage and ownCloud http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102581 For DB2 pureScale GPFS is a *must* http://www.ibm.com/software/data/db2/linux-unix-windows/purescale/ -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany **** Hello, a new whitepaper was published that describes the configuration of Spectrum Protect (and Spectrum Archive implicitly) in a Spectrum Scale Active File Management (AFM) environment. Beside an introduction to AFM three major user scenarios (disaster recovery, branch office, system migration) are explained. For each of the scenarios the combination of AFM functions with Spectrum Protect backup functions and Spectrum Protect and Spectrum Archive HSM functions are described in detail including challenges and recommendations for the specified setup. The paper was written to help technical sales teams and system architects/administrators to understand the mechanic behind the combination of these Spectrum Storage products. Please share this information. Find the paper here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Configuring%20IBM%20Spectrum%20Scale%20Active%20File%20Management Greetings, Dominic. ______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06252012.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:46:34 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:46:34 +0200 Subject: [gpfsug-discuss] DBs over GPFS In-Reply-To: References: Message-ID: <201509040647.t846lR86009262@d06av01.portsmouth.uk.ibm.com> ... forgive me and forget my previous email - started reading sequentially and did not see David's email early enough 8-) So "regular" DB2 seems also covered - although the information is a bit sparse ... 
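[Editorial aside for this DB-on-GPFS thread: two checks that come up repeatedly, whatever the engine, are the GPFS block size versus the database page size, and whether the engine does direct I/O rather than relying on the OS page cache. The sketch below is generic and is not taken from the papers linked above; the file system name gpfs1 and the my.cnf values are illustrative assumptions.]

# Sketch only -- generic sanity checks, not vendor guidance.
mmlsfs gpfs1 -B        # GPFS data block size; compare with the DB page size (e.g. 8K/16K)

# For MySQL/InnoDB, bypass the OS page cache and flush through to the file
# system on every commit (standard MySQL options, nothing GPFS-specific):
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_flush_method = O_DIRECT        # direct I/O for InnoDB data files
innodb_flush_log_at_trx_commit = 1    # fsync the redo log at each commit
sync_binlog = 1                       # fsync the binary log at each commit
EOF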
Mit freundlichen Grüßen / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: David Cremese To: gpfsug-discuss at gpfsug.org Date: 03.09.2015 22:08 Subject: Re: [gpfsug-discuss] DBs over GPFS Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Zach, There's a paper posted on IBM DeveloperWorks, describing best practices for running DB2 over GPFS: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/DB2%20databases%20and%20the%20IBM%20General%20Parallel%20File%20System All the best, David Cremese dcr at ch.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0D589989.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:53:51 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:53:51 +0200 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: <201509040554.t845skZ4014649@d06av10.portsmouth.uk.ibm.com> Hi Zach, VMware is covered via Pass-through Raw Device Mapping (RDM) with physical compatibility mode if you want direct disk access inside the VM. Otherwise it works as a "normal" GPFS client. Go to: "Table 28. VMware support matrix" @ http://www-01.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html Have a client using this in production with x86 RHEL running on top of VMware ... it works well. They use RDM since the VMs do have disk access directly. Mit freundlichen Grüßen / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: Zachary Giles To: gpfsug main discussion list Date: 03.09.2015 16:59 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc.
Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0D771085.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Fri Sep 4 07:57:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 4 Sep 2015 06:57:04 +0000 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: When you say VMware, do you mean to the hypervisor or vms? Running vms can of course be gpfs clients. Protocol servers use nfs ganesha server, but I've only looked at smb support. Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] Sent: 03 September 2015 15:59 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From zgiles at gmail.com Fri Sep 4 18:00:26 2015 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 4 Sep 2015 13:00:26 -0400 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: Frank, Edward, David, Christoph, The Oracle 12c certified with GPFS 4.1 looks like they only mention AIX with GPFS.. though it could apply to Linux too I believe. There's no tuning info in it... I do see the DB2 and SAS whitepapers. 
I've read those over and over trying to tune for Oracle and other things. They're OK, but I'm not really interested in DB2 (though I'm sure lots of IBM people are.. ), and they also don't seem to say or "show" much about tuning over different block sizes, direct writes vs not, read patterns, data integrity, etc. They're still valuable though. I _did_ find moderate info on Oracle, though it is fairly scattered. I'm doing a bunch of testing with Oracle right now and it's .. finicky .. with GPFS. Yes it works, and there are comments on data integrity here and there about Direct IO and Async IO bypassing cache.. Oracle has latches, etc. So, seems like you could assume the data is good on GPFS. There's very little in terms of tuning. So far, it seems unhappy with large block sizes, even though they are recommended, but they're calling "512KB" large, so it's all from more than several years ago. Places to look: IBM GPFS 4.1 docs.. there's a section; Oracle 11g "Integration" docs.. probably still applies for 12, though it's removed; random blogs. What I can't find, and am most interested in, is info on MySQL and PostgreSQL. I see little blogs here and there saying it will work, and _some_ engines support DirectIO.. but I'm wondering if MySQL will Do The Right Thing (tm) and ensure writes are written and data is good over this "remote" file system. I worry that if it goes offline or we have waiters that it won't make MySQL very happy and there will be data loss. There are already enough stories about MySQL data loss online. I'm wondering if GPFS "feels" like a local disk enough to MySQL that it won't fail in the way NFS does for MySQL. I'm guessing the answer is that with some engines like InnoDB and direct IO turned on, it'll be fine and for others it will be whatever you get.. but that's not very reassuring. PostgreSQL seems to have even less info. Dean, I'll look into those. Thanks. Are those all in 4.1 and in the new protocol servers? Does HAWC work when the client is over NFS? I assume the server would take care of it.. Haven't read much yet. Christoph, Looks like that RDM is only for ESX (the older Linux-based hypervisor), not ESXi. AFAIK there's no GPFS client that can run on ESXi yet, so the only options are remote mounting GPFS via NFS on the hypervisor to store the VMs. Or, inside the VM, but that's not what I want. Simon, I'm talking about on the hypervisor. Looking for a way to use GPFS to store VMs instead of standing up a SAN, but want it to be safe and consistent. Thus my worry about backing VM disks by NFS backed by GPFS... -Zach On Fri, Sep 4, 2015 at 2:57 AM, Simon Thompson (Research Computing - IT Services) wrote: > When you say VMware, do you mean to the hypervisor or vms? Running vms can of course be gpfs clients. > > Protocol servers use nfs ganesha server, but I've only looked at smb support. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] > Sent: 03 September 2015 15:59 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? > > On that same note... > How about VMware? > Obviously I guess really the only way would be via NFS export.. which > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > better? Maybe also a "don't do it"?
> > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Zach Giles zgiles at gmail.com From jenocram at gmail.com Fri Sep 4 18:03:06 2015 From: jenocram at gmail.com (Jeno Cram) Date: Fri, 4 Sep 2015 10:03:06 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: A previous company that I worked for used DB2 Purescale which is basically HA DB2 with GPFS for the filesystem with crm for cluster management. On Sep 3, 2015 10:59 AM, "Zachary Giles" wrote: > Hello Everyone, > > Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized > system in production, hundreds of nodes, lots of tuning etc. Not a > newb. :) > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? > > I realize there are tuning guides and guide-lines for running > different DBs on different file systems, but there seems to be a lack > of best-practices for doing so on GPFS. > > For example, usually you don't run DBs backed by NFS due to locking, > cacheing etc.. You can tune those out with sync, hard, etc, but, still > the best practice is to use a local file system. > As GPFS is hybrid, and used for many apps that do have hard > requirements such as Cinder block storage, science apps, etc, and has > proper byte-level locking.. it seems like it would be semi-equal to a > lock file system. > > Does anyone have any opinions, experiences, or recommendations for > running DBs backed by GPFS? > Also will accept horror stories, gotcha's, and "dont do it's". :) > > On that same note... > How about VMware? > Obviously I guess really the only way would be via NFS export.. which > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > better? Maybe also a "don't do it"? > > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Fri Sep 4 19:38:39 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 4 Sep 2015 18:38:39 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC'15? 
I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here's what I've heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you'll note the known conflicts on that date. What I'm asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I'll setup a poll for that, so I can quickly tally answers. I value your feedback, but don't want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG -email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I'll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC'15. However the SC'15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 
4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers" session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Fri Sep 4 19:44:12 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Fri, 4 Sep 2015 18:44:12 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> Bob sent a draft just a few minutes ago. Should be out yet today I think. -Kristy On Sep 4, 2015, at 2:38 PM, Bryan Banister > wrote: Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC?15? I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. 
Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. 
Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. 
This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Fri Sep 4 19:47:12 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Fri, 4 Sep 2015 18:47:12 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> Message-ID: PS - Applying pressure not an issue. Thanks for helping push this forward. -Kristy On Sep 4, 2015, at 2:44 PM, Kallback-Rose, Kristy A > wrote: Bob sent a draft just a few minutes ago. Should be out yet today I think. -Kristy On Sep 4, 2015, at 2:38 PM, Bryan Banister > wrote: Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC?15? I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. 
I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). 
Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhildeb at us.ibm.com Fri Sep 4 19:41:35 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 4 Sep 2015 11:41:35 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: > Dean, > I'll look in to those. Thanks. Are those all in 4.1 and in the new > protocol servers? 
Does HAWC work when the client is over NFS? I assume > the server would take care of it.. Haven't read much yet. FGDB was in 3.4 I believe, and HAWC is in 4.1.1 ptf1....but there are other items that helped performance for these environments, so using the latest is always best :) Yes, hawc is independent of nfs...its all in gpfs. > > Christoph, > Looks like that RDM is only for ESX (the older linux-based > hypervisor), not ESXi. AFAIK there's no GPFS client that can run on > ESXi yet, so the only options are remote mounting GPFS via NFS on the > Hypervisor to store the VMs. > Or, inside the VM, but that's not what I want. > > Simon, > I'm talking about on the hypervisor. Looking for a way to use GPFS to > store VMs instead of standing up a SAN, but want it to be safe and > consistent. Thus my worry about backing VM disks by NFS backed by > GPFS... >50% of VMWare deployments use NFS... and NFS+GPFS obeys nfs semantics, so together your VMs are just as safe as with a SAN. Dean > > -Zach > > > On Fri, Sep 4, 2015 at 2:57 AM, Simon Thompson (Research Computing - > IT Services) wrote: > > When you say VMware, do you mean to the hypervisor or vms? Running > vms can of course be gpfs clients. > > > > Protocol servers use nfs ganesha server, but I've only looked at > smb support. > > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss- > bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] > > Sent: 03 September 2015 15:59 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? > > > > On that same note... > > How about VMware? > > Obviously I guess really the only way would be via NFS export.. which > > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > > better? Maybe also a "don't do it"? > > > > Thanks, > > -Zach > > > > > > -- > > Zach Giles > > zgiles at gmail.com > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Fri Sep 4 19:57:47 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 4 Sep 2015 18:57:47 +0000 Subject: [gpfsug-discuss] POLL: Preferred Day/Time for the GPFS UG Meeting at SC15 Message-ID: If you are going to Supercomputing 2015 in Austin (November), let us know when you?d like to have a user group meeting. There are no ideal times ? please complete this survey with you preferred time and we?ll post the results. https://www.surveymonkey.com/r/6MKCHML Bob Oesterlin - gpfsug?ug USA co-principal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Robert.Oesterlin at nuance.com Tue Sep 8 12:07:11 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 8 Sep 2015 11:07:11 +0000 Subject: [gpfsug-discuss] Reminder - POLL: Preferred Day/Time for the GPFS UG Meeting at SC15 Message-ID: ** Poll closes at 6 PM US EST on Wed 9/9 ** If you are going to Supercomputing 2015 in Austin (November), let us know when you'd like to have a user group meeting. There are no ideal times - please complete this survey with your preferred time and we'll post the results. https://www.surveymonkey.com/r/6MKCHML Bob Oesterlin - gpfsug-ug USA co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Sep 9 20:52:04 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 9 Sep 2015 19:52:04 +0000 Subject: [gpfsug-discuss] Survey says! - SC15 User Group Meeting - Survey results Message-ID: <05C90986-CB0F-41C0-882F-4127F39F412F@nuance.com> The survey has been closed - here are the results. There was a bit more spread in the results than I expected, but Sunday was the winner, with Sunday afternoon being the most preferred time. NOTE: This does not represent a *definitive* "we'll have it on Sun Afternoon". I fully expect this will be the case, but it will need to be confirmed by IBM and the other GPFSUG Chairs. For travel planning purposes, assume Sunday afternoon. I/We will post if anything changes.

Answer Choices                                                                               Responses
Sunday November 15th: Morning                                                                13.64%   3
Sunday November 15th: Afternoon                                                              40.91%   9
Monday November 16th: Morning (will overlap with PDSW)                                       22.73%   5
Monday November 16th: Afternoon (will overlap with PDSW and/or DDN User Group 2:30-6 PM)      9.09%   2
Friday November 20th: Afternoon (starting later so attendees can make it to the panels)      13.64%   3
Total                                                                                                22

Bob Oesterlin GPFS-UG Co-Principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at gpfsug.org Thu Sep 10 11:33:29 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 10 Sep 2015 11:33:29 +0100 Subject: [gpfsug-discuss] Save the date - 2016 UK GPFS User Group! Message-ID: Save the date - 17th/18th May 2016! Following feedback from the previous groups, we're going for a two day GPFS UG event next year. We've now confirmed the booking at IBM South Bank for the two days, so please pencil 17th and 18th May 2016 into your diaries for the GPFS UG. It's a little early for us to think about the agenda in too much detail, though the first day is likely to follow the previous format with a mixture of IBM and user talks, and the second day we're looking at breaking into groups to focus on specific areas or features. If there are topics you'd like to see on the agenda, then please do let us know! And don't forget, the next mini-meet will be at Computing Insight UK in December; you must be registered for CIUK to attend the user group. And finally, we're also working on the dates for the next meet the devs event, which should be taking place in Edinburgh (thanks to Orlando for offering a venue). Once we've got the dates organised we'll open registration for the session. 
Simon UG Chair From josh.cullum at cfms.org.uk Thu Sep 10 12:34:16 2015 From: josh.cullum at cfms.org.uk (Josh Cullum) Date: Thu, 10 Sep 2015 11:34:16 +0000 Subject: [gpfsug-discuss] Setting Quotas Message-ID: Hi All, We're looking into 4.1.1 (we finally got it set up) so that we can start to plan the integration and update of our existing GPFS systems, and we are looking to do something in line with the following. Our current setup looks something like this (running GPFS 3.4):

mmlsfileset prgpfs
Filesets in file system 'prgpfs':
Name        Status    Path
root        Linked    /gpfs
services    Linked    /gpfs/services
cfms        Linked    /gpfs/cfms

where the fileset has a quota and nothing in that fileset can grow above it. The filesets contain a home directory, a working directory and an apps directory, all controlled by a particular Unix (AD) group. In our new GPFS cluster, we would like to be able to create a fileset for each home directory within each organisation directory, for example so the structure looks like the below:

Filesets in file system 'prgpfs':
Name        Status    Path
root        Linked    /gpfs
services    Linked    /gpfs/services
cfms        Linked    /gpfs/cfms
apps        Linked    /gpfs/apps
cfms-home   Linked    /gpfs/cfms/home

where the organisation fileset has a 10TB fileset quota for the working directory and an apps directory, and the organisation-home fileset then has a quota of 500GB per user. I think this is all possible within 4.1.1 from reading the documentation, where a user's quota only applies to a particular fileset (using the mmdefedquota -u prgpfs:organisation-home command) and so does not affect the /gpfs/organisation working dir and apps dir. Can anyone confirm this? We would then like to use default quotas so that every organisation-home fileset has the 500GB per user rule applied. Does anyone know if it is possible to wildcard the GPFS quota rule so it applies to all filesets with "-home" in the name? Kind Regards, Josh Cullum -- Josh Cullum // IT Systems Administrator e: josh.cullum at cfms.org.uk // t: 0117 906 1106 // w: www.cfms.org.uk CFMS Services Ltd // Bristol & Bath Science Park // Dirac Crescent // Emersons Green // Bristol // BS16 7FR -------------- next part -------------- An HTML attachment was scrubbed... URL: From usa-principal at gpfsug.org Thu Sep 10 21:38:12 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Thu, 10 Sep 2015 16:38:12 -0400 Subject: [gpfsug-discuss] Reminder: Inaugural US "Meet the Developers" Message-ID: <3d0f058f40ae93d5d06eb3ea23f5e21e@webmail.gpfsug.org> Hello Everyone, Here is a reminder about our inaugural US "Meet the Developers" session. Details are below, and please send an e-mail to Janet Ellsworth (janetell at us.ibm.com) by next Friday September 18th if you wish to attend. Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface ***Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!)*** Open Q&A with the development team We are happy to have heard from many of you so far who would like to attend. 
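As a rough sketch of the layout Josh describes above, assuming the GPFS 4.1.1 quota command syntax and that per-fileset quotas can be enabled on the file system; the fileset names, paths and limits are taken from his mail, everything else is illustrative and untested:

  # Per-fileset user quotas need to be enabled on the file system first
  mmchfs prgpfs -Q yes --perfileset-quota

  # One independent fileset per organisation, plus a nested home fileset
  mmcrfileset prgpfs cfms --inode-space new
  mmlinkfileset prgpfs cfms -J /gpfs/cfms
  mmcrfileset prgpfs cfms-home --inode-space new
  mmlinkfileset prgpfs cfms-home -J /gpfs/cfms/home

  # 10TB block quota on the organisation fileset (working and apps directories)
  mmsetquota prgpfs:cfms --block 10T:10T

  # Default 500GB per-user quota that applies only inside the home fileset
  # (the same default can be edited interactively with mmdefedquota -u prgpfs:cfms-home)
  mmsetquota prgpfs:cfms-home --default user --block 500G:500G

On the wildcard question, one workaround is simply a small shell loop over the fileset names reported by mmlsfileset prgpfs, applying the same default to every fileset whose name ends in "-home".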
We still have room however, so please get in touch by the 9/18 date if you would like to attend. ***We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too.*** As you have likely seen, we are also working on the agenda and timing for day-long GPFS US UG event in Austin during November aligned with SC15 and there will be more details on that coming soon. From kraemerf at de.ibm.com Fri Sep 11 07:15:25 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Fri, 11 Sep 2015 08:15:25 +0200 Subject: [gpfsug-discuss] FYI: WP102585 - Veritas NetBackup with IBM Spectrum Scale Elastic Storage Server (ESS) Message-ID: <201509110616.t8B6GODX013654@d06av06.portsmouth.uk.ibm.com> Veritas NetBackup with IBM Spectrum Scale Elastic Storage Server (ESS) This white paper is a brief overview of the functional and performance proof of concept using Veritas NetBackup with IBM Elastic Storage Server (ESS) GL4 enabled by IBM Spectrum Scale formerly known as General Parallel File System (GPFS). The intended audience of this paper is technical but the paper also contains high-level non-technical content. This paper describes and documents some of the NetBackup disk target configuration steps as part of the functional testing performed. The paper also reports and analyzes the PoC performance results. http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102585 Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Mon Sep 14 15:46:04 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 14 Sep 2015 14:46:04 +0000 Subject: [gpfsug-discuss] FLASH: Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788) (2015.09.12) In-Reply-To: <657532721.4156481442059132157.JavaMail.webinst@w30021> References: <657532721.4156481442059132157.JavaMail.webinst@w30021> Message-ID: I received this over the weekend ? for those of you not signed up for electronic distribution. It looks to be treated as ?moderate? - but I have no idea how worried I should be about it. Does anyone have more information? Bob Oesterlin Sr Storage Engineer, Nuance Communications From: IBM My Notifications Date: Saturday, September 12, 2015 at 6:58 AM IBM Spectrum Scale ? Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788) An OpenSSL denial of service vulnerability disclosed by the OpenSSL Project affects GSKit. IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 use GSKit and addressed the applicable CVE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Tue Sep 15 18:16:00 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 15 Sep 2015 17:16:00 +0000 Subject: [gpfsug-discuss] GPFS UG Meeting at SC15 - Preliminary agenda Message-ID: <38EE0F09-7A2F-4031-B201-BA0CEE715A77@nuance.com> Here is the preliminary agenda for the user group meeting at SC15. We realize that the timing isn?t perfect for everyone. Hopefully all of you in attendance at SC15 can participate in some or all of these sessions. I?m sure we will all find time to get together outside of this to discuss topics. Thanks to IBM for helping to organize this. 
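On Bob's OpenSSL bulletin question further up, a quick way to see what a node is actually running before comparing it against the fix list; this assumes the standard Linux RPM packaging of GPFS 4.1, where GSKit ships as its own package:

  # GPFS packages installed on this node, including the bundled GSKit (gpfs.gskit)
  rpm -qa 'gpfs*'

  # Version of the running daemon
  mmdiag --version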
We are soliciting user presentations! (20 mins each) Talk about how you are using GPFS, challenges, etc. Please drop a note to: with submission or suggestions for topics. If you have comments on the agenda, let us know ASAP as time is short!

Proposed Agenda - Sunday 11/15 - Location TBD
1:00 - 1:15  Introductions, Logistics, GPFS-UG Overview
1:15 - 2:15  File, Object, HDF & a GUI!: the latest on IBM Spectrum Scale
2:15 - 2:30  Lightning Demo of Spectrum Control with invitation for free trial & more discussions during reception
2:30 - 2:45  Break
2:45 - 3:45  User Presentation(s): User #1 Nuance (20 mins), User #2 (20 mins), User #3 (20 mins)
3:45 - 4:00  ESS Performance testing at the new open Ennovar lab at Wichita State University
4:00 - 4:15  Break
4:15 - 5:30  Panel Discussion: "My favorite tool for managing Spectrum Scale is..." Panel: Nuance, DESY, +? +?
5:30 - ?     Reception

Bob Oesterlin Sr Storage Engineer, Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Mon Sep 21 09:23:41 2015 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Mon, 21 Sep 2015 08:23:41 +0000 Subject: [gpfsug-discuss] Automatic Inode Expansion for Independent Filesets Message-ID: Hi All, Do independent filesets automatically expand the number of preallocated inodes as needed, up to the maximum, as the root fileset does? Cheers, Luke. Luke Raimbach Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From bevans at pixitmedia.com Mon Sep 21 12:06:45 2015 From: bevans at pixitmedia.com (Barry Evans) Date: Mon, 21 Sep 2015 12:06:45 +0100 Subject: [gpfsug-discuss] Automatic Inode Expansion for Independent Filesets In-Reply-To: References: Message-ID: <55FFE4C5.3040305@pixitmedia.com> Hi Luke, It does indeed expand automatically. It's a good idea to have quotas and callbacks in place for this, or something that does semi-regular polling of the allocated inodes, as it has a tendency to sneak up on you and run out of space! Cheers, Barry On 21/09/2015 09:23, Luke Raimbach wrote: > Hi All, > > Do independent filesets automatically expand the number of preallocated inodes as needed up to the maximum as the root fileset does? > > Cheers, > Luke. > > Luke Raimbach > Senior HPC Data and Storage Systems Engineer, > The Francis Crick Institute, > Gibbs Building, > 215 Euston Road, > London NW1 2BE. > > E: luke.raimbach at crick.ac.uk > W: www.crick.ac.uk > > The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media Mobile: +44 (0)7950 666 248 http://www.pixitmedia.com 
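For anyone wanting to act on Barry's advice, a rough sketch of raising the inode ceiling and polling allocation, assuming GPFS 4.1 syntax; the file system and fileset names come from the quota thread above and the inode counts are purely illustrative:

  # Raise the maximum (and preallocated) inode counts for an independent fileset
  mmchfileset prgpfs cfms-home --inode-limit 2000000:500000

  # -L shows MaxInodes/AllocInodes per independent fileset
  mmlsfileset prgpfs -L

  # -i additionally reports the inodes actually in use (slower to run)
  mmlsfileset prgpfs cfms-home -i

A cron job that parses that output and alerts when AllocInodes approaches MaxInodes is one way to stop the automatic expansion sneaking up on you.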
-------------- next part -------------- An HTML attachment was scrubbed... URL: From secretary at gpfsug.org Mon Sep 28 13:49:22 2015 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Mon, 28 Sep 2015 13:49:22 +0100 Subject: [gpfsug-discuss] Meet the Devs comes to Edinburgh! Message-ID: <402938fb8bcfc79f8feee2c7d34e16b7@webmail.gpfsug.org> Hi all, We've arranged the next 'Meet the Devs' event to take place in Edinburgh on Friday 23rd October from 10:30/11am until 3/3:30pm. Location: Room 2009a, Information Services, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD Google maps link: https://goo.gl/maps/Ta7DQ Agenda: - GUI - 4.2 Updates/show and tell - Open conversation on any areas of interest attendees may have Lunch and refreshments will be provided. Please email me (secretary at gpfsug.org) if you would like to attend, including any particular topics of interest you would like to discuss. We hope to see you there! Best wishes, -- Claire O'Toole GPFS User Group Secretary +44 (0)7508 033896 www.gpfsug.org From Robert.Oesterlin at nuance.com Wed Sep 30 18:05:37 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 30 Sep 2015 17:05:37 +0000 Subject: [gpfsug-discuss] User Group Meeting at SC15 - Call for user presentations Message-ID: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com> We're still looking for a few more user presentations for the SC15 user group meeting. They don't need to be lengthy or complicated - just tell us what you are doing with Spectrum Scale (GPFS). If you could drop a note to me: - Indicating you are coming to SC15 and if you are attending the user group meeting - If you are willing to do a short presentation on your use of Spectrum Scale (GPFS) My email is robert.oesterlin @ nuance.com Bob Oesterlin GPFS-UG USA Co-principal Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Wed Sep 30 19:56:38 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Wed, 30 Sep 2015 20:56:38 +0200 Subject: [gpfsug-discuss] User Group Meeting at SC15 - Call for user presentations In-Reply-To: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com> References: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com> Message-ID: Hi Robert, I will attend the meeting and (if I read the agenda correctly ;-) will also give a presentation about our GPFS setup for data taking and analysis in photon science @DESY. best regards, Martin > On 30 Sep, 2015, at 19:05, Oesterlin, Robert wrote: > We're still looking for a few more user presentations for the SC15 user group meeting. They don't need to be lengthy or complicated - just tell us what you are doing with Spectrum Scale (GPFS). 
> > If you could drop me a note to me: > > - Indicating you are coming to SC15 and if you are attending the user group meeting > - If you are willing to do a short presentation on your use of Spectrum Scale (GPFS) > > My email is robert.oesterlin @ nuance.com > > Bob Oesterlin > GPFS-UG USA Co-principal > Nuance Communications > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From ckrafft at de.ibm.com Wed Sep 2 09:24:37 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Wed, 2 Sep 2015 10:24:37 +0200 Subject: [gpfsug-discuss] Any experiences with GSS/ESS and DB2 Message-ID: <201509020825.t828Ppho005861@d06av09.portsmouth.uk.ibm.com> Hi there, out of curiosity :-): Is anyone running a solution with DB2 and GPFS GNR based GSS/ESS? Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0E481995.gif Type: image/gif Size: 1851 bytes Desc: not available URL: From viccornell at gmail.com Wed Sep 2 13:36:08 2015 From: viccornell at gmail.com (Vic Cornell) Date: Wed, 2 Sep 2015 13:36:08 +0100 Subject: [gpfsug-discuss] $k drives and Multi-cluster Message-ID: <14DC8ADF-F1C2-40AB-B7B6-78791917B0C7@gmail.com> Hi All, Here?s one I can?t find in the documentation - I understand that you need GPFS 4.1 to support 4K disk sectors. Can I mount a 4.1 filesystem with 4k drives onto a GPFS 3.5 filesystem via multi cluster? Any experience or insight would be useful. Regards, Vic From zgiles at gmail.com Thu Sep 3 15:59:44 2015 From: zgiles at gmail.com (Zachary Giles) Date: Thu, 3 Sep 2015 10:59:44 -0400 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Message-ID: Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. 
Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com From ewahl at osc.edu Thu Sep 3 18:59:42 2015 From: ewahl at osc.edu (Wahl, Edward) Date: Thu, 3 Sep 2015 17:59:42 +0000 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: <9DA9EC7A281AC7428A9618AFDC49049955ABC785@CIO-KRC-D1MBX02.osuad.osu.edu> Can't say I've tried this in so many years it's not relevant. But the IBM/TSM storage folks have a number of interesting reports over at their blog posted using Tivoli with DB2. I recall it says right out there that the DB was on the ESS/GSS but it's been a few months. Search for "storageneers tivoli" and/or "scale out tsm gss". I think there were two or three of them last year. Or poke Sven. ;) Something tells me he'll know. Ed Wahl OSC ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] Sent: Thursday, September 03, 2015 10:59 AM To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From S.J.Thompson at bham.ac.uk Thu Sep 3 19:52:35 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 3 Sep 2015 18:52:35 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Just to follow up, I've been sent an efix today which hopefully will resolve this (and also the other LROC bugs), so I'm guessing this fix will make it out generally in 4.1.1-02 Will be testing the fix out over the next few days. 
Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 20:24 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, This appears to be a mistake, as using clients for the System.log pool should not require a server license (should be similar to lroc).... thanks for opening the PMR... Dean Hildebrand IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/27/2015 12:42:47 AM---Hi Dean, Thanks. I wa]"Simon Thompson (Research Computing - IT Services)" ---08/27/2015 12:42:47 AM---Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "va From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/27/2015 12:42 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},${backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system. Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques]"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? 
Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" > wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [attachment "graycol.gif" deleted by Dean Hildebrand/Almaden/IBM] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From kraemerf at de.ibm.com Thu Sep 3 20:16:31 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Thu, 3 Sep 2015 21:16:31 +0200 Subject: [gpfsug-discuss] GPFS for DBs & more In-Reply-To: References: Message-ID: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> Have a look here: > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? 
IBM Spectrum Scale 4.1 is certified with Oracle Database 12cR1 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10853 IBM Spectrum Scale tuning guidelines for deploying SAS http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106348 IBM System Storage Architecture and Configuration Guide for SAP HANA TDI (tailored datacenter integration) V2.2 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102347 Microsoft SharePoint data management solution using IBM Spectrum Scale and AvePoint DocAve http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102580 Consolidated hardware for video solutions http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102576 On Premise File Sync and Share Solution Using IBM Spectrum Scale for Object Storage and ownCloud http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102581 For DB2 pureScale GPFS is a *must* http://www.ibm.com/software/data/db2/linux-unix-windows/purescale/ -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany **** Hello, a new whitepaper was published that describes the configuration of Spectrum Protect (and Spectrum Archive implicitly) in a Spectrum Scale Active File Management (AFM) environment. Beside an introduction to AFM three major user scenarios (disaster recovery, branch office, system migration) are explained. For each of the scenarios the combination of AFM functions with Spectrum Protect backup functions and Spectrum Protect and Spectrum Archive HSM functions are described in detail including challenges and recommendations for the specified setup. The paper was written to help technical sales teams and system architects/administrators to understand the mechanic behind the combination of these Spectrum Storage products. Please share this information. Find the paper here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Configuring%20IBM%20Spectrum%20Scale%20Active%20File%20Management Greetings, Dominic. ______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From DCR at ch.ibm.com Thu Sep 3 21:08:33 2015 From: DCR at ch.ibm.com (David Cremese) Date: Thu, 3 Sep 2015 22:08:33 +0200 Subject: [gpfsug-discuss] DBs over GPFS Message-ID: An HTML attachment was scrubbed... URL: From chair at gpfsug.org Thu Sep 3 21:42:30 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 03 Sep 2015 21:42:30 +0100 Subject: [gpfsug-discuss] GPFS UG Meeting at Computing Insight UK Message-ID: Hi, Our next UK based group meeting will be part of the agenda for Computing Insight UK which will be held on 8th/9th December at the Ricoh Arena, Coventry. The meeting will be a short (2 hour) breakout session at CIUK. More details on CIUK are at: http://www.stfc.ac.uk/news-events-and-publications/events/computing-insight -uk-2015/ Please note that you must be registered to attend CIUK to attend the GPFS UG meeting, during the registration process you will get the option to register for the workshops which includes the GPFS UG. 
I'm also looking for someone to give a short user presentation on your use of GPFS in your environment, so if this is something you are interested in, please let me know. We're hoping to have a few devs available at the group, and will be looking at some of the 4.2 features, we'll also be including the opportunity to discuss GPFS with any comments or areas for development you'd like to look at. Finally, we're already planning the May 2016 event, and I hope to be able to send our a save the date in the next few weeks. Simon (GPFS UG Chair) From dhildeb at us.ibm.com Thu Sep 3 22:32:20 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 3 Sep 2015 14:32:20 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: Hi Zachary, VMWare via NFS to GPFS is a great option as several new features have been added to GPFS to support VM workloads over the last couple years, including file-grained dirty bits (FGDB) for tracking updates at 4KB granularity and HAWC for buffering small synchronous writes in fast storage. Dean Hildebrand IBM Almaden Research Center From: Zachary Giles To: gpfsug main discussion list Date: 09/03/2015 08:00 AM Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:43:09 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:43:09 +0200 Subject: [gpfsug-discuss] GPFS for DBs & more In-Reply-To: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> References: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> Message-ID: <201509040644.t846iCFa007119@d06av05.portsmouth.uk.ibm.com> ... and what about classic DB2? 
Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: Frank Kraemer/Germany/IBM at IBMDE To: gpfsug-discuss at gpfsug.org Date: 03.09.2015 21:19 Subject: [gpfsug-discuss] GPFS for DBs & more Sent by: gpfsug-discuss-bounces at gpfsug.org Have a look here: > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? IBM Spectrum Scale 4.1 is certified with Oracle Database 12cR1 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10853 IBM Spectrum Scale tuning guidelines for deploying SAS http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106348 IBM System Storage Architecture and Configuration Guide for SAP HANA TDI (tailored datacenter integration) V2.2 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102347 Microsoft SharePoint data management solution using IBM Spectrum Scale and AvePoint DocAve http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102580 Consolidated hardware for video solutions http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102576 On Premise File Sync and Share Solution Using IBM Spectrum Scale for Object Storage and ownCloud http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102581 For DB2 pureScale GPFS is a *must* http://www.ibm.com/software/data/db2/linux-unix-windows/purescale/ -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany **** Hello, a new whitepaper was published that describes the configuration of Spectrum Protect (and Spectrum Archive implicitly) in a Spectrum Scale Active File Management (AFM) environment. Beside an introduction to AFM three major user scenarios (disaster recovery, branch office, system migration) are explained. For each of the scenarios the combination of AFM functions with Spectrum Protect backup functions and Spectrum Protect and Spectrum Archive HSM functions are described in detail including challenges and recommendations for the specified setup. The paper was written to help technical sales teams and system architects/administrators to understand the mechanic behind the combination of these Spectrum Storage products. Please share this information. Find the paper here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Configuring%20IBM%20Spectrum%20Scale%20Active%20File%20Management Greetings, Dominic. ______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06252012.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:46:34 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:46:34 +0200 Subject: [gpfsug-discuss] DBs over GPFS In-Reply-To: References: Message-ID: <201509040647.t846lR86009262@d06av01.portsmouth.uk.ibm.com> ... forgive me and forget my previous email - started reading sequentially and did not see David's email early enough 8-) So "regular" DB2 seems also covered - although the information is a bit sparse ... Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: David Cremese To: gpfsug-discuss at gpfsug.org Date: 03.09.2015 22:08 Subject: Re: [gpfsug-discuss] DBs over GPFS Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Zach, There's a paper posted on IBM DeveloperWorks, describing best practices for running DB2 over GPFS: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/DB2%20databases%20and%20the%20IBM%20General%20Parallel%20File%20System All the best, David Cremese dcr at ch.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0D589989.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:53:51 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:53:51 +0200 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: <201509040554.t845skZ4014649@d06av10.portsmouth.uk.ibm.com> Hi Zach, VMware is covered via Pass-through Raw Device Mapping (RDM) with physical compatibility mode if you want direct disk access inside the VM. Otherwise works as a "normal" GPFS client Go to: "Table 28. VMware support matrix" @ http://www-01.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html Have a client using this in production with x86 RHEL running on top of VMware ... it works well. They use RDM since the VMs do have disk access directly. 
Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: Zachary Giles To: gpfsug main discussion list Date: 03.09.2015 16:59 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0D771085.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Fri Sep 4 07:57:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 4 Sep 2015 06:57:04 +0000 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: When you say VMware, do you mean to the hypervisor or vms? Running vms can of course be gpfs clients. Protocol servers use nfs ganesha server, but I've only looked at smb support. 
Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] Sent: 03 September 2015 15:59 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From zgiles at gmail.com Fri Sep 4 18:00:26 2015 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 4 Sep 2015 13:00:26 -0400 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: Frank, Edward, David, Christoph, The Oracle 12c certified with GPFS 4.1 looks like they only mention AIX with GPFS.. though it could apply to Linux too I believe. There's no tuning info in it... I do see the DB2 and SAS whitepapers. I've read those over and over trying to tune for Oracle and other things. They're OK, but I'm not really interested in DB2 (Though I'm sure lots of IBM people are.. ), and they also don't seem to say or "Show" much of tuning over different block sizes, direct writes vs not, read pattersn, data integrity, etc. They're still valuable though. I _did_ find moderate info on Oracle, Though it is fairly scattered. I'm doing a bunch of testing with Oracle right now and it's .. finicky .. with GPFS. Yes it works, and there are comments on data integrity here and there about Direct IO and ASync IO bypassing cache.. Oracle has latches, etc. So, seems like you could assume the data is good on GPFS. There's very little in terms of tuning. So far, it seems unhappy with large block sizes, even though it is recommended, but they're calling "512KB" large, so it's all from more than several years ago. Places to look: IBM GPFS 4.1 docs.. there's a section; Oracle 11g "Integration" docs.. probably still applies for 12, though it's removed; Random Blogs What I can't find, and am most interested in, is, info on MySQL and PostgreSQL. I see little blogs here and there saying it will work, and _some_ engines support DirectIO.. but I'm wondering if MySQL will Do The Right Thing (tm) and ensure writes are written and data is good over this "remote" file system. I worry that if it goes offline or we have waiters that it won't make MySQL very happy and there will be data loss. There's already enough stories about MySQL data loss online. I'm wondering if GPFS "feels" like a local disk enough to MySQL that it won't fail in the way NFS does for MySQL. I'm guessing the answer is that with some engines like InnoDB and direct io turned on, it'll be fine and for others it will be whatever you get.. but that's not very reassuring. PostgreSQL seems to have even less info. Dean, I'll look in to those. Thanks. Are those all in 4.1 and in the new protocol servers? Does HAWC work when the client is over NFS? I assume the server would take care of it.. Haven't read much yet. Christoph, Looks like that RDM is only for ESX (the older linux-based hypervisor), not ESXi. AFAIK there's no GPFS client that can run on ESXi yet, so the only options are remote mounting GPFS via NFS on the Hypervisor to store the VMs. 
Or, inside the VM, but that's not what I want. Simon, I'm talking about on the hypervisor. Looking for a way to use GPFS to store VMs instead of standing up a SAN, but want it to be safe and consistent. Thus my worry about backing VM disks by NFS backed by GPFS... -Zach On Fri, Sep 4, 2015 at 2:57 AM, Simon Thompson (Research Computing - IT Services) wrote: > When you say VMware, do you mean to the hypervisor or vms? Running vms can of course be gpfs clients. > > Protocol servers use nfs ganesha server, but I've only looked at smb support. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] > Sent: 03 September 2015 15:59 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? > > On that same note... > How about VMware? > Obviously I guess really the only way would be via NFS export.. which > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > better? Maybe also a "don't do it"? > > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Zach Giles zgiles at gmail.com From jenocram at gmail.com Fri Sep 4 18:03:06 2015 From: jenocram at gmail.com (Jeno Cram) Date: Fri, 4 Sep 2015 10:03:06 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: A previous company that I worked for used DB2 Purescale which is basically HA DB2 with GPFS for the filesystem with crm for cluster management. On Sep 3, 2015 10:59 AM, "Zachary Giles" wrote: > Hello Everyone, > > Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized > system in production, hundreds of nodes, lots of tuning etc. Not a > newb. :) > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? > > I realize there are tuning guides and guide-lines for running > different DBs on different file systems, but there seems to be a lack > of best-practices for doing so on GPFS. > > For example, usually you don't run DBs backed by NFS due to locking, > cacheing etc.. You can tune those out with sync, hard, etc, but, still > the best practice is to use a local file system. > As GPFS is hybrid, and used for many apps that do have hard > requirements such as Cinder block storage, science apps, etc, and has > proper byte-level locking.. it seems like it would be semi-equal to a > lock file system. > > Does anyone have any opinions, experiences, or recommendations for > running DBs backed by GPFS? > Also will accept horror stories, gotcha's, and "dont do it's". :) > > On that same note... > How about VMware? > Obviously I guess really the only way would be via NFS export.. which > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > better? Maybe also a "don't do it"? 
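On the MySQL question raised above: InnoDB can be told to bypass the OS page cache and to fsync on every commit, which is the usual starting point when the data directory sits on a shared file system. The fragment below is only a sketch - the GPFS path is hypothetical and none of these values have been validated against GPFS, so benchmark and crash-test before trusting them:

  # append a conservative durability/IO section to my.cnf (illustrative only)
  cat >> /etc/my.cnf <<'EOF'
  [mysqld]
  datadir                        = /gpfs/mysql/data   # hypothetical GPFS path
  innodb_flush_method            = O_DIRECT           # bypass the page cache for data files
  innodb_flush_log_at_trx_commit = 1                  # fsync the redo log on every commit
  innodb_doublewrite             = 1                  # keep torn-page protection enabled
  sync_binlog                    = 1                  # fsync the binlog with each commit
  EOF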
> > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Fri Sep 4 19:38:39 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 4 Sep 2015 18:38:39 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC'15? I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here's what I've heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you'll note the known conflicts on that date. What I'm asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I'll setup a poll for that, so I can quickly tally answers. I value your feedback, but don't want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG -email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I'll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC'15. However the SC'15 schedule is already packed! 
http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers" session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. 
Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Fri Sep 4 19:44:12 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Fri, 4 Sep 2015 18:44:12 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> Bob sent a draft just a few minutes ago. Should be out yet today I think. -Kristy On Sep 4, 2015, at 2:38 PM, Bryan Banister > wrote: Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC?15? 
I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 
4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Fri Sep 4 19:47:12 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Fri, 4 Sep 2015 18:47:12 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> Message-ID: PS - Applying pressure not an issue. Thanks for helping push this forward. -Kristy On Sep 4, 2015, at 2:44 PM, Kallback-Rose, Kristy A > wrote: Bob sent a draft just a few minutes ago. Should be out yet today I think. -Kristy On Sep 4, 2015, at 2:38 PM, Bryan Banister > wrote: Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC?15? I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. 
Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. 
I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. 
The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From dhildeb at us.ibm.com Fri Sep 4 19:41:35 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 4 Sep 2015 11:41:35 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: > Dean, > I'll look in to those. Thanks. Are those all in 4.1 and in the new > protocol servers? Does HAWC work when the client is over NFS? I assume > the server would take care of it.. Haven't read much yet. FGDB was in 3.4 I believe, and HAWC is in 4.1.1 ptf1....but there are other items that helped performance for these environments, so using the latest is always best :) Yes, hawc is independent of nfs...its all in gpfs. > > Christoph, > Looks like that RDM is only for ESX (the older linux-based > hypervisor), not ESXi. AFAIK there's no GPFS client that can run on > ESXi yet, so the only options are remote mounting GPFS via NFS on the > Hypervisor to store the VMs. > Or, inside the VM, but that's not what I want. > > Simon, > I'm talking about on the hypervisor. Looking for a way to use GPFS to > store VMs instead of standing up a SAN, but want it to be safe and > consistent. Thus my worry about backing VM disks by NFS backed by > GPFS... >50% of VMWare deployments use NFS... and NFS+GPFS obeys nfs semantics, so together your VMs are just as safe as with a SAN. Dean > > -Zach > > > On Fri, Sep 4, 2015 at 2:57 AM, Simon Thompson (Research Computing - > IT Services) wrote: > > When you say VMware, do you mean to the hypervisor or vms? Running > vms can of course be gpfs clients. > > > > Protocol servers use nfs ganesha server, but I've only looked at > smb support. > > > > Simon > > ________________________________________ > > From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss- > bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] > > Sent: 03 September 2015 15:59 > > To: gpfsug main discussion list > > Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? > > > > On that same note... > > How about VMware? > > Obviously I guess really the only way would be via NFS export.. which > > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > > better? Maybe also a "don't do it"? 
> > > > Thanks, > > -Zach > > > > > > -- > > Zach Giles > > zgiles at gmail.com > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > _______________________________________________ > > gpfsug-discuss mailing list > > gpfsug-discuss at gpfsug.org > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Fri Sep 4 19:57:47 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Fri, 4 Sep 2015 18:57:47 +0000 Subject: [gpfsug-discuss] POLL: Preferred Day/Time for the GPFS UG Meeting at SC15 Message-ID: If you are going to Supercomputing 2015 in Austin (November), let us know when you?d like to have a user group meeting. There are no ideal times ? please complete this survey with you preferred time and we?ll post the results. https://www.surveymonkey.com/r/6MKCHML Bob Oesterlin - gpfsug?ug USA co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Tue Sep 8 12:07:11 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 8 Sep 2015 11:07:11 +0000 Subject: [gpfsug-discuss] Reminder - POLL: Preferred Day/Time for the GPFS UG Meeting at SC15 Message-ID: ** Poll closes at 6 PM US EST on Wed 9/9 ** If you are going to Supercomputing 2015 in Austin (November), let us know when you?d like to have a user group meeting. There are no ideal times ? please complete this survey with you preferred time and we?ll post the results. https://www.surveymonkey.com/r/6MKCHML Bob Oesterlin - gpfsug?ug USA co-principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Wed Sep 9 20:52:04 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 9 Sep 2015 19:52:04 +0000 Subject: [gpfsug-discuss] Survey says! - SC15 User Group Meeting - Survey results Message-ID: <05C90986-CB0F-41C0-882F-4127F39F412F@nuance.com> Survey has been closed - Here are the survey results. There was a bit more spread in the results than I expected, but Sunday was the winner, with Sunday afternoon being the most preferred time. NOTE: This does not represent a *definitive* "we?ll have it on Sun Afternoon?. I fully expect this will be the case, but it will need to be confirmed by IBM and the other GPFSUG Chairs. For travel planning purposes, assume Sunday afternoon. I/We will post if anything changes. Answer Choices? Responses? ? Sunday November 15th: Morning 13.64% 3 ? Sunday November 15th: Afternoon 40.91% 9 ? Monday November 16th Morning (Will Overlap with PDSW) 22.73% 5 ? Monday November 16th Afternoon (Will Overlap with PDSW and/or DDN User Group 2:30-6 PM) 9.09% 2 ? Friday November 20th Afternoon (starting later so attendees can make it to the panels) 13.64% 3 Total 22 Bob Oesterlin GPFS-UG Co-Principal -------------- next part -------------- An HTML attachment was scrubbed... URL: From chair at gpfsug.org Thu Sep 10 11:33:29 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 10 Sep 2015 11:33:29 +0100 Subject: [gpfsug-discuss] Save the date - 2016 UK GPFS User Group! 
Message-ID: Save the date - 17th/18th May 2016! Following feedback from the previous groups, we're going for a two day GPFS UG event next year. We've now confirmed the booking at IBM South Bank for the two days, so please pencil 17th and 18th May 2016 into your diaries for the GPFS UG. Its a little early for us to think about the agenda in too much detail, though the first day is likely to follow the previous format with a mixture of IBM and User talks and the second day we're looking at breaking into groups to focus on specific areas or features. If there are topcis you'd like to see on the agenda, then please do let us know! And don't forget, the next mini-meet will be at Computing Insight UK in December, you must be registered for CIUK to attend the user group. And finally, we're also working the dates for the next meet the devs event which should be taking place in Edinburgh (thanks to Orlando for offering a venue). Once we've got the dates organised we'll open registration for the session. Simon UG Chair From josh.cullum at cfms.org.uk Thu Sep 10 12:34:16 2015 From: josh.cullum at cfms.org.uk (Josh Cullum) Date: Thu, 10 Sep 2015 11:34:16 +0000 Subject: [gpfsug-discuss] Setting Quota's Message-ID: Hi All, We're looking into 4.1.1 (finally got it setup) so that we can start to plan our integration and update of our existing GPFS systems, and we are looking to do something in line with the following. Our current setup looks something like this: (Running GPFS 3.4) mmlsfileset prgpfs Filesets in file system 'prgpfs': Name Status Path root Linked /gpfs services Linked /gpfs/services cfms Linked /gpfs/cfms where the fileset has a quota and nothing in that fileset can grow above it. The filesets contain a home directory, a working directory and an apps directory, all controlled by a particular unix(AD) group. In our new GPFS cluster, we would like to be able to create a fileset for each home directory within each organisation directory, for example the structure looks like the below: Filesets in file system 'prgpfs': Name Status Path root Linked /gpfs services Linked /gpfs/services cfms Linked /gpfs/cfms apps Linked /gpfs/apps cfms-home Linked /gpfs/cfms/home where the organisation fileset has a 10TB fileset quota, for working directory and an apps directory. The organisation-home has then got a quota of 500GB per user. I think this is all possible within 4.1.1 from reading the documentation, where a user's quota only applies to a particular fileset (using the mmdefedquota -u prgpfs:organisation-home command) and so does not affect the /gpfs/organisation working dir and apps dir. Can anyone confirm this? We would like to then use default quota's so that every organisation-home fileset has the 500GB per user rule applied. Does anyone know if it possible to wildcard the gpfs quota rule so it applies to all filesets with "-home" in the name? Kind Regards, Josh Cullum -- *Josh Cullum* // IT Systems Administrator *e: josh.cullum at cfms.org.uk * // *t: *0117 906 1106 // *w: *www.cfms.org.uk // [image: Linkedin grey icon scaled] CFMS Services Ltd // Bristol & Bath Science Park // Dirac Crescent // Emersons Green // Bristol // BS16 7FR -------------- next part -------------- An HTML attachment was scrubbed... 
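A rough sketch of the layout Josh describes, using 4.1.1-style commands: the fileset names are taken from his example, but the exact flag spellings (in particular for per-fileset user quotas) are assumptions and should be confirmed against the mmchfs, mmcrfileset, mmsetquota and mmdefedquota man pages:

  # per-fileset user/group quotas must be enabled on the file system first
  mmchfs prgpfs --perfileset-quota
  # independent fileset for the organisation's home directories
  mmcrfileset prgpfs cfms-home --inode-space new
  mmlinkfileset prgpfs cfms-home -J /gpfs/cfms/home
  # 10TB block quota on the organisation fileset itself
  mmsetquota prgpfs:cfms --block 10T:10T
  # default 500GB per-user quota inside the home fileset
  # (mmdefedquota is interactive, and it does not appear to accept wildcards,
  #  so covering every "-home" fileset would mean scripting one call per fileset;
  #  default quotas also need activating, see mmdefquotaon)
  mmdefedquota -u prgpfs:cfms-home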
URL: From usa-principal at gpfsug.org Thu Sep 10 21:38:12 2015 From: usa-principal at gpfsug.org (usa-principal-gpfsug.org) Date: Thu, 10 Sep 2015 16:38:12 -0400 Subject: [gpfsug-discuss] Reminder: Inaugural US "Meet the Developers" Message-ID: <3d0f058f40ae93d5d06eb3ea23f5e21e@webmail.gpfsug.org> Hello Everyone, Here is a reminder about our inaugural US "Meet the Developers" session. Details are below, and please send an e-mail to Janet Ellsworth (janetell at us.ibm.com) by next Friday September 18th if you wish to attend. Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface ***Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this !)*** Open Q&A with the development team We are happy to have heard from many of you so far who would like to attend. We still have room however, so please get in touch by the 9/18 date if you would like to attend. ***We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too.*** As you have likely seen, we are also working on the agenda and timing for day-long GPFS US UG event in Austin during November aligned with SC15 and there will be more details on that coming soon. From kraemerf at de.ibm.com Fri Sep 11 07:15:25 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Fri, 11 Sep 2015 08:15:25 +0200 Subject: [gpfsug-discuss] FYI: WP102585 - Veritas NetBackup with IBM Spectrum Scale Elastic Storage Server (ESS) Message-ID: <201509110616.t8B6GODX013654@d06av06.portsmouth.uk.ibm.com> Veritas NetBackup with IBM Spectrum Scale Elastic Storage Server (ESS) This white paper is a brief overview of the functional and performance proof of concept using Veritas NetBackup with IBM Elastic Storage Server (ESS) GL4 enabled by IBM Spectrum Scale formerly known as General Parallel File System (GPFS). The intended audience of this paper is technical but the paper also contains high-level non-technical content. This paper describes and documents some of the NetBackup disk target configuration steps as part of the functional testing performed. The paper also reports and analyzes the PoC performance results. http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102585 Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Mon Sep 14 15:46:04 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Mon, 14 Sep 2015 14:46:04 +0000 Subject: [gpfsug-discuss] FLASH: Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788) (2015.09.12) In-Reply-To: <657532721.4156481442059132157.JavaMail.webinst@w30021> References: <657532721.4156481442059132157.JavaMail.webinst@w30021> Message-ID: I received this over the weekend ? 
for those of you not signed up for electronic distribution. It looks to be treated as ?moderate? - but I have no idea how worried I should be about it. Does anyone have more information? Bob Oesterlin Sr Storage Engineer, Nuance Communications From: IBM My Notifications Date: Saturday, September 12, 2015 at 6:58 AM IBM Spectrum Scale ? Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788) An OpenSSL denial of service vulnerability disclosed by the OpenSSL Project affects GSKit. IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 use GSKit and addressed the applicable CVE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Oesterlin at nuance.com Tue Sep 15 18:16:00 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Tue, 15 Sep 2015 17:16:00 +0000 Subject: [gpfsug-discuss] GPFS UG Meeting at SC15 - Preliminary agenda Message-ID: <38EE0F09-7A2F-4031-B201-BA0CEE715A77@nuance.com> Here is the preliminary agenda for the user group meeting at SC15. We realize that the timing isn?t perfect for everyone. Hopefully all of you in attendance at SC15 can participate in some or all of these sessions. I?m sure we will all find time to get together outside of this to discuss topics. Thanks to IBM for helping to organize this. We are soliciting user presentations! (20 mins each) Talk about how you are using GPFS, challenges, etc. Please drop a note to: with submission or suggestions for topics. If you have comments on the agenda, let us know ASAP as time is short! ? Proposed Agenda ? Sunday 11/15 - Location TBD 1:00 - 1:15 Introductions, Logistics, GPFS-UG Overview 1:15 - 2:15 File, Object, HDF & a GUI! : the latest on IBM Spectrum Scale 2:15 - 2:30 Lightning Demo of Spectrum Control with invitation for free trial & more discussions during reception 2:30 - 2:45 Break 2:45 - 3:45 User Presentation(s): User #1 Nuance ? (20 mins) User #2 (20 mins) User #3 (20 mins) 3:45 ? 4:00 ESS Performance testing at the new open Ennovar lab at Wichita State University 4:00 ? 4:15 Break 4:15 - 5:30 Panel Discussion: "My favorite tool for managing Spectrum Scale is..." Panel: Nuance, DESY, +? +? 5:30 ? ? Reception Bob Oesterlin Sr Storage Engineer, Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From Luke.Raimbach at crick.ac.uk Mon Sep 21 09:23:41 2015 From: Luke.Raimbach at crick.ac.uk (Luke Raimbach) Date: Mon, 21 Sep 2015 08:23:41 +0000 Subject: [gpfsug-discuss] Automatic Inode Expansion for Independent Filesets Message-ID: Hi All, Do independent filesets automatically expand the number of preallocated inodes as needed up to the maximum as the root fileset does? Cheers, Luke. Luke Raimbach? Senior HPC Data and Storage Systems Engineer, The Francis Crick Institute, Gibbs Building, 215 Euston Road, London NW1 2BE. E: luke.raimbach at crick.ac.uk W: www.crick.ac.uk The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. From bevans at pixitmedia.com Mon Sep 21 12:06:45 2015 From: bevans at pixitmedia.com (Barry Evans) Date: Mon, 21 Sep 2015 12:06:45 +0100 Subject: [gpfsug-discuss] Automatic Inode Expansion for Independent Filesets In-Reply-To: References: Message-ID: <55FFE4C5.3040305@pixitmedia.com> Hi Luke, It does indeed expand automatically. 
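For watching how close an independent fileset is getting to its configured maximum, something along these lines can be run periodically - the file system and fileset names are borrowed from elsewhere in this digest purely as an illustration, and the exact output layout varies by release:

  # show per-fileset inode limits and how many inodes are allocated so far
  mmlsfileset prgpfs -L
  # raise the ceiling (and optionally preallocate) for one independent fileset
  mmchfileset prgpfs cfms-home --inode-limit 2000000:500000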
It's a good idea to get quotas and callbacks in place for this or something to parse the semi regular polling of the allocated inodes as it has a tendency to sneak up on you and run out of space! Cheers, Barry On 21/09/2015 09:23, Luke Raimbach wrote: > Hi All, > > Do independent filesets automatically expand the number of preallocated inodes as needed up to the maximum as the root fileset does? > > Cheers, > Luke. > > Luke Raimbach? > Senior HPC Data and Storage Systems Engineer, > The Francis Crick Institute, > Gibbs Building, > 215 Euston Road, > London NW1 2BE. > > E: luke.raimbach at crick.ac.uk > W: www.crick.ac.uk > > The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE. > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Barry Evans Technical Director & Co-Founder Pixit Media Mobile: +44 (0)7950 666 248 http://www.pixitmedia.com -- This email is confidential in that it is intended for the exclusive attention of the addressee(s) indicated. If you are not the intended recipient, this email should not be read or disclosed to any other person. Please notify the sender immediately and delete this email from your computer system. Any opinions expressed are not necessarily those of the company from which this email was sent and, whilst to the best of our knowledge no viruses or defects exist, no responsibility can be accepted for any loss or damage arising from its receipt or subsequent use of this email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From secretary at gpfsug.org Mon Sep 28 13:49:22 2015 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Mon, 28 Sep 2015 13:49:22 +0100 Subject: [gpfsug-discuss] Meet the Devs comes to Edinburgh! Message-ID: <402938fb8bcfc79f8feee2c7d34e16b7@webmail.gpfsug.org> Hi all, We've arranged the next 'Meet the Devs' event to take place in Edinburgh on Friday 23rd October from 10:30/11am until 3/3:30pm. Location: Room 2009a, Information Services, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD Google maps link: https://goo.gl/maps/Ta7DQ Agenda: - GUI - 4.2 Updates/show and tell - Open conversation on any areas of interest attendees may have Lunch and refreshments will be provided. Please email me (secretary at gpfsug.org) if you would like to attend including any particular topics of interest you would like to discuss. We hope to see you there! Best wishes, -- Claire O'Toole GPFS User Group Secretary +44 (0)7508 033896 www.gpfsug.org From Robert.Oesterlin at nuance.com Wed Sep 30 18:05:37 2015 From: Robert.Oesterlin at nuance.com (Oesterlin, Robert) Date: Wed, 30 Sep 2015 17:05:37 +0000 Subject: [gpfsug-discuss] User Group Meeting at SC15 - Call for user presentations Message-ID: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com> We?re still looking for a few more user presentations for the SC15 user group meeting. They don?t need to be lengthy or complicated ? just tells what you are doing with Spectrum Scale (GPFS). 
If you could drop me a note to me: - Indicating you are coming to SC15 and if you are attending the user group meeting - If you are willing to do a short presentation on your use of Spectrum Scale (GPFS) My email is robert.oesterlin @ nuance.com Bob Oesterlin GPFS-UG USA Co-principal Nuance Communications -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.gasthuber at desy.de Wed Sep 30 19:56:38 2015 From: martin.gasthuber at desy.de (Martin Gasthuber) Date: Wed, 30 Sep 2015 20:56:38 +0200 Subject: [gpfsug-discuss] User Group Meeting at SC15 - Call for user presentations In-Reply-To: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com> References: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com> Message-ID: Hi Robert, i will attend the meeting and (if i read the agenda correctly ;-) will also give a presentation about out GPFS setup for data taking and analysis in photon science @DESY. best regards, Martin > On 30 Sep, 2015, at 19:05, Oesterlin, Robert wrote: > > We?re still looking for a few more user presentations for the SC15 user group meeting. They don?t need to be lengthy or complicated ? just tells what you are doing with Spectrum Scale (GPFS). > > If you could drop me a note to me: > > - Indicating you are coming to SC15 and if you are attending the user group meeting > - If you are willing to do a short presentation on your use of Spectrum Scale (GPFS) > > My email is robert.oesterlin @ nuance.com > > Bob Oesterlin > GPFS-UG USA Co-principal > Nuance Communications > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From ckrafft at de.ibm.com Wed Sep 2 09:24:37 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Wed, 2 Sep 2015 10:24:37 +0200 Subject: [gpfsug-discuss] Any experiences with GSS/ESS and DB2 Message-ID: <201509020825.t828Ppho005861@d06av09.portsmouth.uk.ibm.com> Hi there, out of curiosity :-): Is anyone running a solution with DB2 and GPFS GNR based GSS/ESS? Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0E481995.gif Type: image/gif Size: 1851 bytes Desc: not available URL: From viccornell at gmail.com Wed Sep 2 13:36:08 2015 From: viccornell at gmail.com (Vic Cornell) Date: Wed, 2 Sep 2015 13:36:08 +0100 Subject: [gpfsug-discuss] $k drives and Multi-cluster Message-ID: <14DC8ADF-F1C2-40AB-B7B6-78791917B0C7@gmail.com> Hi All, Here?s one I can?t find in the documentation - I understand that you need GPFS 4.1 to support 4K disk sectors. Can I mount a 4.1 filesystem with 4k drives onto a GPFS 3.5 filesystem via multi cluster? 
From S.J.Thompson at bham.ac.uk Thu Sep 3 19:52:35 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Thu, 3 Sep 2015 18:52:35 +0000 Subject: [gpfsug-discuss] Using HAWC (write cache) In-Reply-To: References: Message-ID: Just to follow up, I've been sent an efix today which hopefully will resolve this (and also the other LROC bugs), so I'm guessing this fix will make it out generally in 4.1.1-02 Will be testing the fix out over the next few days. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 20:24 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, This appears to be a mistake, as using clients for the System.log pool should not require a server license (should be similar to lroc).... thanks for opening the PMR... Dean Hildebrand IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/27/2015 12:42:47 AM---Hi Dean, Thanks. I wa]"Simon Thompson (Research Computing - IT Services)" ---08/27/2015 12:42:47 AM---Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "va From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/27/2015 12:42 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Dean, Thanks. I wasn't sure if the system.log disks on clients in the remote cluster would be "valid" as they are essentially NSDs in a different cluster from where the storage cluster would be, but it sounds like it is. Now if I can just get it working ... Looking in mmfsfuncs: if [[ $diskUsage != "localCache" ]] then combinedList=${primaryAdminNodeList},${backupAdminNodeList} IFS="," for server in $combinedList do IFS="$IFS_sv" [[ -z $server ]] && continue $grep -q -e "^${server}$" $serverLicensedNodes > /dev/null 2>&1 if [[ $? -ne 0 ]] then # The node does not have a server license. printErrorMsg 118 $mmcmd $server return 1 fi IFS="," done # end for server in ${primaryAdminNodeList},${backupAdminNodeList} IFS="$IFS_sv" fi # end of if [[ $diskUsage != "localCache" ]] So unless the NSD device usage=localCache, then it requires a server License when you try and create the NSD, but localCache cannot have a storage pool assigned. I've opened a PMR with IBM. Simon From: Dean Hildebrand > Reply-To: gpfsug main discussion list > Date: Thursday, 27 August 2015 01:22 To: gpfsug main discussion list > Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Hi Simon, HAWC leverages the System.log (or metadata pool if no special log pool is defined) pool.... so its independent of local or multi-cluster modes... small writes will be 'hardened' whereever those pools are defined for the file system.
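To put the stanza details from this thread in one place, here is a minimal, hedged sketch of what creating a system.log NSD and switching HAWC on could look like once the licensing efix mentioned above is applied. The device, node and NSD names are taken from Simon's example quoted in this thread, the file system name (gpfs0) is invented, the usage value follows the discussion rather than a tested recipe, and the 64K threshold is just an example near the documented maximum at 4.1.1 - check the mmcrnsd, mmadddisk and mmchfs documentation for your level before copying any of it.

  # Sketch only - names, pool usage and sizes are assumptions, not a tested recipe.
  cat > /tmp/hawc.stanza <<'EOF'
  %nsd: device=sdb2
    nsd=cl0901u17_hawc_sdb2
    servers=cl0901u17
    usage=metadataOnly
    pool=system.log
    failureGroup=90117
  EOF

  /usr/lpp/mmfs/bin/mmcrnsd -F /tmp/hawc.stanza
  /usr/lpp/mmfs/bin/mmadddisk gpfs0 -F /tmp/hawc.stanza

  # HAWC itself is enabled per file system by setting a non-zero write cache
  # threshold (0 disables it):
  /usr/lpp/mmfs/bin/mmchfs gpfs0 --write-cache-threshold 64K

  # For comparison, LROC devices are declared with usage=localCache and no pool.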
Dean Hildebrand IBM Master Inventor and Manager | Cloud Storage Software IBM Almaden Research Center [Inactive hide details for "Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other ques]"Simon Thompson (Research Computing - IT Services)" ---08/26/2015 05:58:12 AM---Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a From: "Simon Thompson (Research Computing - IT Services)" > To: gpfsug main discussion list > Date: 08/26/2015 05:58 AM Subject: Re: [gpfsug-discuss] Using HAWC (write cache) Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Oh and one other question about HAWC, does it work when running multi-cluster? I.e. Can clients in a remote cluster have HAWC devices? Simon On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)" > wrote: >Hi, > >I was wondering if anyone knows how to configure HAWC which was added in >the 4.1.1 release (this is the hardened write cache) >(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spect >r >um.scale.v4r11.adv.doc/bl1adv_hawc_using.htm) > >In particular I'm interested in running it on my client systems which have >SSDs fitted for LROC, I was planning to use a small amount of the LROC SSD >for HAWC on our hypervisors as it buffers small IO writes, which sounds >like what we want for running VMs which are doing small IO updates to the >VM disk images stored on GPFS. > >The docs are a little lacking in detail of how you create NSD disks on >clients, I've tried using: >%nsd: device=sdb2 > nsd=cl0901u17_hawc_sdb2 > servers=cl0901u17 > pool=system.log > failureGroup=90117 > >(and also with usage=metadataOnly as well), however mmcrsnd -F tells me >"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license >designation" > > >Which is correct as its a client system, though HAWC is supposed to be >able to run on client systems. I know for LROC you have to set >usage=localCache, is there a new value for using HAWC? > >I'm also a little unclear about failureGroups for this. The docs suggest >setting the HAWC to be replicated for client systems, so I guess that >means putting each client node into its own failure group? > >Thanks > >Simon > >_______________________________________________ >gpfsug-discuss mailing list >gpfsug-discuss at gpfsug.org >http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss [attachment "graycol.gif" deleted by Dean Hildebrand/Almaden/IBM] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: graycol.gif URL: From kraemerf at de.ibm.com Thu Sep 3 20:16:31 2015 From: kraemerf at de.ibm.com (Frank Kraemer) Date: Thu, 3 Sep 2015 21:16:31 +0200 Subject: [gpfsug-discuss] GPFS for DBs & more In-Reply-To: References: Message-ID: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> Have a look here: > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? 
IBM Spectrum Scale 4.1 is certified with Oracle Database 12cR1 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10853 IBM Spectrum Scale tuning guidelines for deploying SAS http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106348 IBM System Storage Architecture and Configuration Guide for SAP HANA TDI (tailored datacenter integration) V2.2 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102347 Microsoft SharePoint data management solution using IBM Spectrum Scale and AvePoint DocAve http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102580 Consolidated hardware for video solutions http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102576 On Premise File Sync and Share Solution Using IBM Spectrum Scale for Object Storage and ownCloud http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102581 For DB2 pureScale GPFS is a *must* http://www.ibm.com/software/data/db2/linux-unix-windows/purescale/ -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany **** Hello, a new whitepaper was published that describes the configuration of Spectrum Protect (and Spectrum Archive implicitly) in a Spectrum Scale Active File Management (AFM) environment. Beside an introduction to AFM three major user scenarios (disaster recovery, branch office, system migration) are explained. For each of the scenarios the combination of AFM functions with Spectrum Protect backup functions and Spectrum Protect and Spectrum Archive HSM functions are described in detail including challenges and recommendations for the specified setup. The paper was written to help technical sales teams and system architects/administrators to understand the mechanic behind the combination of these Spectrum Storage products. Please share this information. Find the paper here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Configuring%20IBM%20Spectrum%20Scale%20Active%20File%20Management Greetings, Dominic. ______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From DCR at ch.ibm.com Thu Sep 3 21:08:33 2015 From: DCR at ch.ibm.com (David Cremese) Date: Thu, 3 Sep 2015 22:08:33 +0200 Subject: [gpfsug-discuss] DBs over GPFS Message-ID: An HTML attachment was scrubbed... URL: From chair at gpfsug.org Thu Sep 3 21:42:30 2015 From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson)) Date: Thu, 03 Sep 2015 21:42:30 +0100 Subject: [gpfsug-discuss] GPFS UG Meeting at Computing Insight UK Message-ID: Hi, Our next UK based group meeting will be part of the agenda for Computing Insight UK which will be held on 8th/9th December at the Ricoh Arena, Coventry. The meeting will be a short (2 hour) breakout session at CIUK. More details on CIUK are at: http://www.stfc.ac.uk/news-events-and-publications/events/computing-insight -uk-2015/ Please note that you must be registered to attend CIUK to attend the GPFS UG meeting, during the registration process you will get the option to register for the workshops which includes the GPFS UG. 
I'm also looking for someone to give a short user presentation on your use of GPFS in your environment, so if this is something you are interested in, please let me know. We're hoping to have a few devs available at the group, and will be looking at some of the 4.2 features, we'll also be including the opportunity to discuss GPFS with any comments or areas for development you'd like to look at. Finally, we're already planning the May 2016 event, and I hope to be able to send our a save the date in the next few weeks. Simon (GPFS UG Chair) From dhildeb at us.ibm.com Thu Sep 3 22:32:20 2015 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Thu, 3 Sep 2015 14:32:20 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: Hi Zachary, VMWare via NFS to GPFS is a great option as several new features have been added to GPFS to support VM workloads over the last couple years, including file-grained dirty bits (FGDB) for tracking updates at 4KB granularity and HAWC for buffering small synchronous writes in fast storage. Dean Hildebrand IBM Almaden Research Center From: Zachary Giles To: gpfsug main discussion list Date: 09/03/2015 08:00 AM Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:43:09 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:43:09 +0200 Subject: [gpfsug-discuss] GPFS for DBs & more In-Reply-To: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> References: <201509031917.t83JHtuU011299@d06av04.portsmouth.uk.ibm.com> Message-ID: <201509040644.t846iCFa007119@d06av05.portsmouth.uk.ibm.com> ... and what about classic DB2? 
Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: Frank Kraemer/Germany/IBM at IBMDE To: gpfsug-discuss at gpfsug.org Date: 03.09.2015 21:19 Subject: [gpfsug-discuss] GPFS for DBs & more Sent by: gpfsug-discuss-bounces at gpfsug.org Have a look here: > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? IBM Spectrum Scale 4.1 is certified with Oracle Database 12cR1 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10853 IBM Spectrum Scale tuning guidelines for deploying SAS http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD106348 IBM System Storage Architecture and Configuration Guide for SAP HANA TDI (tailored datacenter integration) V2.2 http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102347 Microsoft SharePoint data management solution using IBM Spectrum Scale and AvePoint DocAve http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102580 Consolidated hardware for video solutions http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102576 On Premise File Sync and Share Solution Using IBM Spectrum Scale for Object Storage and ownCloud http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102581 For DB2 pureScale GPFS is a *must* http://www.ibm.com/software/data/db2/linux-unix-windows/purescale/ -frank- Frank Kraemer IBM Consulting IT Specialist / Client Technical Architect Hechtsheimer Str. 2, 55131 Mainz mailto:kraemerf at de.ibm.com voice: +49171-3043699 IBM Germany **** Hello, a new whitepaper was published that describes the configuration of Spectrum Protect (and Spectrum Archive implicitly) in a Spectrum Scale Active File Management (AFM) environment. Beside an introduction to AFM three major user scenarios (disaster recovery, branch office, system migration) are explained. For each of the scenarios the combination of AFM functions with Spectrum Protect backup functions and Spectrum Protect and Spectrum Archive HSM functions are described in detail including challenges and recommendations for the specified setup. The paper was written to help technical sales teams and system architects/administrators to understand the mechanic behind the combination of these Spectrum Storage products. Please share this information. Find the paper here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Configuring%20IBM%20Spectrum%20Scale%20Active%20File%20Management Greetings, Dominic. ______________________________________________________________________________________________________________ Dominic Mueller-Wicke | IBM Spectrum Protect Development | Technical Lead | +49 7034 64 32794 | dominic.mueller at de.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 06252012.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:46:34 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:46:34 +0200 Subject: [gpfsug-discuss] DBs over GPFS In-Reply-To: References: Message-ID: <201509040647.t846lR86009262@d06av01.portsmouth.uk.ibm.com> ... forgive me and forget my previous email - started reading sequentially and did not see David's email early enough 8-) So "regular" DB2 seems also covered - although the information is a bit sparse ... Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: David Cremese To: gpfsug-discuss at gpfsug.org Date: 03.09.2015 22:08 Subject: Re: [gpfsug-discuss] DBs over GPFS Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Zach, There's a paper posted on IBM DeveloperWorks, describing best practices for running DB2 over GPFS: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Wc9a068d7f6a6_4434_aece_0d297ea80ab1/page/DB2%20databases%20and%20the%20IBM%20General%20Parallel%20File%20System All the best, David Cremese dcr at ch.ibm.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0D589989.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From ckrafft at de.ibm.com Fri Sep 4 07:53:51 2015 From: ckrafft at de.ibm.com (Christoph Krafft) Date: Fri, 4 Sep 2015 08:53:51 +0200 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: <201509040554.t845skZ4014649@d06av10.portsmouth.uk.ibm.com> Hi Zach, VMware is covered via Pass-through Raw Device Mapping (RDM) with physical compatibility mode if you want direct disk access inside the VM. Otherwise works as a "normal" GPFS client Go to: "Table 28. VMware support matrix" @ http://www-01.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html Have a client using this in production with x86 RHEL running on top of VMware ... it works well. They use RDM since the VMs do have disk access directly. 
Mit freundlichen Gr??en / Sincerely Christoph Krafft Client Technical Specialist - Power Systems, IBM Systems Certified IT Specialist @ The Open Group Phone: +49 (0) 7034 643 2171 IBM Deutschland GmbH Mobile: +49 (0) 160 97 81 86 12 Hechtsheimer Str. 2 Email: ckrafft at de.ibm.com 55131 Mainz Germany IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter Gesch?ftsf?hrung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, HRB 14562 / WEEE-Reg.-Nr. DE 99369940 From: Zachary Giles To: gpfsug main discussion list Date: 03.09.2015 16:59 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? Sent by: gpfsug-discuss-bounces at gpfsug.org Hello Everyone, Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized system in production, hundreds of nodes, lots of tuning etc. Not a newb. :) Looking for opinions on running database engines backed by GPFS. Has anyone run any backed by GPFS and what did you think about it? I realize there are tuning guides and guide-lines for running different DBs on different file systems, but there seems to be a lack of best-practices for doing so on GPFS. For example, usually you don't run DBs backed by NFS due to locking, cacheing etc.. You can tune those out with sync, hard, etc, but, still the best practice is to use a local file system. As GPFS is hybrid, and used for many apps that do have hard requirements such as Cinder block storage, science apps, etc, and has proper byte-level locking.. it seems like it would be semi-equal to a lock file system. Does anyone have any opinions, experiences, or recommendations for running DBs backed by GPFS? Also will accept horror stories, gotcha's, and "dont do it's". :) On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ecblank.gif Type: image/gif Size: 45 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0D771085.gif Type: image/gif Size: 1851 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From S.J.Thompson at bham.ac.uk Fri Sep 4 07:57:04 2015 From: S.J.Thompson at bham.ac.uk (Simon Thompson (Research Computing - IT Services)) Date: Fri, 4 Sep 2015 06:57:04 +0000 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: When you say VMware, do you mean to the hypervisor or vms? Running vms can of course be gpfs clients. Protocol servers use nfs ganesha server, but I've only looked at smb support. 
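For anyone who wants to try the protocol-server (CES/Ganesha) route for a VMware datastore, a hedged sketch of defining an NFS export on 4.1.1 might look like the lines below. The path, client subnet and export options are assumptions for illustration, and the exact mmnfs syntax and defaults should be checked against the documentation for your release.

  # Sketch only - verify flags with 'man mmnfs'; assumes CES protocol nodes are
  # already deployed and that /gpfs/gpfs0/vmstore exists.
  /usr/lpp/mmfs/bin/mmnfs export add "/gpfs/gpfs0/vmstore" \
      --client "10.10.0.0/24(Access_Type=RW,Squash=no_root_squash)"

  # Show the resulting Ganesha export definitions:
  /usr/lpp/mmfs/bin/mmnfs export list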
Simon ________________________________________ From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] Sent: 03 September 2015 15:59 To: gpfsug main discussion list Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? On that same note... How about VMware? Obviously I guess really the only way would be via NFS export.. which cNFS was .. not the best at (my opinion). Maybe Protocol Servers are better? Maybe also a "don't do it"? Thanks, -Zach -- Zach Giles zgiles at gmail.com _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss From zgiles at gmail.com Fri Sep 4 18:00:26 2015 From: zgiles at gmail.com (Zachary Giles) Date: Fri, 4 Sep 2015 13:00:26 -0400 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: Frank, Edward, David, Christoph, The Oracle 12c certified with GPFS 4.1 looks like they only mention AIX with GPFS.. though it could apply to Linux too I believe. There's no tuning info in it... I do see the DB2 and SAS whitepapers. I've read those over and over trying to tune for Oracle and other things. They're OK, but I'm not really interested in DB2 (Though I'm sure lots of IBM people are.. ), and they also don't seem to say or "Show" much of tuning over different block sizes, direct writes vs not, read pattersn, data integrity, etc. They're still valuable though. I _did_ find moderate info on Oracle, Though it is fairly scattered. I'm doing a bunch of testing with Oracle right now and it's .. finicky .. with GPFS. Yes it works, and there are comments on data integrity here and there about Direct IO and ASync IO bypassing cache.. Oracle has latches, etc. So, seems like you could assume the data is good on GPFS. There's very little in terms of tuning. So far, it seems unhappy with large block sizes, even though it is recommended, but they're calling "512KB" large, so it's all from more than several years ago. Places to look: IBM GPFS 4.1 docs.. there's a section; Oracle 11g "Integration" docs.. probably still applies for 12, though it's removed; Random Blogs What I can't find, and am most interested in, is, info on MySQL and PostgreSQL. I see little blogs here and there saying it will work, and _some_ engines support DirectIO.. but I'm wondering if MySQL will Do The Right Thing (tm) and ensure writes are written and data is good over this "remote" file system. I worry that if it goes offline or we have waiters that it won't make MySQL very happy and there will be data loss. There's already enough stories about MySQL data loss online. I'm wondering if GPFS "feels" like a local disk enough to MySQL that it won't fail in the way NFS does for MySQL. I'm guessing the answer is that with some engines like InnoDB and direct io turned on, it'll be fine and for others it will be whatever you get.. but that's not very reassuring. PostgreSQL seems to have even less info. Dean, I'll look in to those. Thanks. Are those all in 4.1 and in the new protocol servers? Does HAWC work when the client is over NFS? I assume the server would take care of it.. Haven't read much yet. Christoph, Looks like that RDM is only for ESX (the older linux-based hypervisor), not ESXi. AFAIK there's no GPFS client that can run on ESXi yet, so the only options are remote mounting GPFS via NFS on the Hypervisor to store the VMs. 
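Going back to the MySQL durability question above: GPFS presents ordinary POSIX semantics to local processes, including O_DIRECT and fsync, so the usual InnoDB durability settings apply just as they would on a local file system. A hedged sketch of the relevant my.cnf section follows; the settings are standard MySQL/InnoDB options rather than anything GPFS-specific, and none of this is a tested best practice for GPFS - treat it as a starting point only.

  # Sketch only: conservative InnoDB durability settings for a data directory on GPFS.
  cat >> /etc/my.cnf <<'EOF'
  [mysqld]
  innodb_flush_method = O_DIRECT        # bypass the page cache for data files
  innodb_flush_log_at_trx_commit = 1    # flush and fsync the redo log at every commit
  innodb_doublewrite = 1                # keep torn-page protection enabled
  sync_binlog = 1                       # fsync the binary log at every commit
  EOF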
Or, inside the VM, but that's not what I want. Simon, I'm talking about on the hypervisor. Looking for a way to use GPFS to store VMs instead of standing up a SAN, but want it to be safe and consistent. Thus my worry about backing VM disks by NFS backed by GPFS... -Zach On Fri, Sep 4, 2015 at 2:57 AM, Simon Thompson (Research Computing - IT Services) wrote: > When you say VMware, do you mean to the hypervisor or vms? Running vms can of course be gpfs clients. > > Protocol servers use nfs ganesha server, but I've only looked at smb support. > > Simon > ________________________________________ > From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com] > Sent: 03 September 2015 15:59 > To: gpfsug main discussion list > Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? > > On that same note... > How about VMware? > Obviously I guess really the only way would be via NFS export.. which > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > better? Maybe also a "don't do it"? > > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- Zach Giles zgiles at gmail.com From jenocram at gmail.com Fri Sep 4 18:03:06 2015 From: jenocram at gmail.com (Jeno Cram) Date: Fri, 4 Sep 2015 10:03:06 -0700 Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware? In-Reply-To: References: Message-ID: A previous company that I worked for used DB2 Purescale which is basically HA DB2 with GPFS for the filesystem with crm for cluster management. On Sep 3, 2015 10:59 AM, "Zachary Giles" wrote: > Hello Everyone, > > Medium-time user of GPFS, MySQL, PostgreSQL, etc here.. Decent sized > system in production, hundreds of nodes, lots of tuning etc. Not a > newb. :) > Looking for opinions on running database engines backed by GPFS. > Has anyone run any backed by GPFS and what did you think about it? > > I realize there are tuning guides and guide-lines for running > different DBs on different file systems, but there seems to be a lack > of best-practices for doing so on GPFS. > > For example, usually you don't run DBs backed by NFS due to locking, > cacheing etc.. You can tune those out with sync, hard, etc, but, still > the best practice is to use a local file system. > As GPFS is hybrid, and used for many apps that do have hard > requirements such as Cinder block storage, science apps, etc, and has > proper byte-level locking.. it seems like it would be semi-equal to a > lock file system. > > Does anyone have any opinions, experiences, or recommendations for > running DBs backed by GPFS? > Also will accept horror stories, gotcha's, and "dont do it's". :) > > On that same note... > How about VMware? > Obviously I guess really the only way would be via NFS export.. which > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are > better? Maybe also a "don't do it"? 
> > Thanks, > -Zach > > > -- > Zach Giles > zgiles at gmail.com > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bbanister at jumptrading.com Fri Sep 4 19:38:39 2015 From: bbanister at jumptrading.com (Bryan Banister) Date: Fri, 4 Sep 2015 18:38:39 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> Message-ID: <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC'15? I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here's what I've heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you'll note the known conflicts on that date. What I'm asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I'll setup a poll for that, so I can quickly tally answers. I value your feedback, but don't want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG -email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I'll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC'15. However the SC'15 schedule is already packed! 
http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers" session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. 
Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Fri Sep 4 19:44:12 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Fri, 4 Sep 2015 18:44:12 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> Message-ID: <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> Bob sent a draft just a few minutes ago. Should be out yet today I think. -Kristy On Sep 4, 2015, at 2:38 PM, Bryan Banister > wrote: Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC?15? 
I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 
4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers? session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers. Date: Wednesday, October 7th Place: IBM building at 590 Madison Avenue, New York City Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-) Agenda IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June. IBM developer to demo future Graphical User Interface Member of user community to present an experience with using Spectrum Scale (still seeking volunteers for this!) Open Q&A with the development team We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event. We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too. As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November aligned with SC15. We will keep you posted on this and also welcome thoughts you may have planning that agenda. Best, Kristy GPFS UG - USA Principal _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. 
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss ________________________________ Note: This email is for the confidential use of the named addressee(s) only and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you are hereby notified that any review, dissemination or copying of this email is strictly prohibited, and to please notify the sender immediately and destroy this email and any attachments. Email transmission cannot be guaranteed to be secure or error-free. The Company, therefore, does not make any guarantees as to the completeness or accuracy of this email or any attachments. This email is for informational purposes only and does not constitute a recommendation, offer, request or solicitation of any kind to buy, sell, subscribe, redeem or perform any type of transaction of a financial product. _______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From kallbac at iu.edu Fri Sep 4 19:47:12 2015 From: kallbac at iu.edu (Kallback-Rose, Kristy A) Date: Fri, 4 Sep 2015 18:47:12 +0000 Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location In-Reply-To: <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> References: <3c455d1f950453a7c7fba248639f74f7@webmail.gpfsug.org> <21BC488F0AEA2245B2C3E83FC0B33DBB05BC27A3@CHI-EXCHANGEW1.w2k.jumptrading.com> <55E8B841-8D5F-4468-8BFF-B0A94C6E5A6A@iu.edu> <52BC1221-0DEA-4BD2-B3CA-80C40404C65F@iu.edu> <21BC488F0AEA2245B2C3E83FC0B33DBB05C1A7AE@CHI-EXCHANGEW1.w2k.jumptrading.com> <46C277BC-BE1A-42C3-A273-461A0CABC128@iu.edu> Message-ID: PS - Applying pressure not an issue. Thanks for helping push this forward. -Kristy On Sep 4, 2015, at 2:44 PM, Kallback-Rose, Kristy A > wrote: Bob sent a draft just a few minutes ago. Should be out yet today I think. -Kristy On Sep 4, 2015, at 2:38 PM, Bryan Banister > wrote: Hi Kristy, Sorry to press, but when will you have the poll open to get the vote for the day of the GPFS US UG at SC?15? I really would like to get arrangements set as soon as possible, -Bryan From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of Kristy Kallback-Rose Sent: Saturday, August 29, 2015 3:24 AM To: gpfsug main discussion list Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location OK, here?s what I?ve heard back from Pallavi (IBM) regarding availability of the folks from IBM relevant for the UG meeting. In square brackets you?ll note the known conflicts on that date. What I?m asking for here is, if you know of any other conflicts on a given day, please let me know by responding via email. Knowing what the conflicts are will help people vote for a preferred day. Please to not respond via email with a vote for a day, I?ll setup a poll for that, so I can quickly tally answers. I value your feedback, but don?t want to tally a poll via email :-) 1. All Day Sunday November 15th [Tutorial Day] 2. 
Monday November 16th [PDSW Day, possible conflict with DDN UG ?email out to DDN to confirm] 3. Friday November 20th [Start later in the day] I?ll wait a few days to hear back and get the poll out next week. Best, Kristy On Aug 20, 2015, at 3:00 PM, Kallback-Rose, Kristy A > wrote: It sounds like an availability poll would be helpful here. Let me confer with Bob (co-principal) and Janet and see what we can come up with. Best, Kristy On Aug 20, 2015, at 12:12 PM, Dean Hildebrand > wrote: Hi Bryan, Sounds great. My only request is that it not conflict with the 10th Parallel Data Storage Workshop (PDSW15), which is held this year on the Monday, Nov. 16th. (www.pdsw.org). I encourage everyone to attend both this work shop and the GPFS UG Meeting :) Dean Hildebrand IBM Almaden Research Center Bryan Banister ---08/20/2015 08:42:23 AM---Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attendi From: Bryan Banister > To: gpfsug main discussion list > Date: 08/20/2015 08:42 AM Subject: Re: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Sent by: gpfsug-discuss-bounces at gpfsug.org ________________________________ Hi Kristy, Thanks for getting the dates secured for the MTD session. I'm looking forward to attending it and continuing the communal discussions! I am also excited about the one day GPFS US UG Meeting at SC?15. However the SC?15 schedule is already packed! http://sc15.supercomputing.org/schedule I would like to have the discussion regarding the schedule of this meeting opened up to the mailing list to help ensure that it meets the needs of those planning to attend as much as possible. Scheduling this one day meeting is going to be very difficult, and some issues I foresee are the following: 1) It will be hard to avoid conflicts with scheduled SC'15 sessions/tutorials/workshops, and also the other user group meetings, such as the DDN User Group and the expected full day Intel session (which I guess will be on the Sunday, Nov 15th?) 2) Have people already booked their flights and hotels such that attending the meeting on the Saturday before or Saturday after the conference is not feasible? 2) Will IBM presenters be available on the Saturday before or after? 3) Is having the full day meeting on the Saturday before or after too much to ask given how long the conference already is? 3) Would we rather have the GPFS US UG meeting usurp the last day of the conference that is sometimes considered not worth attending? 4) Do attendees have other obligations to their booths at the show that would prevent them from attending the Saturday before, or the Friday Nov 20th, or possibly Saturday Nov 21st? As for me, I would prefer to have this first SC'15 US UG meeting on the Friday, Nov 20th, starting at roughly 1PM (after lunch) and maybe have it go for 6 hours. Or if we still want/need a full day, then start it at 9AM on Friday. I encourage others to provide their feedback as quickly as possible so that appropriate hotels and other travel arrangements can be made. The conference hotels are all but booked already! Cheers, -Bryan -----Original Message----- From: gpfsug-discuss-bounces at gpfsug.org [mailto:gpfsug-discuss-bounces at gpfsug.org] On Behalf Of usa-principal-gpfsug.org Sent: Thursday, August 20, 2015 8:24 AM To: gpfsug-discuss at gpfsug.org Subject: [gpfsug-discuss] Inaugural US "Meet the Developers" - Confirmed Date/Location Greetings all. 
I am very happy to announce that we have a confirmed date and location for our inaugural US "Meet the Developers" session. Details are below. Many thanks to Janet for her efforts in organizing the venue and speakers.

Date: Wednesday, October 7th
Place: IBM building at 590 Madison Avenue, New York City
Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-)

Agenda:
- IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June
- IBM developer to demo the future Graphical User Interface
- Member of the user community to present an experience with using Spectrum Scale (still seeking volunteers for this!)
- Open Q&A with the development team

We have heard from many of you so far who would like to attend, which is great! However, we still have plenty of room, so if you would like to attend please just e-mail Janet Ellsworth (janetell at us.ibm.com). Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event.

We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too.

As mentioned before, there are ongoing discussions regarding a day-long GPFS US UG event in Austin during November, aligned with SC15. We will keep you posted on this, and we also welcome any thoughts you may have on planning that agenda.

Best,
Kristy
GPFS UG - USA Principal
From dhildeb at us.ibm.com Fri Sep 4 19:41:35 2015
From: dhildeb at us.ibm.com (Dean Hildebrand)
Date: Fri, 4 Sep 2015 11:41:35 -0700
Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware?
In-Reply-To:
References:
Message-ID:

> Dean,
> I'll look into those. Thanks. Are those all in 4.1 and in the new
> protocol servers? Does HAWC work when the client is over NFS? I assume
> the server would take care of it.. Haven't read much yet.

FGDB was in 3.4 I believe, and HAWC is in 4.1.1 PTF1... but there are other items that helped performance for these environments, so using the latest is always best :) Yes, HAWC is independent of NFS - it's all in GPFS.

> Christoph,
> Looks like that RDM is only for ESX (the older linux-based
> hypervisor), not ESXi. AFAIK there's no GPFS client that can run on
> ESXi yet, so the only options are remote mounting GPFS via NFS on the
> hypervisor to store the VMs.
> Or, inside the VM, but that's not what I want.
>
> Simon,
> I'm talking about on the hypervisor. Looking for a way to use GPFS to
> store VMs instead of standing up a SAN, but want it to be safe and
> consistent. Thus my worry about backing VM disks by NFS backed by
> GPFS...

More than 50% of VMware deployments use NFS... and NFS+GPFS obeys NFS semantics, so together your VMs are just as safe as with a SAN.

Dean

> -Zach
>
> On Fri, Sep 4, 2015 at 2:57 AM, Simon Thompson (Research Computing -
> IT Services) wrote:
> > When you say VMware, do you mean to the hypervisor or vms? Running
> > vms can of course be gpfs clients.
> >
> > Protocol servers use nfs ganesha server, but I've only looked at
> > smb support.
> >
> > Simon
> > ________________________________________
> > From: gpfsug-discuss-bounces at gpfsug.org [gpfsug-discuss-bounces at gpfsug.org] on behalf of Zachary Giles [zgiles at gmail.com]
> > Sent: 03 September 2015 15:59
> > To: gpfsug main discussion list
> > Subject: [gpfsug-discuss] GPFS for DBs..MySQL, PGSQL, etc; How about VMware?
> >
> > On that same note...
> > How about VMware?
> > Obviously I guess really the only way would be via NFS export.. which
> > cNFS was .. not the best at (my opinion). Maybe Protocol Servers are
> > better? Maybe also a "don't do it"?
> > Thanks,
> > -Zach
> >
> > --
> > Zach Giles
> > zgiles at gmail.com
>
> --
> Zach Giles
> zgiles at gmail.com

From Robert.Oesterlin at nuance.com Fri Sep 4 19:57:47 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Fri, 4 Sep 2015 18:57:47 +0000
Subject: [gpfsug-discuss] POLL: Preferred Day/Time for the GPFS UG Meeting at SC15
Message-ID:

If you are going to Supercomputing 2015 in Austin (November), let us know when you'd like to have a user group meeting. There are no ideal times - please complete this survey with your preferred time and we'll post the results.

https://www.surveymonkey.com/r/6MKCHML

Bob Oesterlin - GPFS-UG USA co-principal

From Robert.Oesterlin at nuance.com Tue Sep 8 12:07:11 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Tue, 8 Sep 2015 11:07:11 +0000
Subject: [gpfsug-discuss] Reminder - POLL: Preferred Day/Time for the GPFS UG Meeting at SC15
Message-ID:

** Poll closes at 6 PM US EST on Wed 9/9 **

If you are going to Supercomputing 2015 in Austin (November), let us know when you'd like to have a user group meeting. There are no ideal times - please complete this survey with your preferred time and we'll post the results.

https://www.surveymonkey.com/r/6MKCHML

Bob Oesterlin - GPFS-UG USA co-principal

From Robert.Oesterlin at nuance.com Wed Sep 9 20:52:04 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 9 Sep 2015 19:52:04 +0000
Subject: [gpfsug-discuss] Survey says! - SC15 User Group Meeting - Survey results
Message-ID: <05C90986-CB0F-41C0-882F-4127F39F412F@nuance.com>

The survey has been closed - here are the results. There was a bit more spread in the results than I expected, but Sunday was the winner, with Sunday afternoon being the most preferred time.

NOTE: This does not represent a *definitive* "we'll have it on Sunday afternoon". I fully expect this will be the case, but it will need to be confirmed by IBM and the other GPFSUG chairs. For travel planning purposes, assume Sunday afternoon. I/we will post if anything changes.

Answer choices and responses:
- Sunday November 15th: Morning - 13.64% (3)
- Sunday November 15th: Afternoon - 40.91% (9)
- Monday November 16th: Morning (will overlap with PDSW) - 22.73% (5)
- Monday November 16th: Afternoon (will overlap with PDSW and/or the DDN User Group, 2:30-6 PM) - 9.09% (2)
- Friday November 20th: Afternoon (starting later so attendees can make it to the panels) - 13.64% (3)
Total responses: 22

Bob Oesterlin
GPFS-UG Co-Principal

From chair at gpfsug.org Thu Sep 10 11:33:29 2015
From: chair at gpfsug.org (GPFS UG Chair (Simon Thompson))
Date: Thu, 10 Sep 2015 11:33:29 +0100
Subject: [gpfsug-discuss] Save the date - 2016 UK GPFS User Group!
Message-ID:

Save the date - 17th/18th May 2016!

Following feedback from the previous groups, we're going for a two-day GPFS UG event next year. We've now confirmed the booking at IBM South Bank for the two days, so please pencil 17th and 18th May 2016 into your diaries for the GPFS UG.

It's a little early for us to think about the agenda in too much detail, though the first day is likely to follow the previous format with a mixture of IBM and user talks, and on the second day we're looking at breaking into groups to focus on specific areas or features. If there are topics you'd like to see on the agenda, then please do let us know!

And don't forget, the next mini-meet will be at Computing Insight UK in December; note that you must be registered for CIUK to attend the user group.

And finally, we're also working on the dates for the next meet the devs event, which should be taking place in Edinburgh (thanks to Orlando for offering a venue). Once we've got the dates organised, we'll open registration for the session.

Simon
UG Chair

From josh.cullum at cfms.org.uk Thu Sep 10 12:34:16 2015
From: josh.cullum at cfms.org.uk (Josh Cullum)
Date: Thu, 10 Sep 2015 11:34:16 +0000
Subject: [gpfsug-discuss] Setting Quotas
Message-ID:

Hi All,

We're looking into 4.1.1 (we finally got it set up) so that we can start to plan the integration and update of our existing GPFS systems, and we are looking to do something along the following lines.

Our current setup (running GPFS 3.4) looks something like this:

mmlsfileset prgpfs
Filesets in file system 'prgpfs':
Name         Status    Path
root         Linked    /gpfs
services     Linked    /gpfs/services
cfms         Linked    /gpfs/cfms

where each fileset has a quota and nothing in that fileset can grow above it. The filesets contain a home directory, a working directory and an apps directory, all controlled by a particular UNIX (AD) group.

In our new GPFS cluster, we would like to be able to create a fileset for each home directory within each organisation directory, so that the structure looks like the below:

Filesets in file system 'prgpfs':
Name         Status    Path
root         Linked    /gpfs
services     Linked    /gpfs/services
cfms         Linked    /gpfs/cfms
apps         Linked    /gpfs/apps
cfms-home    Linked    /gpfs/cfms/home

where the organisation fileset has a 10TB fileset quota covering the working directory and the apps directory, and the organisation-home fileset then has a quota of 500GB per user.

I think this is all possible within 4.1.1 from reading the documentation, where a user's quota only applies to a particular fileset (using the mmdefedquota -u prgpfs:organisation-home command) and so does not affect the /gpfs/organisation working dir and apps dir. Can anyone confirm this?

We would like to then use default quotas so that every organisation-home fileset has the 500GB-per-user rule applied.
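In outline, the commands we think we will need look something like the sketch below (untested so far, and the option names are just from our reading of the 4.1.1 docs, so please correct us if we have any of this wrong):

# Enable quotas and per-fileset quotas on the file system
# (assumption: --perfileset-quota is the right flag for this)
mmchfs prgpfs -Q yes --perfileset-quota

# One fileset per organisation, plus a separate home fileset linked underneath it
mmcrfileset prgpfs cfms
mmcrfileset prgpfs cfms-home
mmlinkfileset prgpfs cfms -J /gpfs/cfms
mmlinkfileset prgpfs cfms-home -J /gpfs/cfms/home

# 10TB block limit on the organisation fileset as a whole
mmedquota -j prgpfs:cfms

# 500GB default per-user limit that should only apply inside cfms-home
mmdefedquota -u prgpfs:cfms-home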
Does anyone know if it is possible to wildcard the GPFS quota rule so that it applies to all filesets with "-home" in the name?

Kind Regards,
Josh Cullum

--
Josh Cullum // IT Systems Administrator
e: josh.cullum at cfms.org.uk // t: 0117 906 1106 // w: www.cfms.org.uk
CFMS Services Ltd // Bristol & Bath Science Park // Dirac Crescent // Emersons Green // Bristol // BS16 7FR

From usa-principal at gpfsug.org Thu Sep 10 21:38:12 2015
From: usa-principal at gpfsug.org (usa-principal-gpfsug.org)
Date: Thu, 10 Sep 2015 16:38:12 -0400
Subject: [gpfsug-discuss] Reminder: Inaugural US "Meet the Developers"
Message-ID: <3d0f058f40ae93d5d06eb3ea23f5e21e@webmail.gpfsug.org>

Hello Everyone,

Here is a reminder about our inaugural US "Meet the Developers" session. Details are below; please send an e-mail to Janet Ellsworth (janetell at us.ibm.com) by next Friday, September 18th, if you wish to attend. Janet is on the product management team for Spectrum Scale and is helping with the logistics for this first event.

Date: Wednesday, October 7th
Place: IBM building at 590 Madison Avenue, New York City
Time: 12:30 to 5 PM (Lunch will be served at 12:30, and sessions will start between 1 and 1:30 PM. Afternoon snacks will be served as well :-)

Agenda:
- IBM development architect to present the new protocols support that was released with Spectrum Scale 4.1.1 in June
- IBM developer to demo the future Graphical User Interface
- ***Member of the user community to present an experience with using Spectrum Scale (still seeking volunteers for this!)***
- Open Q&A with the development team

We are happy to have heard from many of you so far who would like to attend. We still have room, however, so please get in touch by the 9/18 date if you would like to attend.

***We also need someone to share an experience or use case scenario with Spectrum Scale for this event, so please let Janet know if you are willing to do that too.***

As you have likely seen, we are also working on the agenda and timing for the day-long GPFS US UG event in Austin during November, aligned with SC15, and there will be more details on that coming soon.

From kraemerf at de.ibm.com Fri Sep 11 07:15:25 2015
From: kraemerf at de.ibm.com (Frank Kraemer)
Date: Fri, 11 Sep 2015 08:15:25 +0200
Subject: [gpfsug-discuss] FYI: WP102585 - Veritas NetBackup with IBM Spectrum Scale Elastic Storage Server (ESS)
Message-ID: <201509110616.t8B6GODX013654@d06av06.portsmouth.uk.ibm.com>

Veritas NetBackup with IBM Spectrum Scale Elastic Storage Server (ESS)

This white paper is a brief overview of a functional and performance proof of concept using Veritas NetBackup with the IBM Elastic Storage Server (ESS) GL4, enabled by IBM Spectrum Scale, formerly known as General Parallel File System (GPFS). The intended audience of this paper is technical, but the paper also contains high-level non-technical content. The paper describes and documents some of the NetBackup disk target configuration steps performed as part of the functional testing, and it also reports and analyzes the PoC performance results.

http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102585

Frank Kraemer
IBM Consulting IT Specialist / Client Technical Architect
Hechtsheimer Str. 2, 55131 Mainz
mailto:kraemerf at de.ibm.com
voice: +49171-3043699
IBM Germany

From Robert.Oesterlin at nuance.com Mon Sep 14 15:46:04 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Mon, 14 Sep 2015 14:46:04 +0000
Subject: [gpfsug-discuss] FLASH: Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788) (2015.09.12)
In-Reply-To: <657532721.4156481442059132157.JavaMail.webinst@w30021>
References: <657532721.4156481442059132157.JavaMail.webinst@w30021>
Message-ID:

I received this over the weekend -
for those of you not signed up for electronic distribution. It looks to be treated as "moderate" - but I have no idea how worried I should be about it. Does anyone have more information?

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

From: IBM My Notifications
Date: Saturday, September 12, 2015 at 6:58 AM

IBM Spectrum Scale - Security Bulletin: Vulnerability in OpenSSL affects IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 (CVE-2015-1788)

An OpenSSL denial of service vulnerability disclosed by the OpenSSL Project affects GSKit. IBM GPFS V4.1 and IBM Spectrum Scale V4.1.1 use GSKit and addressed the applicable CVE.

From Robert.Oesterlin at nuance.com Tue Sep 15 18:16:00 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Tue, 15 Sep 2015 17:16:00 +0000
Subject: [gpfsug-discuss] GPFS UG Meeting at SC15 - Preliminary agenda
Message-ID: <38EE0F09-7A2F-4031-B201-BA0CEE715A77@nuance.com>

Here is the preliminary agenda for the user group meeting at SC15. We realize that the timing isn't perfect for everyone; hopefully all of you in attendance at SC15 can participate in some or all of these sessions. I'm sure we will all find time to get together outside of this to discuss topics. Thanks to IBM for helping to organize this.

We are soliciting user presentations! (20 mins each) Talk about how you are using GPFS, challenges, etc. Please drop a note to: with your submission or suggestions for topics. If you have comments on the agenda, let us know ASAP as time is short!

Proposed Agenda - Sunday 11/15 - Location TBD

1:00 - 1:15  Introductions, logistics, GPFS-UG overview
1:15 - 2:15  File, Object, HDF & a GUI!: the latest on IBM Spectrum Scale
2:15 - 2:30  Lightning demo of Spectrum Control, with an invitation for a free trial and more discussion during the reception
2:30 - 2:45  Break
2:45 - 3:45  User presentations: User #1 - Nuance (20 mins); User #2 (20 mins); User #3 (20 mins)
3:45 - 4:00  ESS performance testing at the new open Ennovar lab at Wichita State University
4:00 - 4:15  Break
4:15 - 5:30  Panel discussion: "My favorite tool for managing Spectrum Scale is..." (Panel: Nuance, DESY, others to be confirmed)
5:30 -       Reception

Bob Oesterlin
Sr Storage Engineer, Nuance Communications

From Luke.Raimbach at crick.ac.uk Mon Sep 21 09:23:41 2015
From: Luke.Raimbach at crick.ac.uk (Luke Raimbach)
Date: Mon, 21 Sep 2015 08:23:41 +0000
Subject: [gpfsug-discuss] Automatic Inode Expansion for Independent Filesets
Message-ID:

Hi All,

Do independent filesets automatically expand the number of preallocated inodes as needed, up to the maximum, as the root fileset does?

Cheers,
Luke.

Luke Raimbach
Senior HPC Data and Storage Systems Engineer,
The Francis Crick Institute,
Gibbs Building,
215 Euston Road,
London NW1 2BE.

E: luke.raimbach at crick.ac.uk
W: www.crick.ac.uk

The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE.

From bevans at pixitmedia.com Mon Sep 21 12:06:45 2015
From: bevans at pixitmedia.com (Barry Evans)
Date: Mon, 21 Sep 2015 12:06:45 +0100
Subject: [gpfsug-discuss] Automatic Inode Expansion for Independent Filesets
In-Reply-To:
References:
Message-ID: <55FFE4C5.3040305@pixitmedia.com>

Hi Luke,

It does indeed expand automatically. It's a good idea to put quotas and callbacks in place for this, or something that regularly polls the allocated inode count, as it has a tendency to sneak up on you and run out of space!
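To give you an idea, a rough cron-able check might look like the sketch below (untested, and it assumes MaxInodes and AllocInodes are the last two columns of 'mmlsfileset <fs> <fileset> -L' output when no fileset comment is set - do check the field positions against your own output before relying on it):

#!/bin/bash
# Rough sketch: warn when an independent fileset has allocated most of its maximum inodes.
FS=gpfs01          # file system name - illustrative
FSET=cfms-home     # independent fileset to check - illustrative
read -r max alloc <<< "$(mmlsfileset "$FS" "$FSET" -L | tail -n 1 | awk '{print $(NF-1), $NF}')"
if [ -n "$max" ] && [ "$max" -gt 0 ] && [ $(( alloc * 100 / max )) -ge 90 ]; then
    echo "WARNING: fileset $FSET on $FS has allocated $alloc of $max inodes"
fi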
Cheers,
Barry

On 21/09/2015 09:23, Luke Raimbach wrote:
> Hi All,
>
> Do independent filesets automatically expand the number of preallocated inodes as needed, up to the maximum, as the root fileset does?
>
> Cheers,
> Luke.
>
> Luke Raimbach
> Senior HPC Data and Storage Systems Engineer,
> The Francis Crick Institute,
> Gibbs Building,
> 215 Euston Road,
> London NW1 2BE.
>
> E: luke.raimbach at crick.ac.uk
> W: www.crick.ac.uk

--
Barry Evans
Technical Director & Co-Founder
Pixit Media
Mobile: +44 (0)7950 666 248
http://www.pixitmedia.com

From secretary at gpfsug.org Mon Sep 28 13:49:22 2015
From: secretary at gpfsug.org (Secretary GPFS UG)
Date: Mon, 28 Sep 2015 13:49:22 +0100
Subject: [gpfsug-discuss] Meet the Devs comes to Edinburgh!
Message-ID: <402938fb8bcfc79f8feee2c7d34e16b7@webmail.gpfsug.org>

Hi all,

We've arranged the next 'Meet the Devs' event to take place in Edinburgh on Friday 23rd October, from 10:30/11am until 3/3:30pm.

Location: Room 2009a, Information Services, James Clerk Maxwell Building, Peter Guthrie Tait Road, Edinburgh EH9 3FD
Google maps link: https://goo.gl/maps/Ta7DQ

Agenda:
- GUI
- 4.2 updates/show and tell
- Open conversation on any areas of interest attendees may have

Lunch and refreshments will be provided. Please email me (secretary at gpfsug.org) if you would like to attend, and include any particular topics of interest you would like to discuss.

We hope to see you there!

Best wishes,

--
Claire O'Toole
GPFS User Group Secretary
+44 (0)7508 033896
www.gpfsug.org

From Robert.Oesterlin at nuance.com Wed Sep 30 18:05:37 2015
From: Robert.Oesterlin at nuance.com (Oesterlin, Robert)
Date: Wed, 30 Sep 2015 17:05:37 +0000
Subject: [gpfsug-discuss] User Group Meeting at SC15 - Call for user presentations
Message-ID: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com>

We're still looking for a few more user presentations for the SC15 user group meeting. They don't need to be lengthy or complicated - just tell us what you are doing with Spectrum Scale (GPFS).
Please drop me a note:

- indicating that you are coming to SC15 and whether you are attending the user group meeting
- indicating whether you are willing to do a short presentation on your use of Spectrum Scale (GPFS)

My email is robert.oesterlin @ nuance.com

Bob Oesterlin
GPFS-UG USA Co-principal
Nuance Communications

From martin.gasthuber at desy.de Wed Sep 30 19:56:38 2015
From: martin.gasthuber at desy.de (Martin Gasthuber)
Date: Wed, 30 Sep 2015 20:56:38 +0200
Subject: [gpfsug-discuss] User Group Meeting at SC15 - Call for user presentations
In-Reply-To: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com>
References: <3CF518B4-5212-4187-A3D8-32270F6C06D9@nuance.com>
Message-ID:

Hi Robert,

I will attend the meeting and (if I read the agenda correctly ;-) ) will also give a presentation about our GPFS setup for data taking and analysis in photon science @DESY.

Best regards,
Martin

> On 30 Sep, 2015, at 19:05, Oesterlin, Robert wrote:
>
> We're still looking for a few more user presentations for the SC15 user group meeting. They don't need to be lengthy or complicated - just tell us what you are doing with Spectrum Scale (GPFS).
>
> Please drop me a note:
>
> - indicating that you are coming to SC15 and whether you are attending the user group meeting
> - indicating whether you are willing to do a short presentation on your use of Spectrum Scale (GPFS)
>
> My email is robert.oesterlin @ nuance.com
>
> Bob Oesterlin
> GPFS-UG USA Co-principal
> Nuance Communications