From dhildeb at us.ibm.com  Fri Nov  2 19:57:52 2012
From: dhildeb at us.ibm.com (Dean Hildebrand)
Date: Fri, 2 Nov 2012 12:57:52 -0700
Subject: [gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated
Message-ID: 

Hi Orlando,

Thanks for all of your feedback, many great suggestions. Sorry for the late response; I've been trying to go through and digest all the comments from the user group meeting. I'll do my best to forward your suggestions internally.

The one thing I wanted to comment on was that "hot file" identification shipped in GPFS 3.5.0.3. Here is a link to the docs discussing it:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20

Dean Hildebrand
Research Staff Member - Storage Systems
IBM Almaden Research Center

On 25/09/12 14:05, Jez Tucker wrote:
> Hello all
>
> Firstly, can I thank all who attended UG #6. We had a great turnout, and the opportunity to network with more people from IBM was most welcome.
>
> I have uploaded the presentations from UG to this small, catchy URL: http://goo.gl/n1in1
> [Bar the SCCS presentation, awaiting clearance.]
>
> Please have a read of the presentations.
>
> IBM Almaden Labs welcome your feedback regarding pNFS and Panache, as well as FRQs etc.
>
> For instance, one FRQ idea bandied around was a GRIO/QoS implementation for GPFS, e.g.: http://goo.gl/zjkN8
> It would be most helpful if a couple of lines of use-case were alongside each of these.
>
> If these are messaged back to the list for healthy debate, or sent to me directly, I'll put them on the UG website for Almaden Labs to peruse/discuss with us.
>
> Regards,
>
> Jez
>
> p.s. I'll also start to solicit previous presentations for UG < #5, so if you were a speaker, please get in touch.
> ---

Thanks for the great meeting Jez, and Claire et al at OCF.
On feature requests, I think one desirable feature request discussed at the meeting was for "better" performance monitoring tools.

A quick think through the things on my plate which would be eased with new/changed features in GPFS led me to this wishlist:

 - ability to change the designated NSD servers for an NSD without unmounting the filesystem everywhere

 - expansion of the AFM toolchain, including the following to assist with migration of data between filesystems:
    - ability to set a pre-existing fileset as a "cache" of an empty 'home' fileset with AFM, allowing for a push of the data from the "cache" fileset/filesystem to the "home" target fileset/filesystem as a data migration strategy
    - ability to remove an AFM relationship between filesets, preserving data in the 'cache' fileset (and making it, independently, a 'live' fileset)
    - ability to "flip" the 'home'<->'cache' relationship, resulting in a flush from the new 'cache' fileset to the new 'home' fileset

 - better documentation (and, indeed, automation/automagic) on making best use of available memory within NSD servers

 - read caching of data blocks within an NSD server's memory (when acting in "server" mode in a multi-cluster environment where the client nodes do not have direct block access to the disks)

 - "hot file" identification tools/data for policy-based HSM migration

 - some easy and non-invasive method for logging file and folder deletions (for the purposes of expiring backup data without using a separate database of files, in my case)

 - better licensing model (dare I say it - capacity based?)

I'd love to be able to change the blocksize on an existing filesystem too, but I imagine that's not possible.

-- 
Orlando

-- 
-- 
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

-------------- next part --------------
An HTML attachment was scrubbed...
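[List moderator's note: Dean's reply in this thread points out that file-heat tracking shipped in GPFS 3.5.0.3, which speaks to the "hot file identification for policy-based HSM migration" wishlist item above. As a rough sketch only: the filesystem name gpfs0, the pool names 'system' and 'nearline', and the tuning values below are illustrative assumptions, not from this thread; the linked InfoCenter page is the authoritative reference.]

```shell
# Sketch: enable file-heat tracking, then use it to demote cold files.
# Heat decays by fileHeatLossPercent every fileHeatPeriodMinutes
# (here: 10% per day). Values are assumptions for illustration.
mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10

# Policy rule: when the fast pool passes 85% full, migrate files to the
# slower pool until it drops to 75%, coldest files first (lowest heat
# gets the highest weight, so it is chosen for migration first).
cat > /tmp/heat.pol <<'EOF'
RULE 'demote-cold' MIGRATE FROM POOL 'system'
     THRESHOLD(85,75) WEIGHT(0.0 - FILE_HEAT)
     TO POOL 'nearline'
EOF

# Dry-run the policy first to see what would move before applying it.
mmapplypolicy gpfs0 -P /tmp/heat.pol -I test
```

Treat this as a starting point: dry-run with -I test and check the candidate list before letting the policy move any data.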
URL: 

From ANDREWD at uk.ibm.com  Sat Nov  3 16:11:22 2012
From: ANDREWD at uk.ibm.com (Andrew Downes1)
Date: Sat, 3 Nov 2012 16:11:22 +0000
Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 05/11/2012)
Message-ID: 

I am out of the office until 05/11/2012.

In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527

In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940

Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 1" sent on 03/11/2012 12:00:02. This is the only notification you will receive while this person is away.

From orlando.richards at ed.ac.uk  Tue Nov  6 09:21:09 2012
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Tue, 06 Nov 2012 09:21:09 +0000
Subject: [gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated
In-Reply-To: 
References: 
Message-ID: <5098D685.2020007@ed.ac.uk>

Excellent stuff - thanks Dean!

On 02/11/12 19:57, Dean Hildebrand wrote:
> Hi Orlando,
>
> Thanks for all of your feedback, many great suggestions. Sorry for the
> late response, I've been trying to go through and digest all the
> comments from the user group meeting. I'll do my best to forward your
> suggestions internally.
>
> The one thing I wanted to comment on was that "hot file" identification
> was shipped in gpfs 3.5.0.3.
>
> Here is a link to the docs discussing it:
> http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20
>
> Dean Hildebrand
> Research Staff Member - Storage Systems
> IBM Almaden Research Center
>
> On 25/09/12 14:05, Jez Tucker wrote:
>> Hello all
>>
>> Firstly can I thank all who attended UG #6. We had a great turn
>> out and the opportunity to network with more people from IBM was most
>> welcome.
>>
>> I have uploaded the presentations from UG to this small, catchy URL:
>> http://goo.gl/n1in1
>> [Bar the SCCS presentation, awaiting clearance].
>>
>> Please have a read of the presentations.
>>
>> IBM Almaden Labs welcome your feedback regarding pNFS and Panache as
>> well as FRQs etc.
>>
>> For instance, one FRQ idea banded around was a GRIO/QoS
>> implementation for GPFS : E.G.: http://goo.gl/zjkN8
>> It would be most helpful if a couple of lines use-case was alongside
>> each of these.
>>
>> If these are messaged back to the list for healthy debate or sent to
>> me directly I'll put them on the UG website for Almaden Labs to
>> peruse/discuss with us.
>>
>> Regards,
>>
>> Jez
>>
>> p.s. I'll also start to solicit previous presentations for UG < #5,
>> so if you were a speaker, please get in touch.
>> ---
>
> Thanks for the great meeting Jez, and Claire et al at OCF.
>
> On feature requests, I think one desirable feature request discussed at
> the meeting was for "better" performance monitoring tools.
>
> A quick think through the things on my plate which would be eased with
> new/changed features in GPFS led me to this wishlist:
>
> - ability to change the designated NSD servers for an NSD without
> unmounting the filesystem everywhere
>
> - expansion of the AFM toolchain, including the following to assist
> with migration of data between filesystems:
>    - ability to set a pre-existing fileset as a "cache" of an empty
> 'home' fileset with AFM, allowing for a push of the data from the
> "cache" fileset/filesystem to the "home" target fileset/filesystem as a
> data migration strategy
>    - ability to remove an AFM relationship between filesets, preserving
> data in the 'cache' fileset (and making it, independently, a 'live' fileset)
>    - ability to "flip" the 'home'<->'cache' relationship, resulting in
> a flush from the new 'cache' fileset to the new 'home' fileset
>
> - better documentation (and, indeed, automation/automagic) on making
> best use of available memory within NSD servers
>
> - read caching of data blocks within an NSD server's memory (when
> acting in "server" mode in a multi-cluster environment where the client
> nodes do not have direct block access to the disks)
>
> - "hot file" identification tools/data for policy based HSM migration
>
> - some easy and non-invasive method for logging file and folder
> deletions (for the purposes of expiring backup data without using a
> separate database of files, in my case)
>
> - better licensing model (dare I say it - capacity based?)
>
> I'd love to be able to change the blocksize on an existing filesystem
> too, but I imagine that's not possible.
>
> --
> Orlando
>
> --
> --
> Dr Orlando Richards
> Information Services
> IT Infrastructure Division
> Unix Section
> Tel: 0131 650 4994
>
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

-- 
-- 
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.

From crobson at ocf.co.uk  Fri Nov 16 09:48:04 2012
From: crobson at ocf.co.uk (Claire Robson)
Date: Fri, 16 Nov 2012 09:48:04 +0000
Subject: [gpfsug-discuss] Meeting at MEW
Message-ID: 

Dear All,

There will be an informal GPFS User Group meeting/networking session taking place at this year's Machine Evaluation Workshop in Liverpool on Wednesday 28th November from 2-3:30pm. We will review what the group has achieved and discussed since its inception at MEW in 2010, and discuss future topics for the next formal meeting in Spring 2013. We will hopefully also have an update on GPFS announcements from this week's SC'12 in Salt Lake City, USA.

If you would like to attend, please register with me (email secretary at gpfsug.org or call 0114 257 2204) so I have an idea of numbers. Details for MEW can be found at https://eventbooking.stfc.ac.uk/news-events/mew23
Note: You must be registered for MEW23 in order to attend this session.

Many thanks,

Claire Robson
GPFS User Group Secretary
Tel: 0114 257 2200
Mob: 07508 033896
Fax: 0114 257 0022
Web: www.gpfsug.org

OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG

This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chris.milsted at uk.ibm.com  Fri Nov 16 16:08:13 2012
From: chris.milsted at uk.ibm.com (Chris Milsted5)
Date: Fri, 16 Nov 2012 16:08:13 +0000
Subject: [gpfsug-discuss] AUTO: Chris Milsted is out of the office (returning 19/11/2012)
Message-ID: 

I am out of the office until 19/11/2012.

I am out of the office attending SC12 with limited access to email. I will respond where possible, or SMS me on +44 7795 316 723 if urgent. Alternatively, if appropriate, I will respond to your email upon my return.

regards

Chris

Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 4" sent on 16/11/2012 12:00:01. This is the only notification you will receive while this person is away.
> > IBM Almaden Labs welcome your feedback regarding pNFS and Panache as well as FRQs etc. > > For instance, one FRQ idea banded around was a GRIO/QoS implementation for GPFS : E.G.: http://goo.gl/zjkN8 > It would be most helpful if a couple of lines use-case was alongside each of these. > > If these are messaged back to the list for healthy debate or sent to me directly I'll put them on the UG website for Almaden Labs to peruse/discuss with us. > > Regards, > > Jez > > p.s. I'll also start to solicit previous presentations for UG < #5, so if you were a speaker, please get in touch. > --- Thanks for the great meeting Jez, and Claire et al at OCF. On feature requests, I think one desirable feature request discussed at the meeting was for "better" performance monitoring tools. A quick think through the things on my plate which would be eased with new/changed features in GPFS led me to this wishlist: - ability to change the designated NSD servers for an NSD without unmounting the filesystem everywhere - expansion of the AFM toolchain, including the following to assist with migration of data between filesystems: - ability to set a pre-existing fileset as a "cache" of an empty 'home' fileset with AFM, allowing for a push of the data from the "cache" fileset/filesystem to the "home" target fileset/filesystem as a data migration strategy - ability to remove an AFM relationship between filesets, preserving data in the 'cache' fileset (and making it, independently, a 'live' fileset) - ability to "flip" the 'home'<->'cache' relationship, resulting in a flush from the new 'cache' fileset to the new 'home' fileset - better documentation (and, indeed, automation/automagic) on making best use of available memory within NSD servers - read caching of data blocks within an NSD server's memory (when acting in "server" mode in a multi-cluster environment where the client nodes do not have direct block access to the disks) - "hot file" identification tools/data for policy based HSM 
migration - some easy and non-invasive method for logging file and folder deletions (for the purposes of expiring backup data without using a separate database of files, in my case) - better licensing model (dare I say it - capacity based?) I'd love to be able to change the blocksize on an existing filesystem too, but I imagine that's not possible. -- Orlando -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Sat Nov 3 16:11:22 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Sat, 3 Nov 2012 16:11:22 +0000 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 05/11/2012) Message-ID: I am out of the office until 05/11/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 1" sent on 03/11/2012 12:00:02. This is the only notification you will receive while this person is away. From orlando.richards at ed.ac.uk Tue Nov 6 09:21:09 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 06 Nov 2012 09:21:09 +0000 Subject: [gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated In-Reply-To: References: Message-ID: <5098D685.2020007@ed.ac.uk> Excellent stuff - thanks Dean! On 02/11/12 19:57, Dean Hildebrand wrote: > Hi Orlando, > > Thanks for all of your feedback, many great suggestions. Sorry for the > late response, I've been trying to go through and digest all the > comments from the user group meeting. I'll do my best to forward your > suggestions internally. 
> > The one thing I wanted to comment on was that "hot file" identification > was shipped in gpfs 3.5.0.3. > > Here is a link to the docs discussing it: > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20 > > > Dean Hildebrand > Research Staff Member - Storage Systems > IBM Almaden Research Center > > On 25/09/12 14:05, Jez Tucker wrote: > >/ Hello all > />/ > />/ Firstly can I thank all who attended UG #6. We had a great turn > out and the opportunity to network with more people from IBM was most > welcome. > />/ > />/ I have uploaded the presentations from UG to this small, catchy URL: > //_http://goo.gl/n1in1_// > />/ [Bar the SCCS presentation, awaiting clearance]. > />/ > />/ Please have a read of the presentations. > />/ > />/ IBM Almaden Labs welcome your feedback regarding pNFS and Panache as > well as FRQs etc. > />/ > />/ For instance, one FRQ idea banded around was a GRIO/QoS > implementation for GPFS : E.G.: //_http://goo.gl/zjkN8_// > />/ It would be most helpful if a couple of lines use-case was alongside > each of these. > />/ > />/ If these are messaged back to the list for healthy debate or sent to > me directly I'll put them on the UG website for Almaden Labs to > peruse/discuss with us. > />/ > />/ Regards, > />/ > />/ Jez > />/ > />/ p.s. I'll also start to solicit previous presentations for UG < #5, > so if you were a speaker, please get in touch. > />/ --- > / > Thanks for the great meeting Jez, and Claire et al at OCF. > > On feature requests, I think one desirable feature request discussed at > the meeting was for "better" performance monitoring tools. 
> > A quick think through the things on my plate which would be eased with > new/changed features in GPFS led me to this wishlist: > > - ability to change the designated NSD servers for an NSD without > unmounting the filesystem everywhere > > - expansion of the AFM toolchain, including the following to assist > with migration of data between filesystems: > - ability to set a pre-existing fileset as a "cache" of an empty > 'home' fileset with AFM, allowing for a push of the data from the > "cache" fileset/filesystem to the "home" target fileset/filesystem as a > data migration strategy > - ability to remove an AFM relationship between filesets, preserving > data in the 'cache' fileset (and making it, independently, a 'live' fileset) > - ability to "flip" the 'home'<->'cache' relationship, resulting in > a flush from the new 'cache' fileset to the new 'home' fileset > > - better documentation (and, indeed, automation/automagic) on making > best use of available memory within NSD servers > > - read caching of data blocks within an NSD server's memory (when > acting in "server" mode in a multi-cluster environment where the client > nodes do not have direct block access to the disks) > > - "hot file" identification tools/data for policy based HSM migration > > - some easy and non-invasive method for logging file and folder > deletions (for the purposes of expiring backup data without using a > separate database of files, in my case) > > - better licensing model (dare I say it - capacity based?) > > > I'd love to be able to change the blocksize on an existing filesystem > too, but I imagine that's not possible. > > > -- > Orlando > > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
> > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From crobson at ocf.co.uk Fri Nov 16 09:48:04 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Fri, 16 Nov 2012 09:48:04 +0000 Subject: [gpfsug-discuss] Meeting at MEW Message-ID: Dear All, There will be an informal GPFS User Group meeting/networking session taking place at this year's Machine Evaluation Workshop in Liverpool on Wednesday 28th November from 2-3:30pm. We will be reviewing what the group has achieved and discussed since its inception at MEW in 2010 as well as discuss future topics for discussion at the next formal meeting in Spring 2013. We will hopefully also have an update on GPFS announcements from this week's SC'12 in Salt Lake City, USA. If you would like to attend, please register with me (email secretary at gpfsug.org or call 0114 257 2204) so I have an idea on numbers. Details for MEW can be found https://eventbooking.stfc.ac.uk/news-events/mew23 Note: You must be registered for MEW23 in order to attend this session. Many thanks, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 Web: www.gpfsug.org OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.milsted at uk.ibm.com Fri Nov 16 16:08:13 2012 From: chris.milsted at uk.ibm.com (Chris Milsted5) Date: Fri, 16 Nov 2012 16:08:13 +0000 Subject: [gpfsug-discuss] AUTO: Chris Milsted is out of the office (returning 19/11/2012) Message-ID: I am out of the office until 19/11/2012. I am out of the office attending SC12 with limited access to email. I will respond where possible or SMS me on +44 7795 316 723 if urgent. Alternatively, if appropriate, I will respond to your email upon my return. regards Chris Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 4" sent on 16/11/2012 12:00:01. This is the only notification you will receive while this person is away. From dhildeb at us.ibm.com Fri Nov 2 19:57:52 2012 From: dhildeb at us.ibm.com (Dean Hildebrand) Date: Fri, 2 Nov 2012 12:57:52 -0700 Subject: [gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated Message-ID: Hi Orlando, Thanks for all of your feedback, many great suggestions. Sorry for the late response, I've been trying to go through and digest all the comments from the user group meeting. I'll do my best to forward your suggestions internally. The one thing I wanted to comment on was that "hot file" identification was shipped in gpfs 3.5.0.3. Here is a link to the docs discussing it: http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20 Dean Hildebrand Research Staff Member - Storage Systems IBM Almaden Research Center On 25/09/12 14:05, Jez Tucker wrote: > Hello all > > Firstly can I thank all who attended UG #6. We had a great turn out and the opportunity to network with more people from IBM was most welcome. > > I have uploaded the presentations from UG to this small, catchy URL: http://goo.gl/n1in1 > [Bar the SCCS presentation, awaiting clearance]. > > Please have a read of the presentations. 
> > IBM Almaden Labs welcome your feedback regarding pNFS and Panache as well as FRQs etc. > > For instance, one FRQ idea banded around was a GRIO/QoS implementation for GPFS : E.G.: http://goo.gl/zjkN8 > It would be most helpful if a couple of lines use-case was alongside each of these. > > If these are messaged back to the list for healthy debate or sent to me directly I'll put them on the UG website for Almaden Labs to peruse/discuss with us. > > Regards, > > Jez > > p.s. I'll also start to solicit previous presentations for UG < #5, so if you were a speaker, please get in touch. > --- Thanks for the great meeting Jez, and Claire et al at OCF. On feature requests, I think one desirable feature request discussed at the meeting was for "better" performance monitoring tools. A quick think through the things on my plate which would be eased with new/changed features in GPFS led me to this wishlist: - ability to change the designated NSD servers for an NSD without unmounting the filesystem everywhere - expansion of the AFM toolchain, including the following to assist with migration of data between filesystems: - ability to set a pre-existing fileset as a "cache" of an empty 'home' fileset with AFM, allowing for a push of the data from the "cache" fileset/filesystem to the "home" target fileset/filesystem as a data migration strategy - ability to remove an AFM relationship between filesets, preserving data in the 'cache' fileset (and making it, independently, a 'live' fileset) - ability to "flip" the 'home'<->'cache' relationship, resulting in a flush from the new 'cache' fileset to the new 'home' fileset - better documentation (and, indeed, automation/automagic) on making best use of available memory within NSD servers - read caching of data blocks within an NSD server's memory (when acting in "server" mode in a multi-cluster environment where the client nodes do not have direct block access to the disks) - "hot file" identification tools/data for policy based HSM 
migration - some easy and non-invasive method for logging file and folder deletions (for the purposes of expiring backup data without using a separate database of files, in my case) - better licensing model (dare I say it - capacity based?) I'd love to be able to change the blocksize on an existing filesystem too, but I imagine that's not possible. -- Orlando -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ANDREWD at uk.ibm.com Sat Nov 3 16:11:22 2012 From: ANDREWD at uk.ibm.com (Andrew Downes1) Date: Sat, 3 Nov 2012 16:11:22 +0000 Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 05/11/2012) Message-ID: I am out of the office until 05/11/2012. In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527 In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940 Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 1" sent on 03/11/2012 12:00:02. This is the only notification you will receive while this person is away. From orlando.richards at ed.ac.uk Tue Nov 6 09:21:09 2012 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 06 Nov 2012 09:21:09 +0000 Subject: [gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated In-Reply-To: References: Message-ID: <5098D685.2020007@ed.ac.uk> Excellent stuff - thanks Dean! On 02/11/12 19:57, Dean Hildebrand wrote: > Hi Orlando, > > Thanks for all of your feedback, many great suggestions. Sorry for the > late response, I've been trying to go through and digest all the > comments from the user group meeting. I'll do my best to forward your > suggestions internally. 
> > The one thing I wanted to comment on was that "hot file" identification > was shipped in gpfs 3.5.0.3. > > Here is a link to the docs discussing it: > http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20 > > > Dean Hildebrand > Research Staff Member - Storage Systems > IBM Almaden Research Center > > On 25/09/12 14:05, Jez Tucker wrote: > >/ Hello all > />/ > />/ Firstly can I thank all who attended UG #6. We had a great turn > out and the opportunity to network with more people from IBM was most > welcome. > />/ > />/ I have uploaded the presentations from UG to this small, catchy URL: > //_http://goo.gl/n1in1_// > />/ [Bar the SCCS presentation, awaiting clearance]. > />/ > />/ Please have a read of the presentations. > />/ > />/ IBM Almaden Labs welcome your feedback regarding pNFS and Panache as > well as FRQs etc. > />/ > />/ For instance, one FRQ idea banded around was a GRIO/QoS > implementation for GPFS : E.G.: //_http://goo.gl/zjkN8_// > />/ It would be most helpful if a couple of lines use-case was alongside > each of these. > />/ > />/ If these are messaged back to the list for healthy debate or sent to > me directly I'll put them on the UG website for Almaden Labs to > peruse/discuss with us. > />/ > />/ Regards, > />/ > />/ Jez > />/ > />/ p.s. I'll also start to solicit previous presentations for UG < #5, > so if you were a speaker, please get in touch. > />/ --- > / > Thanks for the great meeting Jez, and Claire et al at OCF. > > On feature requests, I think one desirable feature request discussed at > the meeting was for "better" performance monitoring tools. 
> > A quick think through the things on my plate which would be eased with > new/changed features in GPFS led me to this wishlist: > > - ability to change the designated NSD servers for an NSD without > unmounting the filesystem everywhere > > - expansion of the AFM toolchain, including the following to assist > with migration of data between filesystems: > - ability to set a pre-existing fileset as a "cache" of an empty > 'home' fileset with AFM, allowing for a push of the data from the > "cache" fileset/filesystem to the "home" target fileset/filesystem as a > data migration strategy > - ability to remove an AFM relationship between filesets, preserving > data in the 'cache' fileset (and making it, independently, a 'live' fileset) > - ability to "flip" the 'home'<->'cache' relationship, resulting in > a flush from the new 'cache' fileset to the new 'home' fileset > > - better documentation (and, indeed, automation/automagic) on making > best use of available memory within NSD servers > > - read caching of data blocks within an NSD server's memory (when > acting in "server" mode in a multi-cluster environment where the client > nodes do not have direct block access to the disks) > > - "hot file" identification tools/data for policy based HSM migration > > - some easy and non-invasive method for logging file and folder > deletions (for the purposes of expiring backup data without using a > separate database of files, in my case) > > - better licensing model (dare I say it - capacity based?) > > > I'd love to be able to change the blocksize on an existing filesystem > too, but I imagine that's not possible. > > > -- > Orlando > > > > -- > -- > Dr Orlando Richards > Information Services > IT Infrastructure Division > Unix Section > Tel: 0131 650 4994 > > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. 
> > > > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- -- Dr Orlando Richards Information Services IT Infrastructure Division Unix Section Tel: 0131 650 4994 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From crobson at ocf.co.uk Fri Nov 16 09:48:04 2012 From: crobson at ocf.co.uk (Claire Robson) Date: Fri, 16 Nov 2012 09:48:04 +0000 Subject: [gpfsug-discuss] Meeting at MEW Message-ID: Dear All, There will be an informal GPFS User Group meeting/networking session taking place at this year's Machine Evaluation Workshop in Liverpool on Wednesday 28th November from 2-3:30pm. We will be reviewing what the group has achieved and discussed since its inception at MEW in 2010 as well as discuss future topics for discussion at the next formal meeting in Spring 2013. We will hopefully also have an update on GPFS announcements from this week's SC'12 in Salt Lake City, USA. If you would like to attend, please register with me (email secretary at gpfsug.org or call 0114 257 2204) so I have an idea on numbers. Details for MEW can be found https://eventbooking.stfc.ac.uk/news-events/mew23 Note: You must be registered for MEW23 in order to attend this session. Many thanks, Claire Robson GPFS User Group Secretary Tel: 0114 257 2200 Mob: 07508 033896 Fax: 0114 257 0022 Web: www.gpfsug.org OCF plc is a company registered in England and Wales. Registered number 4132533, VAT number GB 780 6803 14. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG This message is private and confidential. If you have received this message in error, please notify us immediately and remove it from your system. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From chris.milsted at uk.ibm.com  Fri Nov 16 16:08:13 2012
From: chris.milsted at uk.ibm.com (Chris Milsted5)
Date: Fri, 16 Nov 2012 16:08:13 +0000
Subject: [gpfsug-discuss] AUTO: Chris Milsted is out of the office (returning 19/11/2012)
Message-ID: 

I am out of the office until 19/11/2012.

I am attending SC12 with limited access to email. I will respond where possible, or SMS me on +44 7795 316 723 if urgent. Alternatively, if appropriate, I will respond to your email upon my return.

regards
Chris

Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 4" sent on 16/11/2012 12:00:01. This is the only notification you will receive while this person is away.

From ANDREWD at uk.ibm.com  Sat Nov 3 16:11:22 2012
From: ANDREWD at uk.ibm.com (Andrew Downes1)
Date: Sat, 3 Nov 2012 16:11:22 +0000
Subject: [gpfsug-discuss] AUTO: Andrew Downes is out of the office (returning 05/11/2012)
Message-ID: 

I am out of the office until 05/11/2012.

In my absence please contact Matt Ayres mailto:m_ayres at uk.ibm.com 07710-981527

In case of urgency, please contact our manager Andy Jenkins mailto:JENKINSA at uk.ibm.com 07921-108940

Note: This is an automated response to your message "gpfsug-discuss Digest, Vol 11, Issue 1" sent on 03/11/2012 12:00:02. This is the only notification you will receive while this person is away.

From orlando.richards at ed.ac.uk  Tue Nov 6 09:21:09 2012
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Tue, 06 Nov 2012 09:21:09 +0000
Subject: [gpfsug-discuss] Presentations from UG #6 - Feedback for IBM appreciated
In-Reply-To: 
References: 
Message-ID: <5098D685.2020007@ed.ac.uk>

Excellent stuff - thanks Dean!

On 02/11/12 19:57, Dean Hildebrand wrote:
> Hi Orlando,
>
> Thanks for all of your feedback, many great suggestions. Sorry for the
> late response, I've been trying to go through and digest all the
> comments from the user group meeting. I'll do my best to forward your
> suggestions internally.
>
> The one thing I wanted to comment on was that "hot file" identification
> was shipped in GPFS 3.5.0.3.
>
> Here is a link to the docs discussing it:
> http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=%2Fcom.ibm.cluster.gpfs.v3r50-3.gpfs200.doc%2Fbl1adv_userpool.htm&resultof=%22file%22%20%22heat%22%20
>
> Dean Hildebrand
> Research Staff Member - Storage Systems
> IBM Almaden Research Center
>
> On 25/09/12 14:05, Jez Tucker wrote:
> > Hello all
> >
> > Firstly can I thank all who attended UG #6. We had a great turnout,
> > and the opportunity to network with more people from IBM was most
> > welcome.
> >
> > I have uploaded the presentations from UG to this small, catchy URL:
> > http://goo.gl/n1in1
> > [Bar the SCCS presentation, awaiting clearance].
> >
> > Please have a read of the presentations.
> >
> > IBM Almaden Labs welcome your feedback regarding pNFS and Panache,
> > as well as FRQs etc.
> >
> > For instance, one FRQ idea bandied around was a GRIO/QoS
> > implementation for GPFS, e.g.: http://goo.gl/zjkN8
> > It would be most helpful if a couple of lines of use-case sat
> > alongside each of these.
> >
> > If these are messaged back to the list for healthy debate, or sent
> > to me directly, I'll put them on the UG website for Almaden Labs to
> > peruse/discuss with us.
> >
> > Regards,
> >
> > Jez
> >
> > p.s. I'll also start to solicit previous presentations for UG < #5,
> > so if you were a speaker, please get in touch.
> > ---
>
> Thanks for the great meeting Jez, and Claire et al at OCF.
>
> On feature requests, I think one desirable feature request discussed at
> the meeting was for "better" performance monitoring tools.
>
> A quick think through the things on my plate which would be eased with
> new/changed features in GPFS led me to this wishlist:
>
> - ability to change the designated NSD servers for an NSD without
> unmounting the filesystem everywhere
>
> - expansion of the AFM toolchain, including the following to assist
> with migration of data between filesystems:
>    - ability to set a pre-existing fileset as a "cache" of an empty
> 'home' fileset with AFM, allowing for a push of the data from the
> "cache" fileset/filesystem to the "home" target fileset/filesystem as a
> data migration strategy
>    - ability to remove an AFM relationship between filesets, preserving
> data in the 'cache' fileset (and making it, independently, a 'live' fileset)
>    - ability to "flip" the 'home'<->'cache' relationship, resulting in
> a flush from the new 'cache' fileset to the new 'home' fileset
>
> - better documentation (and, indeed, automation/automagic) on making
> best use of available memory within NSD servers
>
> - read caching of data blocks within an NSD server's memory (when
> acting in "server" mode in a multi-cluster environment where the client
> nodes do not have direct block access to the disks)
>
> - "hot file" identification tools/data for policy based HSM migration
>
> - some easy and non-invasive method for logging file and folder
> deletions (for the purposes of expiring backup data without using a
> separate database of files, in my case)
>
> - better licensing model (dare I say it - capacity based?)
>
> I'd love to be able to change the blocksize on an existing filesystem
> too, but I imagine that's not possible.
>
> --
> Orlando
>
> --
> --
> Dr Orlando Richards
> Information Services
> IT Infrastructure Division
> Unix Section
> Tel: 0131 650 4994
>
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

-- 
-- 
Dr Orlando Richards
Information Services
IT Infrastructure Division
Unix Section
Tel: 0131 650 4994

The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
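[Archive editor's note: Dean's pointer above concerns the FILE_HEAT attribute that shipped in GPFS 3.5.0.3, which speaks directly to the "'hot file' identification tools/data for policy based HSM migration" item in Orlando's wishlist. As a rough, untested sketch of how the two might be tied together: the pool names below ('fastpool', 'slowpool') are hypothetical placeholders, and the threshold and heat-tracking values are illustrative only, not recommendations. File-heat tracking is assumed to have been enabled cluster-wide first, e.g. with `mmchconfig fileHeatPeriodMinutes=1440,fileHeatLossPercent=10`.]

```
/* Hedged sketch of a GPFS ILM policy rule, not a tested configuration.
   When the hypothetical 'fastpool' passes 90% full, migrate files to
   'slowpool' until occupancy drops back to 75%, choosing the coldest
   files first by weighting on negated FILE_HEAT. */
RULE 'cooloff' MIGRATE FROM POOL 'fastpool'
     THRESHOLD(90,75)
     WEIGHT(-FILE_HEAT)
     TO POOL 'slowpool'
```

[Such a rule would typically be run against a filesystem with `mmapplypolicy <fsname> -P <policyfile>`; consult the GPFS 3.5 Advanced Administration Guide linked above before relying on any of this.]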