From pavel.pokorny at datera.cz Mon Feb 10 13:06:47 2014
From: pavel.pokorny at datera.cz (Pavel Pokorny)
Date: Mon, 10 Feb 2014 14:06:47 +0100
Subject: [gpfsug-discuss] Next release of GPFS - Native RAID?
Message-ID:

Hello to all of you,

My name is Pavel and I am from the DATERA company. We are an IBM business
partner using GPFS as the internal "brain" of the product solutions we offer
to our customers, and we also use it internally. I wanted to ask whether
there is an expected date for the next release of GPFS. Is GPFS Native RAID
going to be supported on general GPFS implementations, not just GSS and the
Power p775? Will Native RAID support the FPO extension / licenses?

Thank you very much.
Pavel

--
Ing. Pavel Pokorný
DATERA s.r.o. | Ovocný trh 580/2 | Praha | Czech Republic
www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz

From asgeir at twingine.no Tue Feb 11 17:23:18 2014
From: asgeir at twingine.no (Asgeir Storesund Nilsen)
Date: Tue, 11 Feb 2014 19:23:18 +0200
Subject: [gpfsug-discuss] Metadata block size
Message-ID:

Hi,

I want to create a file system with 16MB data blocks and 256k metadata
blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even
when upgrading GPFS later.

However, for a filesystem created with version 13.23 (3.5.0.7), if I specify
both data and metadata block sizes, the metadata block size applies to both.
If I do not specify a metadata block size, the data block size (-B) is used
for both.

This has a detrimental impact on our metadataOnly NSDs, as they fill up
pretty quickly.

Are any of you aware of updates / bugs in GPFS that might help explain and
alleviate this issue? Any hints would be appreciated.

Regards,
Asgeir

From orlando.richards at ed.ac.uk Tue Feb 11 22:34:49 2014
From: orlando.richards at ed.ac.uk (Orlando Richards)
Date: Tue, 11 Feb 2014 22:34:49 +0000
Subject: [gpfsug-discuss] Metadata block size
In-Reply-To:
References:
Message-ID: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk>

Hi Asgeir,

From memory, you need to have the data disks going in as a separate storage
pool to have the split block size - so metadata disks in the "system" pool
and data disks in, say, the "data" pool. Have you got that split here?

----
Orlando

Sent from my phone

> On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote:
>
> Hi,
>
> I want to create a file system with 16MB data blocks and 256k metadata
> blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine,
> even when upgrading GPFS later.
>
> However, for a filesystem created with version 13.23 (3.5.0.7), if I
> specify both data and metadata block sizes, the metadata block size
> applies to both. If I do not specify a metadata block size, the data
> block size (-B) is used for both.
>
> This has a detrimental impact on our metadataOnly NSDs, as they fill up
> pretty quickly.
>
> Are any of you aware of updates / bugs in GPFS that might help explain
> and alleviate this issue? Any hints would be appreciated.
>
> Regards,
> Asgeir
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

--
The University of Edinburgh is a charitable body, registered in Scotland,
with registration number SC005336.
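(For readers of the archive, a minimal sketch of the layout Orlando
describes: metadata-only NSDs in the system pool, data-only NSDs in a
separate pool, and a smaller metadata block size. The NSD, device, server
and filesystem names below are made-up placeholders, and the exact option
spellings should be checked against the mmcrnsd/mmcrfs documentation for
your GPFS release.)

  # Hypothetical NSD stanza file (nsd.stanza) - every name here is illustrative
  %nsd: nsd=meta_nsd1 device=/dev/dm-10 servers=nsd01,nsd02 usage=metadataOnly failureGroup=1 pool=system
  %nsd: nsd=meta_nsd2 device=/dev/dm-11 servers=nsd02,nsd01 usage=metadataOnly failureGroup=2 pool=system
  %nsd: nsd=data_nsd1 device=/dev/dm-20 servers=nsd01,nsd02 usage=dataOnly failureGroup=1 pool=data
  %nsd: nsd=data_nsd2 device=/dev/dm-21 servers=nsd02,nsd01 usage=dataOnly failureGroup=2 pool=data

  mmcrnsd -F nsd.stanza

  # With the system pool holding metadata only, -B sets the data block size
  # and --metadata-block-size sets the (smaller) system pool block size.
  mmcrfs gpfs1 -F nsd.stanza -B 16M --metadata-block-size 256K -T /gpfs/gpfs1

  # A placement policy is then needed so files land in the "data" pool, e.g.
  # a policy file containing:  RULE 'default' SET POOL 'data'
  mmchpolicy gpfs1 placement.pol

The key point is that the system pool ends up metadataOnly; as far as I can
tell, --metadata-block-size only takes effect in that case, which lines up
with the behaviour Asgeir saw when all the disks sat in the system pool.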
From asgeir at twingine.no Wed Feb 12 06:56:19 2014
From: asgeir at twingine.no (Asgeir Storesund Nilsen)
Date: Wed, 12 Feb 2014 08:56:19 +0200
Subject: [gpfsug-discuss] Metadata block size
In-Reply-To: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk>
References: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk>
Message-ID:

Orlando,

Thanks, that proved to be exactly the cause of my hiccups. I only realized
this after reading some more of the mmcrfs manual page and source.

But has GPFS' behavior on this actually changed between 3.5.0.0 and 3.5.0.7,
or has mmcrfs simply become stricter in enforcing what tscrfs actually does?

Asgeir

On Wed, Feb 12, 2014 at 12:34 AM, Orlando Richards wrote:
> Hi Asgeir,
>
> From memory, you need to have the data disks going in as a separate
> storage pool to have the split block size - so metadata disks in the
> "system" pool and data disks in, say, the "data" pool. Have you got that
> split here?
>
> ----
> Orlando
>
> Sent from my phone
>
>> On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote:
>>
>> Hi,
>>
>> I want to create a file system with 16MB data blocks and 256k metadata
>> blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine,
>> even when upgrading GPFS later.
>>
>> However, for a filesystem created with version 13.23 (3.5.0.7), if I
>> specify both data and metadata block sizes, the metadata block size
>> applies to both. If I do not specify a metadata block size, the data
>> block size (-B) is used for both.
>>
>> This has a detrimental impact on our metadataOnly NSDs, as they fill up
>> pretty quickly.
>>
>> Are any of you aware of updates / bugs in GPFS that might help explain
>> and alleviate this issue? Any hints would be appreciated.
>>
>> Regards,
>> Asgeir
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at gpfsug.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
> --
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

From secretary at gpfsug.org Wed Feb 12 11:29:33 2014
From: secretary at gpfsug.org (Secretary GPFS UG)
Date: Wed, 12 Feb 2014 11:29:33 +0000
Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014
Message-ID:

Dear members,

Come and join us for the 10th GPFS User Group.

Date: Tuesday 29th April 2014
Location: IBM Southbank Client Centre, London, UK

With technical presentations to include:
 - GPFS 4.1
 - Performance Tuning

Please register for a place via email to: secretary at gpfsug.org

Places are likely to be in high demand so register early!

Thanks,
Claire

GPFS User Group Secretary

From thomas.jones at ucl.ac.uk Wed Feb 12 16:43:45 2014
From: thomas.jones at ucl.ac.uk (Jones, Thomas)
Date: Wed, 12 Feb 2014 16:43:45 +0000
Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014
Message-ID:

Dear Claire,

I was wondering if there are still places for me to come to the 10th GPFS
User Group.

Regards,

Thomas Jones
Research Platforms Team Leader
Data Centre Services
Information Services Division
University College London
1st Floor The Podium
1 Eversholt Street
NW1 2DN

phone: +44 20 3108 9859
internal-phone: 59859
mobile: 07580144349
From frederik.ferner at diamond.ac.uk Wed Feb 12 17:10:33 2014
From: frederik.ferner at diamond.ac.uk (Frederik Ferner)
Date: Wed, 12 Feb 2014 17:10:33 +0000
Subject: [gpfsug-discuss] UG10 Registrations [was: GPFS User Group #10 April 29th 2014]
In-Reply-To:
References:
Message-ID: <52FBAB09.4050404@diamond.ac.uk>

Hi Claire,

I'd like to register for the 10th GPFS User Group.

Kind regards,
Frederik

On 12/02/14 11:29, Secretary GPFS UG wrote:
> Dear members,
>
> Come and join us for the 10th GPFS User Group.
>
> Date: Tuesday 29th April 2014
> Location: IBM Southbank Client Centre, London, UK
>
> With technical presentations to include:
>  - GPFS 4.1
>  - Performance Tuning
>
> Please register for a place via email to: secretary at gpfsug.org
>
> Places are likely to be in high demand so register early!
>
> Thanks,
> Claire
>
> GPFS User Group Secretary
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>

--
Frederik Ferner
Senior Computer Systems Administrator    phone: +44 1235 77 8624
Diamond Light Source Ltd.                mob:   +44 7917 08 5110
(Apologies in advance for the lines below. Some bits are a legal
requirement and I have no control over them.)

--
This e-mail and any attachments may contain confidential, copyright and or
privileged material, and are for the use of the intended addressee only. If
you are not the intended addressee or an authorised recipient of the
addressee please notify us of receipt by returning the e-mail and do not
use, copy, retain, distribute or disclose the information in or attached to
the e-mail. Any opinions expressed within this e-mail are those of the
individual and not necessarily of Diamond Light Source Ltd. Diamond Light
Source Ltd. cannot guarantee that this e-mail or any attachments are free
from viruses and we cannot accept liability for any damage which you may
sustain as a result of software viruses which may be transmitted in or with
the message. Diamond Light Source Limited (company no. 4375679). Registered
in England and Wales with its registered office at Diamond House, Harwell
Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom

From viccornell at gmail.com Tue Feb 25 14:12:46 2014
From: viccornell at gmail.com (Vic Cornell)
Date: Tue, 25 Feb 2014 15:12:46 +0100
Subject: [gpfsug-discuss] vfs_acl_xattr
Message-ID: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com>

Hi - is anyone using vfs_acl_xattr for Windows shares on GPFS? Can you say
how well it works?

Disclaimer: I work for DDN and will use the information to help a customer.

Thanks,
Vic

Vic Cornell
viccornell at gmail.com

From jonathan at buzzard.me.uk Tue Feb 25 14:23:09 2014
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Tue, 25 Feb 2014 14:23:09 +0000
Subject: [gpfsug-discuss] vfs_acl_xattr
In-Reply-To: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com>
References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com>
Message-ID: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk>

On Tue, 2014-02-25 at 15:12 +0100, Vic Cornell wrote:
> Hi - is anyone using vfs_acl_xattr for Windows shares on GPFS?
>

I doubt it. The normal thing to do is to use NFSv4 ACLs in combination with
vfs_gpfs. As this gives you 99% of what you might want and is well tested,
why are you considering vfs_acl_xattr?

JAB.

--
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.
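(A hedged sketch of the vfs_gpfs approach Jonathan mentions, for anyone
landing on this thread with the same question. The filesystem, share and
path names are placeholders, and the set of module options available varies
with the Samba build, so treat this as a starting point and check the
vfs_gpfs man page rather than copying it verbatim.)

  # GPFS side: the filesystem should be set to NFSv4 ACL semantics
  mmlsfs gpfs1 -k              # check the current ACL setting
  mmchfs gpfs1 -k nfs4         # or "-k all" if POSIX ACLs are still required

  # smb.conf fragment for the share (names are illustrative)
  [projects]
      path = /gpfs/gpfs1/projects
      vfs objects = gpfs
      nfs4:mode = special
      nfs4:chown = yes
      nfs4:acedup = merge
      gpfs:sharemodes = yes
      gpfs:leases = yes

With this in place Windows clients manipulate the NFSv4 ACLs on the files
directly through the gpfs module, rather than having Samba store NT ACL
blobs in extended attributes the way vfs_acl_xattr does.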
From viccornell at gmail.com Tue Feb 25 14:42:53 2014
From: viccornell at gmail.com (Vic Cornell)
Date: Tue, 25 Feb 2014 15:42:53 +0100
Subject: [gpfsug-discuss] vfs_acl_xattr
In-Reply-To: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk>
References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk>
Message-ID: <68CE4E15-9B6A-4773-A7AA-8F4977E64714@gmail.com>

I suspect ignorance - thanks for the pointer - I'll look at the differences.

Vic Cornell
viccornell at gmail.com

On 25 Feb 2014, at 15:23, Jonathan Buzzard wrote:
> vfs_gpfs

From mark.bergman at uphs.upenn.edu Tue Feb 25 20:17:07 2014
From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu)
Date: Tue, 25 Feb 2014 15:17:07 -0500
Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?)
Message-ID: <3616.1393359427@localhost>

I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seems to be an
excessive number of lowDiskSpace events on the "system" pool.

I've got an mmcallback set up, including a log report of which pool is
triggering the lowDiskSpace callback.

The part that is confusing me is that the "system" pool doesn't seem to be
above the policy thresholds.

For example, 'mmdf' shows that there is about 26% free in the 'system' pool:

-------------------------
disk                disk size  failure holds    holds            free             free
name                           group   metadata data     in full blocks     in fragments
--------------- ------------- -------- -------- ----- ------------------ ----------------
Disks in storage pool: system (Maximum disk size allowed is 33 TB)
dx80_rg16_vol1           546G      -1  yes      yes      125.1G ( 23%)     23.96G ( 4%)
dx80_rg4_vol1            546G       1  yes      yes      108.1G ( 20%)     33.84G ( 6%)
dx80_rg13_vol1           546G       1  yes      yes        109G ( 20%)     32.78G ( 6%)
dx80_rg6_vol1            546G       1  yes      yes      104.4G ( 19%)     35.61G ( 7%)
dx80_rg3_vol1            546G       1  yes      yes      105.6G ( 19%)     35.29G ( 6%)
                -------------                         ------------------ ----------------
(pool total)           2.666T                            552.1G ( 20%)     161.5G ( 6%)
-------------------------

The current policy has several rules related to the "system" pool:

-------------------------
RULE 'move large files (50MB+) in the system pool to dx80_medium'
  MIGRATE FROM POOL 'system' TO POOL 'dx80_medium'
  THRESHOLD(77,70)
  LIMIT(95)
  WEIGHT(KB_ALLOCATED)
  WHERE FILE_SIZE >= 52428800

/* highest threshold = least free space, move newest files greater than 1MB */
RULE 'move files that have not been changed in 3 days from the system pool to dx80_medium'
  MIGRATE FROM POOL 'system' TO POOL 'dx80_medium'
  THRESHOLD(76,70)
  LIMIT(95)
  WEIGHT(KB_ALLOCATED)
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 3 )
    AND KB_ALLOCATED >= 1024

/* next threshold: some free space, move middle-aged files */
RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium'
  MIGRATE FROM POOL 'system' TO POOL 'dx80_medium'
  THRESHOLD(75,65)
  LIMIT(95)
  WEIGHT(KB_ALLOCATED)
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 )
    AND KB_ALLOCATED >= 1024
-------------------------

As I understand it, none of those rules should trigger a lowDiskSpace event
when the pool is 74% full, as it is now.

Is the threshold in a file migration policy based on the %free (or used) in
full blocks only, or on the sum of full blocks plus fragments?
Thanks,

Mark

From jonathan at buzzard.me.uk Tue Feb 25 21:29:43 2014
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Tue, 25 Feb 2014 21:29:43 +0000
Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?)
In-Reply-To: <3616.1393359427@localhost>
References: <3616.1393359427@localhost>
Message-ID: <530D0B47.8060101@buzzard.me.uk>

On 25/02/14 20:17, mark.bergman at uphs.upenn.edu wrote:
>
> I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seems to be an
> excessive number of lowDiskSpace events on the "system" pool.
>
> I've got an mmcallback set up, including a log report of which pool is
> triggering the lowDiskSpace callback.

Bear in mind that once you hit a lowDiskSpace event your callback will
helpfully be called every two minutes until the condition is cleared. So
your callback needs locking, otherwise mmapplypolicy will go nuts if it
takes more than two minutes to clear the lowDiskSpace event.

>
> The part that is confusing me is that the "system" pool doesn't seem to be
> above the policy thresholds.
>
> For example, 'mmdf' shows that there is about 26% free in the 'system' pool:
>
> -------------------------
> disk                disk size  failure holds    holds            free             free
> name                           group   metadata data     in full blocks     in fragments
> --------------- ------------- -------- -------- ----- ------------------ ----------------
> Disks in storage pool: system (Maximum disk size allowed is 33 TB)
> dx80_rg16_vol1           546G      -1  yes      yes      125.1G ( 23%)     23.96G ( 4%)
> dx80_rg4_vol1            546G       1  yes      yes      108.1G ( 20%)     33.84G ( 6%)
> dx80_rg13_vol1           546G       1  yes      yes        109G ( 20%)     32.78G ( 6%)
> dx80_rg6_vol1            546G       1  yes      yes      104.4G ( 19%)     35.61G ( 7%)
> dx80_rg3_vol1            546G       1  yes      yes      105.6G ( 19%)     35.29G ( 6%)
>                 -------------                         ------------------ ----------------
> (pool total)           2.666T                            552.1G ( 20%)     161.5G ( 6%)
> -------------------------

Bear in mind these are rounded numbers. You cannot add the two percentages
together and get a completely accurate picture. Stands to reason if you
think about it.

[SNIP]

>
> /* next threshold: some free space, move middle-aged files */
> RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium'
>   MIGRATE FROM POOL 'system' TO POOL 'dx80_medium'
>   THRESHOLD(75,65)
>   LIMIT(95)
>   WEIGHT(KB_ALLOCATED)
>   WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 )
>     AND KB_ALLOCATED >= 1024
> -------------------------
>
> As I understand it, none of those rules should trigger a lowDiskSpace event
> when the pool is 74% full, as it is now.

I would say 74% and 75% are very close, and you are not taking into account
that the 20% and 6% are rounded values; adding them together gives a result
that is just slightly wrong, enough to trigger the lowDiskSpace event.

> Is the threshold in a file migration policy based on the %free (or used) in
> full blocks only, or on the sum of full blocks plus fragments?

What does mmdf without a --blocksize option, or with --blocksize 1K, look
like, and what does doing the accurate maths then reveal?

My guess is that you are that tiny bit fuller than you think due to rounding
errors, and then you are getting hit with the "call the callback every two
minutes until it clears" behaviour.

JAB.

--
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.
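(On the locking point above, a minimal sketch of a lowDiskSpace callback
that serialises itself with flock, so the two-minute re-fires do not stack
up overlapping mmapplypolicy runs. The script path, log file and the
mmaddcallback registration shown in the comments are illustrative
assumptions, not something taken from the thread; adjust them and the
mmapplypolicy options to your site.)

  #!/bin/bash
  # /usr/local/sbin/lowdiskspace-callback.sh  (hypothetical path)
  #
  # Registered with something along the lines of:
  #   mmaddcallback lowspace --command /usr/local/sbin/lowdiskspace-callback.sh \
  #       --event lowDiskSpace --parms "%eventName %fsName %storagePool"
  #
  # GPFS re-fires the event roughly every two minutes while the condition
  # holds, so take a non-blocking exclusive lock and exit quietly if a
  # migration for this filesystem is already running.

  EVENT="$1"
  FS="$2"
  POOL="$3"
  LOCK="/var/run/gpfs-lowdiskspace-${FS}.lock"
  LOG="/var/log/gpfs-lowdiskspace.log"

  exec 9>"$LOCK"
  if ! flock -n 9; then
      echo "$(date '+%F %T') $EVENT on $FS/$POOL - migration already running, skipping" >> "$LOG"
      exit 0
  fi

  echo "$(date '+%F %T') $EVENT on $FS/$POOL - starting mmapplypolicy" >> "$LOG"
  # Re-apply the installed policy; options such as -N, -g or an explicit -P
  # policy file would be site-specific.
  mmapplypolicy "$FS" -I yes >> "$LOG" 2>&1
  echo "$(date '+%F %T') finished mmapplypolicy for $FS" >> "$LOG"

The lock is held on file descriptor 9 for the life of the script, so only
one mmapplypolicy runs per filesystem and any later callback invocation
exits immediately until it finishes.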
Message-ID: Hello to all of you, my name is Pavel and I am from DATERA company. We are IBM business company using GPFS as a internal "brain" for our products solutions we are offering to the customers and using it for internal usage. I wanted to ask you whether there is any expecting term when will be released new version of GPFS? Is there going to be support for Native RAID for GPFS implementations, not just GSS and Power p775? Will the native RAID support FPO extension / licenses? Thank you very much. Pavel -- Ing. Pavel Pokorn? DATERA s.r.o. | Ovocn? trh 580/2 | Praha | Czech Republic www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz -------------- next part -------------- An HTML attachment was scrubbed... URL: From asgeir at twingine.no Tue Feb 11 17:23:18 2014 From: asgeir at twingine.no (Asgeir Storesund Nilsen) Date: Tue, 11 Feb 2014 19:23:18 +0200 Subject: [gpfsug-discuss] Metadata block size Message-ID: Hi, I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. Regards, Asgeir -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Tue Feb 11 22:34:49 2014 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 11 Feb 2014 22:34:49 +0000 Subject: [gpfsug-discuss] Metadata block size In-Reply-To: References: Message-ID: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> Hi Asgeir, >From memory, you need to have the data disks going in as a separate storage pool to have the split block size - so metadata disks in the "system" pool and data disks in , say, the "data" pool. Have you got that split here? ---- Orlando Sent from my phone > On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote: > > Hi, > > I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. > > However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. > > This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. > > > Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. > > Regards, > Asgeir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
From asgeir at twingine.no Wed Feb 12 06:56:19 2014 From: asgeir at twingine.no (Asgeir Storesund Nilsen) Date: Wed, 12 Feb 2014 08:56:19 +0200 Subject: [gpfsug-discuss] Metadata block size In-Reply-To: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> References: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> Message-ID: Orlando, Thanks, that proved to be exactly the cause of my hiccups. I only realized this after reading some more in the mmcrfs manual page and source. But has GPFS' behavior on this actually changed between 3.5.0.0 and 3.5.0.7, or is it only mmcrfs which has become more strict in enforcing what tscrfs actually does? Asgeir On Wed, Feb 12, 2014 at 12:34 AM, Orlando Richards wrote: > Hi Asgeir, > > From memory, you need to have the data disks going in as a separate storage pool to have the split block size - so metadata disks in the "system" pool and data disks in , say, the "data" pool. Have you got that split here? > > ---- > Orlando > > Sent from my phone > >> On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote: >> >> Hi, >> >> I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. >> >> However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. >> >> This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. >> >> >> Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. >> >> Regards, >> Asgeir >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From secretary at gpfsug.org Wed Feb 12 11:29:33 2014 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Wed, 12 Feb 2014 11:29:33 +0000 Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014 Message-ID: Dear members, Come and join us for the *10th GPFS User Group* Date: *Tuesday **29th April 2014* Location: IBM Southbank Client Centre, London, UK With technical presentations to include: *- GPFS 4.1* *- Performance Tuning* Please register for a place via email to: secretary at gpfsug.org Places are likely to be in high demand so register early! Thanks, Claire GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.jones at ucl.ac.uk Wed Feb 12 16:43:45 2014 From: thomas.jones at ucl.ac.uk (Jones, Thomas) Date: Wed, 12 Feb 2014 16:43:45 +0000 Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014 Message-ID: Dear Clare, I was wondering if there is still places for me to come to the 10th GPFS User group. Regards Thomas Jones Research Platforms Team Leader Data Centre Services Information Services Division University College London 1st Floor The Podium 1 Eversholt Street NW1 2DN phone: +44 20 3108 9859 internal-phone: 59859 mobile: 07580144349 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From frederik.ferner at diamond.ac.uk Wed Feb 12 17:10:33 2014 From: frederik.ferner at diamond.ac.uk (Frederik Ferner) Date: Wed, 12 Feb 2014 17:10:33 +0000 Subject: [gpfsug-discuss] UG10 Registrations [was: GPFS User Group #10 April 29th 2014] In-Reply-To: References: Message-ID: <52FBAB09.4050404@diamond.ac.uk> Hi Claire, I'd like to register for the 10th GPFS User Group. Kind regards, Frederik On 12/02/14 11:29, Secretary GPFS UG wrote: > Dear members, > > Come and join us for the *10th GPFS User Group* > > Date: *Tuesday **29th April 2014* > > Location: IBM Southbank Client Centre, London, UK > > > With technical presentations to include: > * > **- GPFS 4.1** > ** > **- Performance Tuning* > > Please register for a place via email to: secretary at gpfsug.org > > > Places are likely to be in high demand so register early! > > Thanks, > Claire > > GPFS User Group Secretary > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Frederik Ferner Senior Computer Systems Administrator phone: +44 1235 77 8624 Diamond Light Source Ltd. mob: +44 7917 08 5110 (Apologies in advance for the lines below. Some bits are a legal requirement and I have no control over them.) -- This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail. Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd. Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message. Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom From viccornell at gmail.com Tue Feb 25 14:12:46 2014 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 25 Feb 2014 15:12:46 +0100 Subject: [gpfsug-discuss] vfs_acl_xattr Message-ID: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> Hi - is anyone using vfs_acl_xattr for windows shares on GPFS? Can you say how well it works? Disclaimer: I work for DDN and will use the information to help a customer. Thanks, Vic Vic Cornell viccornell at gmail.com From jonathan at buzzard.me.uk Tue Feb 25 14:23:09 2014 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 25 Feb 2014 14:23:09 +0000 Subject: [gpfsug-discuss] vfs_acl_xattr In-Reply-To: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> Message-ID: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk> On Tue, 2014-02-25 at 15:12 +0100, Vic Cornell wrote: > Hi - is anyone using vfs_acl_xattr for windows shares on GPFS? > I doubt it. The normal thing to do is to use NFSv4 ACL's in combination with vfs_gpfs. As this gives you 99% of what you might want and is well tested why are you considering vfs_acl_xattr? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
From viccornell at gmail.com Tue Feb 25 14:42:53 2014 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 25 Feb 2014 15:42:53 +0100 Subject: [gpfsug-discuss] vfs_acl_xattr In-Reply-To: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk> References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk> Message-ID: <68CE4E15-9B6A-4773-A7AA-8F4977E64714@gmail.com> I suspect ignorance - thanks for the pointer - I'll look at the differences. Vic Cornell viccornell at gmail.com On 25 Feb 2014, at 15:23, Jonathan Buzzard wrote: > vfs_gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.bergman at uphs.upenn.edu Tue Feb 25 20:17:07 2014 From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu) Date: Tue, 25 Feb 2014 15:17:07 -0500 Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?) Message-ID: <3616.1393359427@localhost> I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seem to be an excessive number of lowDiskSpace events on the "system" pool. I've got an mmcallback set up, including a log report of which pool is triggering the lowDiskSpace callback. The part that is confusing me is that the "system" pool doesn't seem to be above the policy thresholds. For example, 'mmdf' shows that there is about 26% free in the 'system' pool: ------------------------- disk disk size failure holds holds free free name group metadata data in full blocks in fragments --------------- ------------- -------- -------- ----- -------------------- ------------------- Disks in storage pool: system (Maximum disk size allowed is 33 TB) dx80_rg16_vol1 546G -1 yes yes 125.1G ( 23%) 23.96G ( 4%) dx80_rg4_vol1 546G 1 yes yes 108.1G ( 20%) 33.84G ( 6%) dx80_rg13_vol1 546G 1 yes yes 109G ( 20%) 32.78G ( 6%) dx80_rg6_vol1 546G 1 yes yes 104.4G ( 19%) 35.61G ( 7%) dx80_rg3_vol1 546G 1 yes yes 105.6G ( 19%) 35.29G ( 6%) ------------- -------------------- ------------------- (pool total) 2.666T 552.1G ( 20%) 161.5G ( 6%) ------------------------- The current policy has several rules related to the "system" pool: ------------------------- RULE 'move large files (50MB+) in the system pool to dx80_medium' MIGRATE FROM POOL 'system' TO POOL 'dx80_medium' THRESHOLD(77,70) LIMIT(95) WEIGHT(KB_ALLOCATED) WHERE FILE_SIZE >= 52428800 /* highest threshold = least free space, move newest files greater than 1MB */ RULE 'move files that have not been changed in 3 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system' TO POOL 'dx80_medium' THRESHOLD(76,70) LIMIT(95) WEIGHT(KB_ALLOCATED) WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 3 ) AND KB_ALLOCATED >= 1024 /* next threshold: some free space, move middle-aged files */ RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system' TO POOL 'dx80_medium' THRESHOLD(75,65) LIMIT(95) WEIGHT(KB_ALLOCATED) WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 ) AND KB_ALLOCATED >= 1024 ------------------------- As I understand it, none of those rules should trigger a lowDiskSpace event when the pool is 74% full, as it is now. Is the threshold in a file migration policy based on the %free (or used) in full blocks only, or in the sum of full blocks plus fragments? 
Thanks, Mark From jonathan at buzzard.me.uk Tue Feb 25 21:29:43 2014 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 25 Feb 2014 21:29:43 +0000 Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?) In-Reply-To: <3616.1393359427@localhost> References: <3616.1393359427@localhost> Message-ID: <530D0B47.8060101@buzzard.me.uk> On 25/02/14 20:17, mark.bergman at uphs.upenn.edu wrote: > > I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seem to be an > excessive number of lowDiskSpace events on the "system" pool. > > I've got an mmcallback set up, including a log report of which pool is > triggering the lowDiskSpace callback. Bear in mind that once you hit a lowDiskSpace event your callback will helpfully be called every two minutes until the condition is cleared. So you callback needs to have locking otherwise the mmapplypolicy will go nuts if it takes more than two minutes to clear the lowDiskSpace event. > > The part that is confusing me is that the "system" pool doesn't seem to be > above the policy thresholds. > > For example, 'mmdf' shows that there is about 26% free in the 'system' pool: > > ------------------------- > disk disk size failure holds holds free free > name group metadata data in full blocks in fragments > --------------- ------------- -------- -------- ----- -------------------- > ------------------- > Disks in storage pool: system (Maximum disk size allowed is 33 TB) > dx80_rg16_vol1 546G -1 yes yes 125.1G ( 23%) 23.96G ( 4%) > dx80_rg4_vol1 546G 1 yes yes 108.1G ( 20%) 33.84G ( 6%) > dx80_rg13_vol1 546G 1 yes yes 109G ( 20%) 32.78G ( 6%) > dx80_rg6_vol1 546G 1 yes yes 104.4G ( 19%) 35.61G ( 7%) > dx80_rg3_vol1 546G 1 yes yes 105.6G ( 19%) 35.29G ( 6%) > ------------- -------------------- ------------------- > (pool total) 2.666T 552.1G ( 20%) 161.5G ( 6%) > ------------------------- Bear in mind these are round numbers. You cannot add the two percentages together and get a completely accurate picture. Stands to reason if you think about it. [SNIP] > > /* next threshold: some free space, move middle-aged files */ > RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system' > TO POOL 'dx80_medium' > THRESHOLD(75,65) > LIMIT(95) > WEIGHT(KB_ALLOCATED) > WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 ) > AND KB_ALLOCATED >= 1024 > ------------------------- > > > As I understand it, none of those rules should trigger a lowDiskSpace event > when the pool is 74% full, as it is now. I would say 74% and 75% are very close and you are not taking into account that the 20% and 6% are rounded values and adding them together gives a result that is sufficiently slightly wrong to trigger the lowDiskSpace event. > Is the threshold in a file migration policy based on the %free (or used) in > full blocks only, or in the sum of full blocks plus fragments? What does mmdf without a --blocksize option, or with --blocksize 1K look like, and what does doing the accurate maths then reveal? My guess is you are that tiny bit fuller than you thing due to rounding errors, then you are getting hit with the lets call the callback every two minutes till it clears. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From pavel.pokorny at datera.cz Mon Feb 10 13:06:47 2014 From: pavel.pokorny at datera.cz (Pavel Pokorny) Date: Mon, 10 Feb 2014 14:06:47 +0100 Subject: [gpfsug-discuss] Next release of GPFS - Native RAID? 
Message-ID: Hello to all of you, my name is Pavel and I am from DATERA company. We are IBM business company using GPFS as a internal "brain" for our products solutions we are offering to the customers and using it for internal usage. I wanted to ask you whether there is any expecting term when will be released new version of GPFS? Is there going to be support for Native RAID for GPFS implementations, not just GSS and Power p775? Will the native RAID support FPO extension / licenses? Thank you very much. Pavel -- Ing. Pavel Pokorn? DATERA s.r.o. | Ovocn? trh 580/2 | Praha | Czech Republic www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz -------------- next part -------------- An HTML attachment was scrubbed... URL: From asgeir at twingine.no Tue Feb 11 17:23:18 2014 From: asgeir at twingine.no (Asgeir Storesund Nilsen) Date: Tue, 11 Feb 2014 19:23:18 +0200 Subject: [gpfsug-discuss] Metadata block size Message-ID: Hi, I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. Regards, Asgeir -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Tue Feb 11 22:34:49 2014 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 11 Feb 2014 22:34:49 +0000 Subject: [gpfsug-discuss] Metadata block size In-Reply-To: References: Message-ID: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> Hi Asgeir, >From memory, you need to have the data disks going in as a separate storage pool to have the split block size - so metadata disks in the "system" pool and data disks in , say, the "data" pool. Have you got that split here? ---- Orlando Sent from my phone > On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote: > > Hi, > > I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. > > However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. > > This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. > > > Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. > > Regards, > Asgeir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
From asgeir at twingine.no Wed Feb 12 06:56:19 2014 From: asgeir at twingine.no (Asgeir Storesund Nilsen) Date: Wed, 12 Feb 2014 08:56:19 +0200 Subject: [gpfsug-discuss] Metadata block size In-Reply-To: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> References: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> Message-ID: Orlando, Thanks, that proved to be exactly the cause of my hiccups. I only realized this after reading some more in the mmcrfs manual page and source. But has GPFS' behavior on this actually changed between 3.5.0.0 and 3.5.0.7, or is it only mmcrfs which has become more strict in enforcing what tscrfs actually does? Asgeir On Wed, Feb 12, 2014 at 12:34 AM, Orlando Richards wrote: > Hi Asgeir, > > From memory, you need to have the data disks going in as a separate storage pool to have the split block size - so metadata disks in the "system" pool and data disks in , say, the "data" pool. Have you got that split here? > > ---- > Orlando > > Sent from my phone > >> On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote: >> >> Hi, >> >> I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. >> >> However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. >> >> This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. >> >> >> Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. >> >> Regards, >> Asgeir >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From secretary at gpfsug.org Wed Feb 12 11:29:33 2014 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Wed, 12 Feb 2014 11:29:33 +0000 Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014 Message-ID: Dear members, Come and join us for the *10th GPFS User Group* Date: *Tuesday **29th April 2014* Location: IBM Southbank Client Centre, London, UK With technical presentations to include: *- GPFS 4.1* *- Performance Tuning* Please register for a place via email to: secretary at gpfsug.org Places are likely to be in high demand so register early! Thanks, Claire GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.jones at ucl.ac.uk Wed Feb 12 16:43:45 2014 From: thomas.jones at ucl.ac.uk (Jones, Thomas) Date: Wed, 12 Feb 2014 16:43:45 +0000 Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014 Message-ID: Dear Clare, I was wondering if there is still places for me to come to the 10th GPFS User group. Regards Thomas Jones Research Platforms Team Leader Data Centre Services Information Services Division University College London 1st Floor The Podium 1 Eversholt Street NW1 2DN phone: +44 20 3108 9859 internal-phone: 59859 mobile: 07580144349 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From frederik.ferner at diamond.ac.uk Wed Feb 12 17:10:33 2014 From: frederik.ferner at diamond.ac.uk (Frederik Ferner) Date: Wed, 12 Feb 2014 17:10:33 +0000 Subject: [gpfsug-discuss] UG10 Registrations [was: GPFS User Group #10 April 29th 2014] In-Reply-To: References: Message-ID: <52FBAB09.4050404@diamond.ac.uk> Hi Claire, I'd like to register for the 10th GPFS User Group. Kind regards, Frederik On 12/02/14 11:29, Secretary GPFS UG wrote: > Dear members, > > Come and join us for the *10th GPFS User Group* > > Date: *Tuesday **29th April 2014* > > Location: IBM Southbank Client Centre, London, UK > > > With technical presentations to include: > * > **- GPFS 4.1** > ** > **- Performance Tuning* > > Please register for a place via email to: secretary at gpfsug.org > > > Places are likely to be in high demand so register early! > > Thanks, > Claire > > GPFS User Group Secretary > > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss > -- Frederik Ferner Senior Computer Systems Administrator phone: +44 1235 77 8624 Diamond Light Source Ltd. mob: +44 7917 08 5110 (Apologies in advance for the lines below. Some bits are a legal requirement and I have no control over them.) -- This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail. Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd. Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message. Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom From viccornell at gmail.com Tue Feb 25 14:12:46 2014 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 25 Feb 2014 15:12:46 +0100 Subject: [gpfsug-discuss] vfs_acl_xattr Message-ID: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> Hi - is anyone using vfs_acl_xattr for windows shares on GPFS? Can you say how well it works? Disclaimer: I work for DDN and will use the information to help a customer. Thanks, Vic Vic Cornell viccornell at gmail.com From jonathan at buzzard.me.uk Tue Feb 25 14:23:09 2014 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 25 Feb 2014 14:23:09 +0000 Subject: [gpfsug-discuss] vfs_acl_xattr In-Reply-To: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> Message-ID: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk> On Tue, 2014-02-25 at 15:12 +0100, Vic Cornell wrote: > Hi - is anyone using vfs_acl_xattr for windows shares on GPFS? > I doubt it. The normal thing to do is to use NFSv4 ACL's in combination with vfs_gpfs. As this gives you 99% of what you might want and is well tested why are you considering vfs_acl_xattr? JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. 
From viccornell at gmail.com Tue Feb 25 14:42:53 2014 From: viccornell at gmail.com (Vic Cornell) Date: Tue, 25 Feb 2014 15:42:53 +0100 Subject: [gpfsug-discuss] vfs_acl_xattr In-Reply-To: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk> References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk> Message-ID: <68CE4E15-9B6A-4773-A7AA-8F4977E64714@gmail.com> I suspect ignorance - thanks for the pointer - I'll look at the differences. Vic Cornell viccornell at gmail.com On 25 Feb 2014, at 15:23, Jonathan Buzzard wrote: > vfs_gpfs -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.bergman at uphs.upenn.edu Tue Feb 25 20:17:07 2014 From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu) Date: Tue, 25 Feb 2014 15:17:07 -0500 Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?) Message-ID: <3616.1393359427@localhost> I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seem to be an excessive number of lowDiskSpace events on the "system" pool. I've got an mmcallback set up, including a log report of which pool is triggering the lowDiskSpace callback. The part that is confusing me is that the "system" pool doesn't seem to be above the policy thresholds. For example, 'mmdf' shows that there is about 26% free in the 'system' pool: ------------------------- disk disk size failure holds holds free free name group metadata data in full blocks in fragments --------------- ------------- -------- -------- ----- -------------------- ------------------- Disks in storage pool: system (Maximum disk size allowed is 33 TB) dx80_rg16_vol1 546G -1 yes yes 125.1G ( 23%) 23.96G ( 4%) dx80_rg4_vol1 546G 1 yes yes 108.1G ( 20%) 33.84G ( 6%) dx80_rg13_vol1 546G 1 yes yes 109G ( 20%) 32.78G ( 6%) dx80_rg6_vol1 546G 1 yes yes 104.4G ( 19%) 35.61G ( 7%) dx80_rg3_vol1 546G 1 yes yes 105.6G ( 19%) 35.29G ( 6%) ------------- -------------------- ------------------- (pool total) 2.666T 552.1G ( 20%) 161.5G ( 6%) ------------------------- The current policy has several rules related to the "system" pool: ------------------------- RULE 'move large files (50MB+) in the system pool to dx80_medium' MIGRATE FROM POOL 'system' TO POOL 'dx80_medium' THRESHOLD(77,70) LIMIT(95) WEIGHT(KB_ALLOCATED) WHERE FILE_SIZE >= 52428800 /* highest threshold = least free space, move newest files greater than 1MB */ RULE 'move files that have not been changed in 3 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system' TO POOL 'dx80_medium' THRESHOLD(76,70) LIMIT(95) WEIGHT(KB_ALLOCATED) WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 3 ) AND KB_ALLOCATED >= 1024 /* next threshold: some free space, move middle-aged files */ RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system' TO POOL 'dx80_medium' THRESHOLD(75,65) LIMIT(95) WEIGHT(KB_ALLOCATED) WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 ) AND KB_ALLOCATED >= 1024 ------------------------- As I understand it, none of those rules should trigger a lowDiskSpace event when the pool is 74% full, as it is now. Is the threshold in a file migration policy based on the %free (or used) in full blocks only, or in the sum of full blocks plus fragments? 
Thanks, Mark From jonathan at buzzard.me.uk Tue Feb 25 21:29:43 2014 From: jonathan at buzzard.me.uk (Jonathan Buzzard) Date: Tue, 25 Feb 2014 21:29:43 +0000 Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?) In-Reply-To: <3616.1393359427@localhost> References: <3616.1393359427@localhost> Message-ID: <530D0B47.8060101@buzzard.me.uk> On 25/02/14 20:17, mark.bergman at uphs.upenn.edu wrote: > > I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seem to be an > excessive number of lowDiskSpace events on the "system" pool. > > I've got an mmcallback set up, including a log report of which pool is > triggering the lowDiskSpace callback. Bear in mind that once you hit a lowDiskSpace event your callback will helpfully be called every two minutes until the condition is cleared. So you callback needs to have locking otherwise the mmapplypolicy will go nuts if it takes more than two minutes to clear the lowDiskSpace event. > > The part that is confusing me is that the "system" pool doesn't seem to be > above the policy thresholds. > > For example, 'mmdf' shows that there is about 26% free in the 'system' pool: > > ------------------------- > disk disk size failure holds holds free free > name group metadata data in full blocks in fragments > --------------- ------------- -------- -------- ----- -------------------- > ------------------- > Disks in storage pool: system (Maximum disk size allowed is 33 TB) > dx80_rg16_vol1 546G -1 yes yes 125.1G ( 23%) 23.96G ( 4%) > dx80_rg4_vol1 546G 1 yes yes 108.1G ( 20%) 33.84G ( 6%) > dx80_rg13_vol1 546G 1 yes yes 109G ( 20%) 32.78G ( 6%) > dx80_rg6_vol1 546G 1 yes yes 104.4G ( 19%) 35.61G ( 7%) > dx80_rg3_vol1 546G 1 yes yes 105.6G ( 19%) 35.29G ( 6%) > ------------- -------------------- ------------------- > (pool total) 2.666T 552.1G ( 20%) 161.5G ( 6%) > ------------------------- Bear in mind these are round numbers. You cannot add the two percentages together and get a completely accurate picture. Stands to reason if you think about it. [SNIP] > > /* next threshold: some free space, move middle-aged files */ > RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system' > TO POOL 'dx80_medium' > THRESHOLD(75,65) > LIMIT(95) > WEIGHT(KB_ALLOCATED) > WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 ) > AND KB_ALLOCATED >= 1024 > ------------------------- > > > As I understand it, none of those rules should trigger a lowDiskSpace event > when the pool is 74% full, as it is now. I would say 74% and 75% are very close and you are not taking into account that the 20% and 6% are rounded values and adding them together gives a result that is sufficiently slightly wrong to trigger the lowDiskSpace event. > Is the threshold in a file migration policy based on the %free (or used) in > full blocks only, or in the sum of full blocks plus fragments? What does mmdf without a --blocksize option, or with --blocksize 1K look like, and what does doing the accurate maths then reveal? My guess is you are that tiny bit fuller than you thing due to rounding errors, then you are getting hit with the lets call the callback every two minutes till it clears. JAB. -- Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk Fife, United Kingdom. From pavel.pokorny at datera.cz Mon Feb 10 13:06:47 2014 From: pavel.pokorny at datera.cz (Pavel Pokorny) Date: Mon, 10 Feb 2014 14:06:47 +0100 Subject: [gpfsug-discuss] Next release of GPFS - Native RAID? 
Message-ID: Hello to all of you, my name is Pavel and I am from DATERA company. We are IBM business company using GPFS as a internal "brain" for our products solutions we are offering to the customers and using it for internal usage. I wanted to ask you whether there is any expecting term when will be released new version of GPFS? Is there going to be support for Native RAID for GPFS implementations, not just GSS and Power p775? Will the native RAID support FPO extension / licenses? Thank you very much. Pavel -- Ing. Pavel Pokorn? DATERA s.r.o. | Ovocn? trh 580/2 | Praha | Czech Republic www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz -------------- next part -------------- An HTML attachment was scrubbed... URL: From asgeir at twingine.no Tue Feb 11 17:23:18 2014 From: asgeir at twingine.no (Asgeir Storesund Nilsen) Date: Tue, 11 Feb 2014 19:23:18 +0200 Subject: [gpfsug-discuss] Metadata block size Message-ID: Hi, I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. Regards, Asgeir -------------- next part -------------- An HTML attachment was scrubbed... URL: From orlando.richards at ed.ac.uk Tue Feb 11 22:34:49 2014 From: orlando.richards at ed.ac.uk (Orlando Richards) Date: Tue, 11 Feb 2014 22:34:49 +0000 Subject: [gpfsug-discuss] Metadata block size In-Reply-To: References: Message-ID: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> Hi Asgeir, >From memory, you need to have the data disks going in as a separate storage pool to have the split block size - so metadata disks in the "system" pool and data disks in , say, the "data" pool. Have you got that split here? ---- Orlando Sent from my phone > On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote: > > Hi, > > I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. > > However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. > > This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. > > > Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. > > Regards, > Asgeir > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss -- The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. 
From asgeir at twingine.no Wed Feb 12 06:56:19 2014 From: asgeir at twingine.no (Asgeir Storesund Nilsen) Date: Wed, 12 Feb 2014 08:56:19 +0200 Subject: [gpfsug-discuss] Metadata block size In-Reply-To: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> References: <82C61695-A281-4BEC-AE07-5EC26F072165@ed.ac.uk> Message-ID: Orlando, Thanks, that proved to be exactly the cause of my hiccups. I only realized this after reading some more in the mmcrfs manual page and source. But has GPFS' behavior on this actually changed between 3.5.0.0 and 3.5.0.7, or is it only mmcrfs which has become more strict in enforcing what tscrfs actually does? Asgeir On Wed, Feb 12, 2014 at 12:34 AM, Orlando Richards wrote: > Hi Asgeir, > > From memory, you need to have the data disks going in as a separate storage pool to have the split block size - so metadata disks in the "system" pool and data disks in , say, the "data" pool. Have you got that split here? > > ---- > Orlando > > Sent from my phone > >> On 11 Feb 2014, at 17:23, Asgeir Storesund Nilsen wrote: >> >> Hi, >> >> I want to create a file system with 16MB for data blocks and 256k for metadata blocks. Under filesystem version 13.01 (3.5.0.0) this worked just fine, even when upgrading GPFS later. >> >> However, for a filesystem created with version 13.23 (3.5.0.7), if I specify both data and metadata block sizes, the metadata block size applies for both. If I do not specify metadata block size, the data block size (-B) is used for both. >> >> This has a detrimental impact on our metadataOnly NSDs, as they fill up pretty quickly. >> >> >> Are any of you aware of updates / bugs in GPFS that might help explain and alleviate this issue? Any hints would be appreciated. >> >> Regards, >> Asgeir >> _______________________________________________ >> gpfsug-discuss mailing list >> gpfsug-discuss at gpfsug.org >> http://gpfsug.org/mailman/listinfo/gpfsug-discuss > > -- > The University of Edinburgh is a charitable body, registered in > Scotland, with registration number SC005336. > > _______________________________________________ > gpfsug-discuss mailing list > gpfsug-discuss at gpfsug.org > http://gpfsug.org/mailman/listinfo/gpfsug-discuss From secretary at gpfsug.org Wed Feb 12 11:29:33 2014 From: secretary at gpfsug.org (Secretary GPFS UG) Date: Wed, 12 Feb 2014 11:29:33 +0000 Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014 Message-ID: Dear members, Come and join us for the *10th GPFS User Group* Date: *Tuesday **29th April 2014* Location: IBM Southbank Client Centre, London, UK With technical presentations to include: *- GPFS 4.1* *- Performance Tuning* Please register for a place via email to: secretary at gpfsug.org Places are likely to be in high demand so register early! Thanks, Claire GPFS User Group Secretary -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.jones at ucl.ac.uk Wed Feb 12 16:43:45 2014 From: thomas.jones at ucl.ac.uk (Jones, Thomas) Date: Wed, 12 Feb 2014 16:43:45 +0000 Subject: [gpfsug-discuss] GPFS User Group #10 April 29th 2014 Message-ID: Dear Clare, I was wondering if there is still places for me to come to the 10th GPFS User group. Regards Thomas Jones Research Platforms Team Leader Data Centre Services Information Services Division University College London 1st Floor The Podium 1 Eversholt Street NW1 2DN phone: +44 20 3108 9859 internal-phone: 59859 mobile: 07580144349 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From frederik.ferner at diamond.ac.uk  Wed Feb 12 17:10:33 2014
From: frederik.ferner at diamond.ac.uk (Frederik Ferner)
Date: Wed, 12 Feb 2014 17:10:33 +0000
Subject: [gpfsug-discuss] UG10 Registrations [was: GPFS User Group #10 April 29th 2014]
In-Reply-To: 
References: 
Message-ID: <52FBAB09.4050404@diamond.ac.uk>

Hi Claire,

I'd like to register for the 10th GPFS User Group.

Kind regards,
Frederik

On 12/02/14 11:29, Secretary GPFS UG wrote:
> Dear members,
>
> Come and join us for the *10th GPFS User Group*
>
> Date: *Tuesday 29th April 2014*
>
> Location: IBM Southbank Client Centre, London, UK
>
> With technical presentations to include:
>
> - GPFS 4.1
> - Performance Tuning
>
> Please register for a place via email to: secretary at gpfsug.org
>
> Places are likely to be in high demand so register early!
>
> Thanks,
> Claire
>
> GPFS User Group Secretary
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- 
Frederik Ferner
Senior Computer Systems Administrator    phone: +44 1235 77 8624
Diamond Light Source Ltd.                mob:   +44 7917 08 5110
(Apologies in advance for the lines below. Some bits are a legal
requirement and I have no control over them.)

-- 
This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom

From viccornell at gmail.com  Tue Feb 25 14:12:46 2014
From: viccornell at gmail.com (Vic Cornell)
Date: Tue, 25 Feb 2014 15:12:46 +0100
Subject: [gpfsug-discuss] vfs_acl_xattr
Message-ID: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com>

Hi - is anyone using vfs_acl_xattr for Windows shares on GPFS? Can you say how well it works?

Disclaimer: I work for DDN and will use the information to help a customer.

Thanks,

Vic

Vic Cornell
viccornell at gmail.com

From jonathan at buzzard.me.uk  Tue Feb 25 14:23:09 2014
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Tue, 25 Feb 2014 14:23:09 +0000
Subject: [gpfsug-discuss] vfs_acl_xattr
In-Reply-To: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com>
References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com>
Message-ID: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk>

On Tue, 2014-02-25 at 15:12 +0100, Vic Cornell wrote:
> Hi - is anyone using vfs_acl_xattr for Windows shares on GPFS?

I doubt it. The normal thing to do is to use NFSv4 ACLs in combination
with vfs_gpfs. As this gives you 99% of what you might want and is well
tested, why are you considering vfs_acl_xattr?

JAB.

-- 
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.
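A vfs_gpfs share of the kind Jonathan describes might look roughly like the sketch below. This is an illustrative sketch only, not a tested configuration: the share name and path are hypothetical, and it assumes the filesystem has been given NFSv4 ACL semantics (for example with mmchfs <device> -k nfs4) so that Windows security descriptors map onto GPFS NFSv4 ACLs.

-------------------------
[share]
    path = /gpfs/gpfs0/share
    read only = no
    # Samba's GPFS VFS module, which handles the NFSv4 ACL mapping
    vfs objects = gpfs
    # let GPFS arbitrate share modes and leases across the cluster
    gpfs:sharemodes = yes
    gpfs:leases = yes
    # store and evaluate permissions as GPFS NFSv4 ACLs
    nfs4:mode = special
    nfs4:chown = yes
    nfs4:acedup = merge
-------------------------

The parameter names above are taken from the vfs_gpfs manual page as far as I know; check them against the documentation for the Samba version actually in use before relying on them.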
From viccornell at gmail.com  Tue Feb 25 14:42:53 2014
From: viccornell at gmail.com (Vic Cornell)
Date: Tue, 25 Feb 2014 15:42:53 +0100
Subject: [gpfsug-discuss] vfs_acl_xattr
In-Reply-To: <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk>
References: <9862894E-7EEF-46F6-8617-88647CC04751@gmail.com> <1393338189.25882.19.camel@buzzard.phy.strath.ac.uk>
Message-ID: <68CE4E15-9B6A-4773-A7AA-8F4977E64714@gmail.com>

I suspect ignorance - thanks for the pointer - I'll look at the differences.

Vic Cornell
viccornell at gmail.com

On 25 Feb 2014, at 15:23, Jonathan Buzzard wrote:
> vfs_gpfs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mark.bergman at uphs.upenn.edu  Tue Feb 25 20:17:07 2014
From: mark.bergman at uphs.upenn.edu (mark.bergman at uphs.upenn.edu)
Date: Tue, 25 Feb 2014 15:17:07 -0500
Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?)
Message-ID: <3616.1393359427@localhost>

I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seems to be an
excessive number of lowDiskSpace events on the "system" pool.

I've got an mmcallback set up, including a log report of which pool is
triggering the lowDiskSpace callback.

The part that is confusing me is that the "system" pool doesn't seem to be
above the policy thresholds.

For example, 'mmdf' shows that there is about 26% free in the 'system' pool:

-------------------------
disk                disk size  failure holds    holds                 free                free
name                              group metadata data       in full blocks        in fragments
--------------- ------------- -------- -------- ----- -------------------- -------------------
Disks in storage pool: system (Maximum disk size allowed is 33 TB)
dx80_rg16_vol1           546G       -1 yes      yes         125.1G ( 23%)        23.96G ( 4%)
dx80_rg4_vol1            546G        1 yes      yes         108.1G ( 20%)        33.84G ( 6%)
dx80_rg13_vol1           546G        1 yes      yes           109G ( 20%)        32.78G ( 6%)
dx80_rg6_vol1            546G        1 yes      yes         104.4G ( 19%)        35.61G ( 7%)
dx80_rg3_vol1            546G        1 yes      yes         105.6G ( 19%)        35.29G ( 6%)
                -------------                         -------------------- -------------------
(pool total)           2.666T                               552.1G ( 20%)        161.5G ( 6%)
-------------------------

The current policy has several rules related to the "system" pool:

-------------------------
RULE 'move large files (50MB+) in the system pool to dx80_medium' MIGRATE FROM POOL 'system'
    TO POOL 'dx80_medium'
    THRESHOLD(77,70)
    LIMIT(95)
    WEIGHT(KB_ALLOCATED)
    WHERE FILE_SIZE >= 52428800

/* highest threshold = least free space, move newest files greater than 1MB */
RULE 'move files that have not been changed in 3 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system'
    TO POOL 'dx80_medium'
    THRESHOLD(76,70)
    LIMIT(95)
    WEIGHT(KB_ALLOCATED)
    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 3 )
    AND KB_ALLOCATED >= 1024

/* next threshold: some free space, move middle-aged files */
RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system'
    TO POOL 'dx80_medium'
    THRESHOLD(75,65)
    LIMIT(95)
    WEIGHT(KB_ALLOCATED)
    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 )
    AND KB_ALLOCATED >= 1024
-------------------------

As I understand it, none of those rules should trigger a lowDiskSpace event
when the pool is 74% full, as it is now.

Is the threshold in a file migration policy based on the %free (or used) in
full blocks only, or in the sum of full blocks plus fragments?
Thanks,

Mark

From jonathan at buzzard.me.uk  Tue Feb 25 21:29:43 2014
From: jonathan at buzzard.me.uk (Jonathan Buzzard)
Date: Tue, 25 Feb 2014 21:29:43 +0000
Subject: [gpfsug-discuss] excessive lowDiskSpace events (how is threshold triggered?)
In-Reply-To: <3616.1393359427@localhost>
References: <3616.1393359427@localhost>
Message-ID: <530D0B47.8060101@buzzard.me.uk>

On 25/02/14 20:17, mark.bergman at uphs.upenn.edu wrote:
>
> I'm running GPFS 3.5.0.9 under Linux, and I'm seeing what seems to be an
> excessive number of lowDiskSpace events on the "system" pool.
>
> I've got an mmcallback set up, including a log report of which pool is
> triggering the lowDiskSpace callback.

Bear in mind that once you hit a lowDiskSpace event your callback will
helpfully be called every two minutes until the condition is cleared. So
your callback needs to have locking, otherwise mmapplypolicy will go nuts
if it takes more than two minutes to clear the lowDiskSpace event.

>
> The part that is confusing me is that the "system" pool doesn't seem to be
> above the policy thresholds.
>
> For example, 'mmdf' shows that there is about 26% free in the 'system' pool:
>
> -------------------------
> disk                disk size  failure holds    holds                 free                free
> name                              group metadata data       in full blocks        in fragments
> --------------- ------------- -------- -------- ----- -------------------- -------------------
> Disks in storage pool: system (Maximum disk size allowed is 33 TB)
> dx80_rg16_vol1           546G       -1 yes      yes         125.1G ( 23%)        23.96G ( 4%)
> dx80_rg4_vol1            546G        1 yes      yes         108.1G ( 20%)        33.84G ( 6%)
> dx80_rg13_vol1           546G        1 yes      yes           109G ( 20%)        32.78G ( 6%)
> dx80_rg6_vol1            546G        1 yes      yes         104.4G ( 19%)        35.61G ( 7%)
> dx80_rg3_vol1            546G        1 yes      yes         105.6G ( 19%)        35.29G ( 6%)
>                 -------------                         -------------------- -------------------
> (pool total)           2.666T                               552.1G ( 20%)        161.5G ( 6%)
> -------------------------

Bear in mind these are round numbers. You cannot add the two percentages
together and get a completely accurate picture. Stands to reason if you
think about it.

[SNIP]

>
> /* next threshold: some free space, move middle-aged files */
> RULE 'move files that have not been changed in 7 days from the system pool to dx80_medium' MIGRATE FROM POOL 'system'
>     TO POOL 'dx80_medium'
>     THRESHOLD(75,65)
>     LIMIT(95)
>     WEIGHT(KB_ALLOCATED)
>     WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(CHANGE_TIME) > 7 )
>     AND KB_ALLOCATED >= 1024
> -------------------------
>
> As I understand it, none of those rules should trigger a lowDiskSpace event
> when the pool is 74% full, as it is now.

I would say 74% and 75% are very close, and you are not taking into account
that the 20% and 6% are rounded values; adding them together gives a result
that is slightly off, and that can be enough to trigger the lowDiskSpace
event.

> Is the threshold in a file migration policy based on the %free (or used) in
> full blocks only, or in the sum of full blocks plus fragments?

What does mmdf without a --blocksize option, or with --blocksize 1K, look
like, and what does doing the accurate maths then reveal?

My guess is you are that tiny bit fuller than you think due to rounding
errors, and then you are getting hit with the "call the callback every two
minutes until it clears" behaviour.

JAB.

-- 
Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.
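A locking wrapper of the kind Jonathan describes can be as small as the sketch below. It is only an illustration, not a tested script: the script path, policy file, lock file and the mmaddcallback registration shown in the comment (including the assumption that %fsName and %storagePool are passed for the lowDiskSpace event) are all site-specific assumptions to adapt.

-------------------------
#!/bin/bash
# Hypothetical lowDiskSpace callback, registered along the lines of:
#   mmaddcallback lowspaceMigrate --command /usr/local/sbin/lowspace.sh \
#       --event lowDiskSpace --parms "%fsName %storagePool"
fs="$1"
pool="$2"

# GPFS re-fires lowDiskSpace every two minutes while the condition persists,
# so take a non-blocking lock and bail out if a migration is already running.
exec 9> "/var/run/lowspace-${fs}.lock"
flock -n 9 || exit 0

logger -t lowspace "lowDiskSpace on ${fs}, pool ${pool}; starting mmapplypolicy"
/usr/lpp/mmfs/bin/mmapplypolicy "$fs" -P /var/mmfs/etc/migrate.policy
-------------------------

Combined with Jonathan's suggestion of checking the exact figures with mmdf <device> --blocksize 1K, that should show whether the pool really is a fraction over the 75% threshold rather than the 74% the rounded output suggests.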