[gpfsug-discuss] Metadata only system pool

Frederick Stock stockf at us.ibm.com
Tue Jan 23 18:18:43 GMT 2018


You are correct about mmchfs: you can increase the inode maximum, but once an 
inode is allocated it cannot be de-allocated in the sense that its space can 
be recovered.  You can, of course, decrease the inode maximum to a value as 
low as the number of used and allocated inodes, but that would not help you 
here.  Providing more metadata space via additional NSDs seems your most 
expedient option for addressing the issue.
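
If you go that route, a rough sketch of adding metadata-only NSDs to the 
system pool might look like the following.  The device, server, NSD, and file 
system names are placeholders, so check the mmcrnsd, mmadddisk, and 
mmrestripefs documentation for your release before running anything:

   # newmeta.stanza -- hypothetical stanza file describing the new disks
   %nsd: device=/dev/sdx nsd=meta_nsd_01 servers=nsdserver1,nsdserver2 usage=metadataOnly failureGroup=1 pool=system
   %nsd: device=/dev/sdy nsd=meta_nsd_02 servers=nsdserver2,nsdserver1 usage=metadataOnly failureGroup=2 pool=system

   mmcrnsd -F newmeta.stanza           # create the NSDs
   mmadddisk gpfs0 -F newmeta.stanza   # add them to the file system (system pool)
   mmrestripefs gpfs0 -b               # optional: rebalance existing metadata onto the new disks

The two failure groups matter because with two-way metadata replication the 
two copies must land in different failure groups.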

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com



From:   "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   01/23/2018 01:10 PM
Subject:        Re: [gpfsug-discuss] Metadata only system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hi All, 

I do have metadata replication set to two, so Alex, does that make more 
sense?
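
For anyone following along, the replication settings can be double-checked 
with something like this, where "gpfs0" stands in for the real file system 
name:

   mmlsfs gpfs0 -m -M     # default and maximum number of metadata replicas

With two copies of metadata, every inode, directory block, and indirect block 
consumes twice its nominal space in the system pool.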

And I had forgotten about indirect blocks for large files, which actually 
makes sense with the user in question … my apologies for that.  Between a 
gravely ill pet and a family member recovering at home from pneumonia, I’m 
way more sleep deprived right now than I’d like.  :-(

Fred - I think you’ve already answered this … but mmchfs can only allocate 
more inodes; it cannot be used to shrink the number of inodes?  That would 
make sense, and if that’s the case then I can add more NSDs to the system 
pool.
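
For concreteness, the command I have in mind is something like the following 
(the file system name is a placeholder); it can raise the limit but, as I 
understand it, never reclaim inodes that are already allocated:

   mmdf gpfs0                             # shows used / free / allocated / maximum inodes
   mmchfs gpfs0 --inode-limit 400000000   # raises the maximum number of inodes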

Thanks…

Kevin

On Jan 23, 2018, at 11:27 AM, Alex Chekholko <alex at calicolabs.com> wrote:

2.8TB seems quite high for only 350M inodes.  Are you sure you only have 
metadata in there?
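
Rough numbers, since the inode size matters a lot here: inode space is 
roughly allocated inodes x inode size x metadata replicas.  With 512-byte 
inodes that is only about 0.3 TiB for 350M inodes at two copies; with 4 KiB 
inodes (the default on more recent file systems, if I remember right) it is 
closer to 2.6 TiB, before counting directories and indirect blocks.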

On Tue, Jan 23, 2018 at 9:25 AM, Frederick Stock <stockf at us.ibm.com> wrote:
One possibility is the creation/expansion of directories or allocation of 
indirect blocks for large files.

Not sure if this is the issue here, but at one time inode allocation was 
considered slow, so folks may have pre-allocated inodes to avoid that 
overhead during file creation.  To my understanding, inode creation is no 
longer slow enough that users need to pre-allocate inodes.  There are likely 
some applications where pre-allocating is necessary, but I expect they are 
the exception.  I mention this because you have a lot of free inodes, and of 
course once inodes are allocated they cannot be de-allocated. 
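
For reference, pre-allocation is the optional second value of the inode 
limit, along the lines of:

   mmchfs gpfs0 --inode-limit 400000000:360000000   # MaxNumInodes[:NumInodesToPreallocate]

with the file system name and the numbers purely illustrative.  The same form 
is accepted at creation time by mmcrfs and, I believe, for independent 
filesets by mmcrfileset/mmchfileset.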

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com



From:        "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        01/23/2018 12:17 PM
Subject:        [gpfsug-discuss] Metadata only system pool
Sent by:        gpfsug-discuss-bounces at spectrumscale.org




Hi All, 

I was under the (possibly false) impression that if you have a filesystem 
where the system pool contains metadata only, then the only thing that would 
cause the amount of free space in that pool to change is the creation of more 
inodes … is that correct?  In other words, given that I have a filesystem 
with 130 million free (but allocated) inodes:

Inode Information
-----------------
Number of used inodes:       218635454
Number of free inodes:       131364674
Number of allocated inodes:  350000128
Maximum number of inodes:    350000128

I would not expect that a user creating a few hundred or even a few thousand 
files could cause a “no space left on device” error (which one of my users is 
getting).  There’s plenty of free data space, BTW.

Now my system pool is almost “full”:

(pool total)           2.878T                                   34M (  0%)        140.9M ( 0%)
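
That line is the system pool total from mmdf.  To double-check which disks 
the pool contains and that they really are metadataOnly, something like this 
should do it, with "gpfs0" standing in for the real file system name:

   mmlsdisk gpfs0 -L    # per-disk holds-metadata / holds-data flags and storage pool
   mmdf gpfs0           # per-pool and per-disk free space, plus the inode summary above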

But again, what - outside of me creating more inodes - would cause that to 
change??

Thanks…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
Kevin.Buterbaugh at vanderbilt.edu - (615) 875-9633


_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

