[gpfsug-discuss] Metadata only system pool

david_johnson at brown.edu
Tue Jan 23 17:23:59 GMT 2018


If the new files need indirect blocks, or extended attributes that don’t fit in the basic inode, additional metadata space would need to be allocated. There might be other reasons, but these are the ones that come to mind immediately.
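As a rough sanity check (a sketch only: the 4 KiB inode size and 2-way metadata replication are assumptions, neither of which is stated in this thread), the allocated-inode count quoted below already accounts for most of that 2.878T pool on its own, before any indirect blocks or EA overflow blocks:

```python
# Assumed parameters -- not confirmed by the original poster:
INODE_SIZE_BYTES = 4096          # GPFS inode sizes range 512 B - 4 KiB; 4 KiB assumed here
METADATA_REPLICAS = 2            # assumed metadata replication factor

allocated_inodes = 350_000_128   # "Number of allocated inodes" from the mmdf output below

# Space consumed by the inode file alone, with replication
raw_bytes = allocated_inodes * INODE_SIZE_BYTES * METADATA_REPLICAS
tib = raw_bytes / 2**40
print(f"Inode space alone: {tib:.2f} TiB")  # ~2.61 TiB of a 2.878T pool
```

Under those assumptions the remaining headroom is thin, so a burst of file creates that each need an indirect block or an overflow EA block could plausibly exhaust the pool.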

  -- ddj
Dave Johnson

> On Jan 23, 2018, at 12:16 PM, Buterbaugh, Kevin L <Kevin.Buterbaugh at Vanderbilt.Edu> wrote:
> 
> Hi All,
> 
> I was under the (possibly false) impression that if you have a filesystem where the system pool contains metadata only then the only thing that would cause the amount of free space in that pool to change is the creation of more inodes … is that correct?  In other words, given that I have a filesystem with 130 million free (but allocated) inodes:
> 
> Inode Information
> -----------------
> Number of used inodes:       218635454
> Number of free inodes:       131364674
> Number of allocated inodes:  350000128
> Maximum number of inodes:    350000128
> 
> I would not expect that a user creating a few hundred or thousands of files could cause a “no space left on device” error (which I’ve got one user getting).  There’s plenty of free data space, BTW.
> 
> Now my system pool is almost “full”:
> 
> (pool total)           2.878T                                   34M (  0%)        140.9M ( 0%)
> 
> But again, what - outside of me creating more inodes - would cause that to change??
> 
> Thanks…
> 
> Kevin
> 
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and Education
> Kevin.Buterbaugh at vanderbilt.edu - (615)875-9633
> 
> 
> 
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

