[gpfsug-discuss] Metadata only system pool
Alex Chekholko
alex at calicolabs.com
Tue Jan 23 17:27:57 GMT 2018
2.8TB seems quite high for only 350M inodes. Are you sure you only have
metadata in there?
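
For a rough sanity check (assuming defaults here, since I don't know your
inode size or replication settings): at the old 512-byte default inode size,
350,000,128 allocated inodes account for only ~180 GB per metadata replica,
while at a 4 KiB inode size with two-way metadata replication they would
account for roughly 2.6 TiB on their own. You can check both with something
like:

  mmlsfs <fsname> -i   # inode size in bytes
  mmlsfs <fsname> -m   # default number of metadata replicas

(<fsname> is a placeholder for your file system name.)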
On Tue, Jan 23, 2018 at 9:25 AM, Frederick Stock <stockf at us.ibm.com> wrote:
> One possibility is the creation/expansion of directories or allocation of
> indirect blocks for large files.
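>
> If you want to see where the metadata space is going, something along
> these lines should list just the metadata disks and their free space
> (exact flags may vary by release):
>
>   mmdf <fsname> -m
>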
>
> Not sure if this is the issue here, but at one time inode allocation was
> considered slow, so folks may have pre-allocated inodes to avoid that
> overhead during file creation. To my understanding, inode creation is no
> longer so slow that users need to pre-allocate inodes. Yes, there are likely
> some applications where pre-allocating is necessary, but I expect they
> would be the exception. I mention this because you have a lot of free
> inodes, and of course once they are allocated they cannot be de-allocated.
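>
> For reference, pre-allocation is the optional second value of the inode
> limit; a hypothetical example (numbers made up):
>
>   mmchfs <fsname> --inode-limit 400000000:350000000
>
> The maximum can be raised later, but as noted, the pre-allocated inodes
> cannot be freed without recreating the file system.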
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf at us.ibm.com
>
>
>
> From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh at Vanderbilt.Edu>
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: 01/23/2018 12:17 PM
> Subject: [gpfsug-discuss] Metadata only system pool
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hi All,
>
> I was under the (possibly false) impression that if you have a filesystem
> where the system pool contains metadata only, then the only thing that
> would cause the amount of free space in that pool to change is the creation
> of more inodes … is that correct? In other words, given that I have a
> filesystem with 130 million free (but allocated) inodes:
>
> Inode Information
> -----------------
> Number of used inodes: 218635454
> Number of free inodes: 131364674
> Number of allocated inodes: 350000128
> Maximum number of inodes: 350000128
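>
> (That's the inode summary as reported by mmdf; I believe "mmdf <fsname> -F"
> prints just this section.)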
>
> I would not expect that a user creating a few hundred or even a few
> thousand files could cause a “no space left on device” error (which one of
> my users is currently getting). There’s plenty of free data space, BTW.
>
> Now my system pool is almost “full”:
>
> (pool total)   2.878T   34M ( 0%)   140.9M ( 0%)
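>
> (Reading the mmdf columns, that should be the total pool size, then the
> free space in full blocks, then the free space in fragments, each with its
> percentage of the pool.)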
>
> But again, what - outside of me creating more inodes - would cause that to
> change?
>
> Thanks…
>
> Kevin
>
> —
> Kevin Buterbaugh - Senior System Administrator
> Vanderbilt University - Advanced Computing Center for Research and
> Education
> Kevin.Buterbaugh at vanderbilt.edu - (615) 875-9633
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>