[gpfsug-discuss] pool-metadata_high_error

Frederick Stock stockf at us.ibm.com
Mon May 14 12:28:58 BST 2018


The difference in your inode information is presumably because the fileset 
you reference is an independent fileset, and it has its own inode space 
distinct from the inode space used for the "root" fileset (file system).
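
One way to confirm this (a sketch; "gpfs0" stands in for your file system 
device name) is to list the filesets together with their inode space 
information:

   mmlsfileset gpfs0 -L

Independent filesets are listed with their own inode space and their own 
allocated and maximum inode counts, separate from those of the root fileset.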

Fred
__________________________________________________
Fred Stock | IBM Pittsburgh Lab | 720-430-8821
stockf at us.ibm.com



From:   "Markus Rohwedder" <rohwedder at de.ibm.com>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   05/14/2018 07:19 AM
Subject:        Re: [gpfsug-discuss] pool-metadata_high_error
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Hello, 

the pool-metadata_high_error reports issues with the free blocks in the 
metadataOnly and/or dataAndMetadata NSDs in the system pool.

mmlspool, and subsequently the GPFSPool sensor, is the source of the 
information that is used by the threshold rule that reports this error.

So please compare with 

mmlspool 
and 
mmperfmon query gpfs_pool_disksize, gpfs_pool_free_fullkb -b 86400 -n 1
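
As an additional cross-check of the metadata NSDs themselves (a sketch; 
"gpfs0" is a placeholder for the file system device), mmdf restricted to 
metadata disks shows their free space:

   mmdf gpfs0 -m

The free space reported there for the metadataOnly/dataAndMetadata disks 
should roughly match the mmlspool and mmperfmon numbers above.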

Once inodes are allocated, I am not aware of a method to de-allocate them. 
This is what the Knowledge Center says:

"Inodes are allocated when they are used. When a file is deleted, the 
inode is reused, but inodes are never deallocated. When setting the 
maximum number of inodes in a file system, there is the option to 
preallocate inodes. However, in most cases there is no need to preallocate 
inodes because, by default, inodes are allocated in sets as needed. If you 
do decide to preallocate inodes, be careful not to preallocate more inodes 
than will be used; otherwise, the allocated inodes will unnecessarily 
consume metadata space that cannot be reclaimed. "
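
If the goal is to gain headroom rather than to reclaim already-allocated 
inodes, the maximum number of inodes can be raised without preallocating. 
A sketch with placeholder names ("gpfs0" for the device, "fset1" for the 
independent fileset) and example values:

   mmchfs gpfs0 --inode-limit 2000000000
   mmchfileset gpfs0 fset1 --inode-limit 1900000000

Both commands accept an optional ":NumInodesToPreallocate" suffix on the 
limit; leaving it off avoids preallocating inodes that would consume 
metadata space, per the Knowledge Center text quoted above.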


Mit freundlichen Grüßen / Kind regards

Dr. Markus Rohwedder

Spectrum Scale GUI Development


Phone: +49 7034 6430190
E-Mail: rohwedder at de.ibm.com

IBM Deutschland Research & Development
Am Weiher 24
65451 Kelsterbach
Germany


From: KG <spectrumscale at kiranghag.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 14.05.2018 12:57
Subject: [gpfsug-discuss] pool-metadata_high_error
Sent by: gpfsug-discuss-bounces at spectrumscale.org



Hi Folks

I have a customer (IHAC) who is reporting pool-metadata_high_error on the GUI.

The inode utilisation on the filesystem is as below:
Used inodes - 92922895
Free inodes - 1684812529
Allocated - 1777735424
Max inodes - 1911363520

The inode utilization on one fileset (it is the only one being used) is below:
Used inodes - 93252664
Allocated - 1776624128
Max inodes - 1876624064

Is this because the difference between the allocated and maximum inodes is so small?

The customer tried reducing the allocated inodes on the fileset (to a value 
between the used and maximum inode counts), and the GUI complains that the 
value is out of range.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

