[gpfsug-discuss] Metadata usage almost doubled after policy run with migration rule

IBM Spectrum Scale scale at us.ibm.com
Wed Jun 9 13:54:36 BST 2021


Hi Billich,

>Or maybe illplaced files use larger inodes? Looks like for each used inode
>we increased by about 4k: 400M inodes, 1.6T increase in size

A migration policy run with -I defer simply marks the files as
illPlaced; it does not extend the metadata of those files (the inode
size is fixed at file system creation). Instead, I'm wondering about
your placement rules: are they existing rules or newly installed ones?
Placement rules can set extended attributes (EAs) on newly created
files, which may increase metadata usage. Also, have any new EAs been
added to existing files?
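
You can spot-check a few of the affected files with mmlsattr, for
example (a minimal sketch; /fsxxxx/some/file is a placeholder path):

  # -L shows the assigned storage pool and flags such as 'illplaced';
  # -d additionally lists the names of any extended attributes
  mmlsattr -d -L /fsxxxx/some/file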


Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------

If you feel that your question can benefit other users of Spectrum Scale
(GPFS), then please post it to the public IBM developerWorks Forum at
https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479.


If your query concerns a potential software error in Spectrum Scale (GPFS)
and you have an IBM software maintenance contract please contact
1-800-237-5511 in the United States or your local IBM Service Center in
other countries.

The forum is informally monitored as time permits and should not be used
for priority messages to the Spectrum Scale (GPFS) team.



From:	"Billich Heinrich Rainer (ID SD)" <heinrich.billich at id.ethz.ch>
To:	gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:	2021/06/08 05:19 PM
Subject:	[EXTERNAL] [gpfsug-discuss] Metadata usage almost doubled after
	policy run with migration rule
Sent by:	gpfsug-discuss-bounces at spectrumscale.org




 Hello,

 A policy run with ‘-I defer’ and a migration rule almost doubled the
 metadata usage of a filesystem, filling the metadata disks to a
 critical level. I would like to understand whether this is expected and
 ‘as designed’, or whether I am facing some issue or bug.
 I hope a subsequent run of ‘mmrestripefs -p’ will reduce the metadata
 usage again.
 Thank you


 I want to move all data to a new storage pool and ran a policy like

   RULE 'migrate_to_Data'
     MIGRATE
     WEIGHT(0)
     TO POOL 'Data'

 for each fileset with

   mmapplypolicy -I defer
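
 Each run looked roughly like this (a sketch; the policy file name and
 the fileset junction path are placeholders):

   # one per-fileset run; policy.rules holds the MIGRATE rule shown above
   mmapplypolicy /fsxxxx/fileset1 -P policy.rules -I defer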

 Next I want to actually move the data with

   mmrestripefs -p
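
 To watch the metadata NSDs before and after, something like the
 following should work (a sketch; fsxxxx is the file system name used
 below):

   # -m restricts the report to the metadata disks
   mmdf fsxxxx -m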

 After the policy run, metadata usage increased from 2.06TiB to 3.53TiB
 and filled the available metadata space to >90%. This is somewhat
 surprising. Will the subsequent run of ‘mmrestripefs -p’ reduce the
 usage again, once the files are no longer illplaced? The number of used
 inodes did not change noticeably during the policy run.

 Or maybe illplaced files use larger inodes? It looks like metadata grew
 by about 4k per used inode: ~400M inodes, ~1.6T increase in size.
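
 A rough check of that figure from the numbers below:

   (3.53 - 2.06) TiB = 1.47 TiB ≈ 1.62e12 bytes of additional metadata
   1.62e12 bytes / 398502837 used inodes ≈ 4060 bytes, i.e. roughly the
   4 KiB inode size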

 Thank you,

 Heiner

 Some details

   # mmlsfs fsxxxx -f -i -B -I -m -M -r -R -V
   flag                value                    description
   ------------------- ------------------------ -----------------------------------
   -f                  8192                     Minimum fragment (subblock) size in bytes (system pool)
                       32768                    Minimum fragment (subblock) size in bytes (other pools)
   -i                  4096                     Inode size in bytes
   -B                  1048576                  Block size (system pool)
                       4194304                  Block size (other pools)
   -I                  32768                    Indirect block size in bytes
   -m                  1                        Default number of metadata replicas
   -M                  2                        Maximum number of metadata replicas
   -r                  1                        Default number of data replicas
   -R                  2                        Maximum number of data replicas
   -V                  23.00 (5.0.5.0)          Current file system version
                       19.01 (5.0.1.0)          Original file system version

   Inode Information
   -----------------
   Total number of used inodes in all Inode spaces:          398502837
   Total number of free inodes in all Inode spaces:           94184267
   Total number of allocated inodes in all Inode spaces:     492687104
   Total of Maximum number of inodes in all Inode spaces:    916122880
 _______________________________________________
 gpfsug-discuss mailing list
 gpfsug-discuss at spectrumscale.org
 http://gpfsug.org/mailman/listinfo/gpfsug-discuss



