[gpfsug-discuss] Metadata usage almost doubled after policy run with migration rule

Billich Heinrich Rainer (ID SD) heinrich.billich at id.ethz.ch
Thu Jun 17 12:53:03 BST 2021


Hello,

 

Thank you for your response. I opened a case with IBM, and here is what we found, as I understand it:

 

If you change the storage pool of a file that has a copy in a snapshot, the inode is duplicated (copy-on-write): the storage pool assignment is part of the inode and is preserved in the snapshot, so the snapshot gets its own version of the inode. Even if the file’s blocks actually did move to storage pool B, the snapshot still shows the previous storage pool A. Once the snapshots are deleted, the additional metadata space is freed. Presumably backup software saves the storage pool as well, hence the snapshot must preserve the original value.

 

You can easily verify with mmlsattr that the snapshot version and the live version of a file show different storage pools.
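For example (a minimal sketch with a hypothetical file and snapshot name; the exact output layout may vary by release):

  # live file: shows the new pool after the migration rule was applied
  mmlsattr -L /fsxxxx/somefileset/file.dat | grep 'storage pool'
  #   storage pool name:     Data

  # the same file inside a snapshot: still shows the old pool
  mmlsattr -L /fsxxxx/somefileset/.snapshots/snap1/file.dat | grep 'storage pool'
  #   storage pool name:     OldPool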

 

I saw about 4500 bytes of extra space required for each inode when I ran the migration rule that changed the storage pool.
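As a rough sanity check (my own back-of-the-envelope arithmetic, numbers rounded):

  ~398.5M used inodes x ~4 KiB extra metadata per inode ≈ 1.6 TB ≈ 1.5 TiB

which is roughly in line with the observed increase from 2.06 TiB to 3.53 TiB.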

 

Kind regards,

 

Heiner

 

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of IBM Spectrum Scale <scale at us.ibm.com>
Reply to: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Wednesday, 9 June 2021 at 14:55
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Cc: "gpfsug-discuss-bounces at spectrumscale.org" <gpfsug-discuss-bounces at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Metadata usage almost doubled after policy run with migration rule

 

Hi Billich,

>Or maybe illplaced files use larger inodes? Looks like for each used inode we increased by about 4k: 400M inodes, 1.6T increase in size

Basically, a migration policy run with -I defer simply marks the files as illPlaced, which does not cause metadata extension for those files (the inode size is fixed after file system creation). Instead, I'm wondering about your placement rules: are they existing rules or newly installed ones? Placement rules can set extended attributes (EAs) on newly created files and may cause increased metadata size. Also, were any new EAs inserted for existing files?
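If it helps, one way to spot newly added EAs on a file is something like the following (a sketch; the path is hypothetical, and please check the mmlsattr man page for the exact flags on your release):

  # list the extended attribute names and values of a file
  mmlsattr -d -L /fsxxxx/somefileset/file.dat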


Regards, The Spectrum Scale (GPFS) team

------------------------------------------------------------------------------------------------------------------
If you feel that your question can benefit other users of Spectrum Scale (GPFS), then please post it to the public IBM developerWorks Forum at https://www.ibm.com/developerworks/community/forums/html/forum?id=11111111-0000-0000-0000-000000000479. 

If your query concerns a potential software error in Spectrum Scale (GPFS) and you have an IBM software maintenance contract please contact 1-800-237-5511 in the United States or your local IBM Service Center in other countries. 

The forum is informally monitored as time permits and should not be used for priority messages to the Spectrum Scale (GPFS) team.

"Billich Heinrich Rainer (ID SD)" ---2021/06/08 05:19:32 PM--- Hello,

From: "Billich Heinrich Rainer (ID SD)" <heinrich.billich at id.ethz.ch>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 2021/06/08 05:19 PM
Subject: [EXTERNAL] [gpfsug-discuss] Metadata usage almost doubled after policy run with migration rule
Sent by: gpfsug-discuss-bounces at spectrumscale.org





Hello,

A policy run with ‘-I defer’ and a migration rule almost doubled the metadata usage of a filesystem and filled the metadata disks to a critical level. I would like to understand whether this is expected and ‘as designed’, or whether I am facing some issue or bug.
I hope a subsequent run of ‘mmrestripefs -p’ will reduce the metadata usage again.
Thank you


I want to move all data to a new storage pool and ran a policy like

RULE 'migrate_to_Data'
  MIGRATE
    WEIGHT(0)

  TO POOL 'Data'

for each fileset with 

  mmapplypolicy -I defer

Next I want to actually move the data with 

  mmrestripefs -p
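
Put together, the sequence looks roughly like this (a sketch; the policy file name, fileset path and pool name are placeholders for the real ones):

  # policy.rules: assign all selected files to the new pool
  RULE 'migrate_to_Data'
    MIGRATE
      WEIGHT(0)
      TO POOL 'Data'

  # mark the files as ill-placed only, defer the actual data movement
  mmapplypolicy /fsxxxx/somefileset -P policy.rules -I defer

  # later: actually move the ill-placed blocks into pool 'Data'
  mmrestripefs fsxxxx -p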

After the policy run, metadata usage increased from 2.06 TiB to 3.53 TiB and filled the available metadata space to more than 90%. This is somewhat surprising. Will the subsequent run of ‘mmrestripefs -p’ reduce the usage again once the files are no longer ill-placed? The number of used inodes did not change noticeably during the policy run.

Or maybe ill-placed files use larger inodes? It looks like each used inode grew by about 4 KiB: 400M inodes, roughly a 1.6 TB increase in size.
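
For reference, one way to watch metadata usage is to list only the disks that can hold metadata, e.g.:

  mmdf fsxxxx -m --block-size auto

(Not necessarily how the numbers above were obtained.)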

Thank you,

Heiner

Some details

# mmlsfs  fsxxxx -f -i -B -I -m -M -r -R -V
flag                value                    description
------------------- ------------------------ -----------------------------------
-f                 8192                     Minimum fragment (subblock) size in bytes (system pool)
                    32768                    Minimum fragment (subblock) size in bytes (other pools)
-i                 4096                     Inode size in bytes
-B                 1048576                  Block size (system pool)
                    4194304                  Block size (other pools)
-I                 32768                    Indirect block size in bytes
-m                 1                        Default number of metadata replicas
-M                 2                        Maximum number of metadata replicas
-r                 1                        Default number of data replicas
-R                 2                        Maximum number of data replicas
-V                 23.00 (5.0.5.0)          Current file system version

                    19.01 (5.0.1.0)          Original file system version

Inode Information
-----------------
Total number of used inodes in all Inode spaces:          398502837
Total number of free inodes in all Inode spaces:           94184267
Total number of allocated inodes in all Inode spaces:     492687104

Total of Maximum number of inodes in all Inode spaces:    916122880




