[gpfsug-discuss] Mmrestripefs -R --metadata-only - how to estimate remaining execution time
Eric Horst
erich at uw.edu
Fri May 28 17:25:49 BST 2021
Yes Heiner, my experience is that the inode count in those operations is:
inodes * snapshots = total. I observed that once it starts processing the
snapshot inodes it moves faster, since inode usage in snapshots is sparser.
-Eric
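
(As a rough check against the numbers quoted further down: 7219160122 inodes
reported so far, divided by about 1070468352 allocated inodes, is roughly 6.7,
so if that file system carries around half a dozen snapshots the observed
counter would fit this rule of thumb.)
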
On Fri, May 28, 2021 at 1:56 AM Billich Heinrich Rainer (ID SD) <
heinrich.billich at id.ethz.ch> wrote:
> Hello,
>
>
>
> I just noticed:
>
>
>
> Maybe mmrestripefs does some extra processing on snapshots? The output
> looks much more as expected on filesystems with no snapshots present: both
> the number of inodes and the MB of data processed allow estimating the
> remaining runtime. Unfortunately, all our large production filesystems use
> snapshots …
>
>
>
> Cheers,
>
>
>
> Heiner
>
>
>
>
>
> 1.23 % complete on Tue May 18 21:07:10 2021 ( 37981765 inodes with total 328617 MB data processed)
> 1.25 % complete on Tue May 18 21:17:44 2021 ( 39439584 inodes with total 341032 MB data processed)
> 100.00 % complete on Tue May 18 21:22:44 2021 ( 41312000 inodes with total 356088 MB data processed)   <<<< # of inodes matches # of allocated inodes, MB processed matches used metadata disk space
>
>
>
> Scan completed successfully.
>
> # mmdf fsxxx -F
>
> Inode Information
>
> -----------------
>
> Total number of used inodes in all Inode spaces: 29007227
>
> Total number of free inodes in all Inode spaces: 12304773
>
> Total number of allocated inodes in all Inode spaces: 41312000   <<<<<<<<<< allocated inodes
>
> Total of Maximum number of inodes in all Inode spaces: 87323648
>
>
>
>
>
> # mmdf fsxxx -m --block-size M
>
> disk                 disk size  failure holds    holds           free in MB          free in MB
> name                     in MB    group metadata data      in full blocks        in fragments
> ---------------  ------------- -------- -------- ----- -------------------- -------------------
> Disks in storage pool: system (Maximum disk size allowed is 4.13 TB)
>                  -------------                         -------------------- -------------------
> (pool total)           2425152                              2067720 ( 85%)          1342 ( 0%)   <<<<<< 356190MB used
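>
> (Arithmetic check: the pool total of 2425152 MB minus 2067720 MB free in
> full blocks and 1342 MB free in fragments leaves about 356090 MB of used
> metadata, within a few MB of the 356088 MB reported as processed.)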
>
>
>
> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "Billich
> Heinrich Rainer (ID SD)" <heinrich.billich at id.ethz.ch>
> Reply to: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date: Friday, 28 May 2021 at 10:25
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Subject: [gpfsug-discuss] Mmrestripefs -R --metadata-only - how to estimate
> remaining execution time
>
>
>
> Hello,
>
>
>
> I want to estimate how much longer a running
>
>
>
> mmrestripefs -R --metadata-only --qos maintenance
>
>
>
> job will take to finish. We switched from ‘-m 1’ to ‘-m 2’ and are now
> running mmrestripefs to create the second copy of all metadata. I know that
> the % values in the output are useless, but now it looks like the number of
> inodes processed is of no use, too: the filesystem has about 1e9 allocated
> inodes, but mmrestripefs reports up to 7e9 inodes processed, and the
> reported count has been growing rapidly ever since it passed the number of
> allocated inodes.
>
>
>
> 4.41 % complete on Fri May 28 09:22:05 2021 (7219160122 inodes with total 8985067 MB data processed)
>
>
>
>
>
> Should I watch the ‘MB data processed’ value instead, i.e. will the job
> finish once all of the used metadata disk space has been processed?
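>
> As a rough sanity check one could extrapolate the ‘MB data processed’
> counter towards the used metadata space reported by mmdf. The snippet below
> is only a sketch of that idea, not an official tool: the target of 9139266
> MB is my own figure (pool total minus free space, taken from the mmdf -m
> output further down), and with ‘-m 2’ the used metadata space itself keeps
> growing while the restripe runs, so the result is a rough lower bound
> rather than a firm estimate.
>
>     import re
>     from datetime import datetime
>
>     LINE_RE = re.compile(
>         r"([\d.]+) *% complete on (.+?\d{4}) *\( *(\d+) inodes with *total (\d+) MB data processed")
>
>     def parse(line):
>         # returns (timestamp, inodes, MB processed) from one progress line
>         m = LINE_RE.search(line)
>         ts = datetime.strptime(m.group(2).strip(), "%a %b %d %H:%M:%S %Y")
>         return ts, int(m.group(3)), int(m.group(4))
>
>     def eta_hours(first_line, last_line, target_mb):
>         # linear extrapolation of the MB counter between two samples
>         t0, _, mb0 = parse(first_line)
>         t1, _, mb1 = parse(last_line)
>         rate = (mb1 - mb0) / (t1 - t0).total_seconds()  # MB per second
>         return (target_mb - mb1) / rate / 3600
>
>     print(eta_hours(
>         "4.39 % complete on Fri May 28 08:49:18 2021 (7151592448 inodes with total 8968126 MB data processed)",
>         "4.41 % complete on Fri May 28 09:22:05 2021 (7219160122 inodes with total 8985067 MB data processed)",
>         9139266))
>
> With the two most recent samples below this comes out at roughly five
> hours, if those assumptions hold.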
>
>
>
> I see few IOPS on the metadata disks and wonder whether something is stuck,
> and I’m not sure it makes sense to wait any longer. But without an estimate
> of how much longer the job will take I hesitate to restart it, since a
> restart would have to scan all metadata again … Metadata is on SSD or NVMe,
> and disk IOPS were never the limit. I turned QoS off on the filesystem
> (previously I had restricted the IOPS), but I don’t see any increase in
> IOPS.
>
>
>
> Thank you. I know this is an old issue, but I would still strongly welcome
> any advice or comments.
>
>
>
> Cheers,
>
>
>
> Heiner
>
>
>
> I’m aware of RFE ID 150409, ‘mmrestripefs % complete metric is near
> useless’, but I’m not sure whether it covers the metadata-only case, too.
>
>
>
> Inode Information
>
> -----------------
>
> Total number of used inodes in all Inode spaces: 735005692
>
> Total number of free inodes in all Inode spaces: 335462660
>
> Total number of allocated inodes in all Inode spaces: 1070468352
>
> Total of Maximum number of inodes in all Inode spaces: 1223782912
>
>
>
> mmrestripefs output:
>
>
>
> START Mon May 24 20:27:25 CEST 2021
>
> Scanning file system metadata, phase 1 ...
>
> 100 % complete on Mon May 24 21:49:27 2021
>
> Scan completed successfully.
>
> Scanning file system metadata, phase 2 ...
>
> 100 % complete on Mon May 24 21:49:28 2021
>
> Scanning file system metadata for data storage pool
>
> 90 % complete on Mon May 24 21:49:34 2021
>
> 100 % complete on Mon May 24 21:49:35 2021
>
> Scanning file system metadata for Capacity storage pool
>
> 49 % complete on Mon May 24 21:49:41 2021
>
> 100 % complete on Mon May 24 21:49:46 2021
>
> Scan completed successfully.
>
> Scanning file system metadata, phase 3 ...
>
> Scan completed successfully.
>
> Scanning file system metadata, phase 4 ...
>
> 100 % complete on Mon May 24 21:49:48 2021
>
> Scan completed successfully.
>
> Scanning file system metadata, phase 5 ...
>
> 100 % complete on Mon May 24 21:49:52 2021
>
> Scan completed successfully.
>
> Scanning user file metadata ...
>
> 0.01 % complete on Mon May 24 21:50:13 2021 ( 613108 inodes with total 76693 MB data processed)
> 0.01 % complete on Mon May 24 21:50:33 2021 ( 1090048 inodes with total 80495 MB data processed)
> 0.01 % complete on Mon May 24 21:50:53 2021 ( 1591808 inodes with total 84576 MB data processed)
>
> …
>
> 3.97 % complete on Thu May 27 22:01:02 2021 (1048352497 inodes with total 8480467 MB data processed)
> 3.99 % complete on Thu May 27 22:30:18 2021 (1050254855 inodes with total 8495969 MB data processed)
> 4.01 % complete on Thu May 27 22:59:39 2021 (1052304294 inodes with total 8512683 MB data processed)
> 4.03 % complete on Thu May 27 23:29:11 2021 (1055390220 inodes with total 8537615 MB data processed)
> 4.05 % complete on Thu May 27 23:58:58 2021 (1059333989 inodes with total 8568871 MB data processed)
> 4.07 % complete on Fri May 28 00:28:48 2021 (1064728403 inodes with total 8611605 MB data processed)
> 4.09 % complete on Fri May 28 00:58:50 2021 (1067749260 inodes with total 8636120 MB data processed)   <<< approximately number of allocated inodes
> 4.11 % complete on Fri May 28 01:29:00 2021 (1488665433 inodes with total 8661588 MB data processed)   <<< what's going on??
> 4.13 % complete on Fri May 28 01:59:23 2021 (1851124480 inodes with total 8682324 MB data processed)
> 4.15 % complete on Fri May 28 02:29:55 2021 (1885948840 inodes with total 8700082 MB data processed)
> 4.17 % complete on Fri May 28 03:00:38 2021 (2604503808 inodes with total 8724069 MB data processed)
> 4.19 % complete on Fri May 28 03:31:38 2021 (2877196260 inodes with total 8746504 MB data processed)
> 4.21 % complete on Fri May 28 04:02:38 2021 (2933166080 inodes with total 8762555 MB data processed)
> 4.23 % complete on Fri May 28 04:33:48 2021 (2956295936 inodes with total 8782298 MB data processed)
> 4.25 % complete on Fri May 28 05:05:09 2021 (3628799151 inodes with total 8802452 MB data processed)
> 4.27 % complete on Fri May 28 05:36:40 2021 (3970093965 inodes with total 8823885 MB data processed)
> 4.29 % complete on Fri May 28 06:08:20 2021 (4012553472 inodes with total 8841407 MB data processed)
> 4.31 % complete on Fri May 28 06:40:11 2021 (4029545087 inodes with total 8858676 MB data processed)
> 4.33 % complete on Fri May 28 07:12:11 2021 (6080613874 inodes with total 8889395 MB data processed)
> 4.35 % complete on Fri May 28 07:44:21 2021 (6146937531 inodes with total 8907253 MB data processed)
> 4.37 % complete on Fri May 28 08:16:45 2021 (6167408718 inodes with total 8925236 MB data processed)
> 4.39 % complete on Fri May 28 08:49:18 2021 (7151592448 inodes with total 8968126 MB data processed)
> 4.41 % complete on Fri May 28 09:22:05 2021 (7219160122 inodes with total 8985067 MB data processed)
>
>
>
>
>
> # mmdf fsxxxx -m --block-size M
>
> disk                 disk size  failure holds    holds           free in MB          free in MB
> name                     in MB    group metadata data      in full blocks        in fragments
> ---------------  ------------- -------- -------- ----- -------------------- -------------------
> Disks in storage pool: system (Maximum disk size allowed is 32.20 TB)
> fsxxxx_rg07b_1         2861295        3 yes      no          1684452 ( 59%)         35886 ( 1%)   << 1'140'957MB used – old nsd
> fsxxxx_rg07a_1         2861295        3 yes      no          1685002 ( 59%)         35883 ( 1%)
> fsxxxx_rg03b_1         2861295        3 yes      no          1681515 ( 59%)         35627 ( 1%)
> fsxxxx_rg03a_1         2861295        3 yes      no          1680859 ( 59%)         35651 ( 1%)
>
> RG003LG004VS015        2046239       12 yes      no           904369 ( 44%)           114 ( 0%)   << 1'141'756MB used – newly added nsd
> RG003LG003VS015        2046239       12 yes      no           904291 ( 44%)           118 ( 0%)
> RG003LG002VS015        2046239       12 yes      no           903632 ( 44%)           114 ( 0%)
> RG003LG001VS015        2046239       12 yes      no           903247 ( 44%)           115 ( 0%)
>                  -------------                         -------------------- -------------------
> (pool total)          19630136                             10347367 ( 53%)        143503 ( 1%)
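>
> (Same arithmetic here: 19630136 MB pool total minus 10347367 MB free in
> full blocks and 143503 MB free in fragments gives about 9139266 MB of used
> metadata, versus 8985067 MB reported as processed so far.)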
>
>
>
> We run Spectrum Scale 5.0.5.4/5.0.5.5 on Power.
>
>
>
> Filesystem:
>
>
>
> -f                          32768               Minimum fragment (subblock) size in bytes
> -i                          4096                Inode size in bytes
> -I                          32768               Indirect block size in bytes
> -m                          2                   Default number of metadata replicas
> -M                          2                   Maximum number of metadata replicas
> -r                          1                   Default number of data replicas
> -R                          2                   Maximum number of data replicas
> -j                          scatter             Block allocation type
>
> -V                          23.00 (5.0.5.0)     Current file system version
>                             15.01 (4.2.0.0)     Original file system version
>
> -L                          33554432            Logfile size
>
> --subblocks-per-full-block  32                  Number of subblocks per full block
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>