[gpfsug-discuss] Running mmcheckquota on a file system with 1.3B files

Wahl, Edward ewahl at osc.edu
Mon Aug 19 21:01:14 BST 2019


I'm assuming that was a run in the foreground and not using QoS?

Our timings sound roughly similar for a foreground run under 4.2.3.x: about 1 hour for 100 million files and ~2 hours for 300 million. Also, I'm assuming those are actual file counts, not inode counts!
Background runs are, of course, all over the place with QoS. I've seen between 8-12 hours for just 100 million files, but the NSDs on that FS were moderately busy during those periods.


I'd love to know if IBM has any "best practice" guidance for running mmcheckquota.
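In the absence of official guidance, a common approach is to throttle mmcheckquota via QoS so a background run doesn't starve user I/O. A minimal sketch, assuming a file system named "gpfs01" (hypothetical) and illustrative IOPS limits; mmcheckquota falls under the QoS maintenance class by default:

```shell
# Cap maintenance-class commands (mmcheckquota among them) at a modest
# IOPS budget while leaving normal workload I/O unlimited.
# "gpfs01" and "300IOPS" are placeholders - tune for your hardware.
mmchqos gpfs01 --enable pool=system,maintenance=300IOPS,other=unlimited

# Kick off the check (add -v for verbose per-user/group progress):
mmcheckquota gpfs01

# Watch the throttling in effect while it runs:
mmlsqos gpfs01 --seconds 60
```

The trade-off is exactly what the timings above suggest: a tight maintenance cap can stretch a 1-2 hour foreground run into 8-12 hours, so pick a limit that matches how busy your metadata NSDs are.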

Ed



________________________________
From: gpfsug-discuss-bounces at spectrumscale.org <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Oesterlin, Robert <Robert.Oesterlin at nuance.com>
Sent: Monday, August 19, 2019 9:54 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Running mmcheckquota on a file system with 1.3B files


Thanks - I kicked it off and it finished in about 12 hours, much quicker than I expected.





Bob Oesterlin

Sr Principal Storage Engineer, Nuance





From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of IBM Spectrum Scale <scale at us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Monday, August 19, 2019 at 8:24 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: [EXTERNAL] Re: [gpfsug-discuss] Running mmcheckquota on a file system with 1.3B files



Bob, like most questions of this type, I think the answer depends on a number of variables. Generally we do not recommend running the mmcheckquota command during the peak usage of your Spectrum Scale system. As I think you know, the command increases I/O to the NSDs that hold metadata, and the number of metadata NSDs contributes to the time the command takes to complete, i.e. more metadata NSDs should improve the overall execution time.
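Since the metadata NSD count is the main lever here, it can help to confirm how many disks in the file system actually hold metadata. A quick check, assuming a file system named "gpfs01" (hypothetical):

```shell
# List all disks in the file system; the "holds metadata" column shows
# which NSDs carry metadata ("metadataOnly" or "dataAndMetadata" usage).
mmlsdisk gpfs01

# The same information in machine-parsable form, one line per disk:
mmlsdisk gpfs01 -Y
```

If only a handful of NSDs hold metadata, that is likely the bottleneck for mmcheckquota regardless of how quiet the rest of the system is.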

Regards, The Spectrum Scale (GPFS) team





