[gpfsug-discuss] Restriping GPFS Metadata

Yuri L Volobuev volobuev at us.ibm.com
Fri Dec 11 23:46:03 GMT 2015


Hi Kevin,

The short answer is: no, it's not possible to do a rebalance (mmrestripefs
-b) for metadata but not data with current GPFS code.  This is something we
plan on addressing in a future code update.

It doesn't really help to separate data and metadata into different pools.
Using -P system results in some metadata being processed, but not all.  All
of this has to do with the mechanics of GPFS PIT code.  If you haven't
already, I recommend reading "Long-running GPFS administration commands" [
https://ibm.biz/BdHnX8] doc for background.  The layout of storage pools is
something that's orthogonal to how PIT scans work.  It's easy to rebalance
just system metadata (inode file, block and inode allocation maps, a few
smaller system files): just ^C mmrestripefs once it gets into Phase 4 (User
metadata).  Rebalancing user metadata (directories, indirect blocks, EA
overflow blocks) requires running mmrestripefs -b to completion, and this
indeed can take a while on a large fs.  If one tries to speed things up
using -P system, then all inodes that don't belong to the system pool will
get summarily skipped, including the metadata associated with those inodes.
A code change is needed to enable metadata-only rebalancing.
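A sketch of the interrupt-at-Phase-4 workaround described above ("fs1" is a
hypothetical file system device name):

```shell
# Start a full rebalance; mmrestripefs reports its progress phase by phase.
mmrestripefs fs1 -b

# Once the command reports that it has entered Phase 4 (user metadata),
# interrupt it with Ctrl-C.  At that point the system metadata (inode
# file, block and inode allocation maps, smaller system files) has
# already been rebalanced; user metadata (directories, indirect blocks,
# EA overflow blocks) has not.  Rebalancing user metadata too means
# letting "mmrestripefs fs1 -b" run to completion.
```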

yuri
