[gpfsug-discuss] Running multiple mmrestripefs in a single cluster?

Marc A Kaplan makaplan at us.ibm.com
Wed Mar 15 20:33:19 GMT 2017


You can control the load of mmrestripefs (and all maintenance commands) on 
your system using mmchqos ... 
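
For example (a rough sketch only; the file system name fs1 and the
IOPS figures are placeholders, not recommendations, and the exact
mmchqos syntax should be checked against your release):

  # Cap maintenance-class I/O (mmrestripefs and friends) while leaving
  # normal application I/O unthrottled
  mmchqos fs1 --enable pool=*,maintenance=1000IOPS,other=unlimited

  # Watch the effect while the restripe runs
  mmlsqos fs1

  # Remove the throttle once the restripe is finished
  mmchqos fs1 --disable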




From:   "Olaf Weiser" <olaf.weiser at de.ibm.com>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   03/15/2017 04:04 PM
Subject:        Re: [gpfsug-discuss] Running multiple mmrestripefs in a 
single  cluster?
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Yes.. and please be careful about the number of nodes doing the job,
because multiple PIT workers will be hammering against your data.
If you limit the restripe to 2 nodes (-N ......) or adjust the
PIT workers down to 8 or even 4, you can run multiple restripes
without hurting the application workload too much, but the final
duration of your restripe will then be affected.
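
For instance (a rough sketch; node and file system names are
placeholders, and the exact behaviour of these knobs may differ
between releases):

  # Restrict the restripe helper nodes to two NSD servers
  mmrestripefs fs1 -b -N nsdnode1,nsdnode2

  # And/or lower the per-node PIT worker thread count
  # (depending on release this may only apply to commands started
  # after the change)
  mmchconfig pitWorkerThreadsPerNode=8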
cheers




From:        "Oesterlin, Robert" <Robert.Oesterlin at nuance.com>
To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:        03/15/2017 03:27 PM
Subject:        [gpfsug-discuss] Running multiple mmrestripefs in a single 
cluster?
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



I’m looking at migrating multiple file systems from one set of NSDs to 
another. Assuming I put aside any potential IO bottlenecks, has anyone 
tried running multiple “mmrestripefs” commands in a single cluster?
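
For illustration, roughly something like this per file system (the
NSD, node, and file system names below are placeholders; the actual
migration steps would of course depend on the setup):

  # Suspend the NSDs being vacated so no new data lands on them
  mmchdisk fs1 suspend -d "old_nsd1;old_nsd2"
  mmchdisk fs2 suspend -d "old_nsd3;old_nsd4"

  # Run both restripes in parallel, each restricted to its own helper nodes
  mmrestripefs fs1 -r -N nsdnode1,nsdnode2 &
  mmrestripefs fs2 -r -N nsdnode3,nsdnode4 &
  wait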
 
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
507-269-0413
 
 _______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




