Hi Kevin,

The number of NSDs is more or less nonsense here... what matters is just that the number of nodes x pitWorkerThreadsPerNode should not exceed the number of mutexes per FS block by too much. Did you adjust/tune the PIT workers?
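For reference, a rough sketch of checking and pinning that value (the node names are placeholders, and the exact behavior may differ by release, so verify against your own cluster before changing anything):

   # show the currently configured value; 0 (as Kevin sees below)
   # presumably means "let GPFS pick an internal default"
   mmlsconfig pitWorkerThreadsPerNode

   # set an explicit low value on the participating NSD servers so the
   # sum across all -N nodes (plus the fs manager) stays at or below 31
   # (may need the GPFS daemon recycled to take effect - check the docs)
   mmchconfig pitWorkerThreadsPerNode=3 -N nsd1,nsd2,nsd3,nsd4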
As far as I know, the fact that the code checks the number of NSDs is already considered a defect and will be fixed / is already fixed (I stepped into it here as well).

PS: QOS is the better approach to address this, but unfortunately not everyone is using it by default... that's why I suspect development decided to put in a check/limit here, which in your case (with QOS) wouldn't be needed.
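For anyone who has not enabled it yet, throttling maintenance commands with QOS looks roughly like this (a sketch; the filesystem name and the IOPS figure are placeholders to adapt to your environment):

   # cap maintenance commands (mmrestripefs, mmdeldisk, ...) at
   # 1000 IOPS per pool, while leaving the normal workload unthrottled
   mmchqos gpfs0 --enable pool=*,maintenance=1000IOPS,other=unlimited

   # verify the classes and watch what maintenance is actually consuming
   mmlsqos gpfs0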
From: "Buterbaugh, Kevin L" <Kevin.Buterbaugh@Vanderbilt.Edu>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 05/04/2017 05:44 PM
Subject: Re: [gpfsug-discuss] Well, this is the pits...
Sent by: gpfsug-discuss-bounces@spectrumscale.org

----------------------------------------

Hi Olaf,

Your explanation mostly makes sense, but...

Failed with 4 nodes … failed with 2 nodes … not gonna try with 1 node. And this filesystem only has 32 disks, which I would imagine is not an especially large number compared to what some people reading this e-mail have in their filesystems.

I thought that QOS (which I'm using) was what would keep an mmrestripefs from overrunning the system … QOS has worked extremely well for us - it's one of my favorite additions to GPFS.

Kevin

On May 4, 2017, at 10:34 AM, Olaf Weiser <olaf.weiser@de.ibm.com> wrote:
No... it is just in the code, because we have to avoid running out of mutexes per FS block.

Reducing the number of nodes (-N) down to 4 (2 nodes is even safer) is the easiest way to solve it for now.

I've been told the real root cause will be fixed in one of the next PTFs, within this year.

The warning message itself should appear every time, but unfortunately someone coded it so that it depends on the number of disks (NSDs)... that's why I suspect you didn't see it before. But the fact that we have to make sure not to overrun the system with mmrestripe remains, so please lower the -N number of nodes to 4, or better 2 (even though we know the mmrestripe will then take longer).
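Concretely, that suggestion amounts to something like the following (a sketch reusing Kevin's placeholders; nsd1,nsd2 stand for two of the 8 NSD servers):

   # same rebalance, but with only 2 participating nodes so the
   # PIT worker sum stays under the limit (it will take longer)
   mmrestripefs <new fs> -b -P capacity -N nsd1,nsd2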
Kevin L" <</font><a href=mailto:Kevin.Buterbaugh@Vanderbilt.Edu><font size=1 color=blue face="sans-serif"><u>Kevin.Buterbaugh@Vanderbilt.Edu</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list <</font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss@spectrumscale.org</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">05/04/2017
05:26 PM</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">[gpfsug-discuss]
Well, this is the pits...</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3><br></font><hr noshade><font size=3><br><br><br>Hi All, <br><br>Another one of those, “I can open a PMR if I need to” type questions…<br><br>We are in the process of combining two large GPFS filesystems into one
new filesystem (for various reasons I won’t get into here). Therefore,
I’m doing a lot of mmrestripe’s, mmdeldisk’s, and mmadddisk’s.<br><br>Yesterday I did an “mmrestripefs <old fs> -r -N <my 8 NSD servers>”
(after suspending a disk, of course). Worked like it should.<br><br>Today I did a “mmrestripefs <new fs> -b -P capacity -N <those
same 8 NSD servers>” and got:<br><br>mmrestripefs: The total number of PIT worker threads of all participating
nodes has been exceeded to safely restripe the file system. The total
number of PIT worker threads, which is the sum of pitWorkerThreadsPerNode
of the participating nodes, cannot exceed 31. Reissue the command
with a smaller set of participating nodes (-N option) and/or lower the
pitWorkerThreadsPerNode configure setting. By default the file system
manager node is counted as a participating node.<br>mmrestripefs: Command failed. Examine previous error messages to determine
cause.<br><br>So there must be some difference in how the “-r” and “-b” options calculate
the number of PIT worker threads. I did an “mmfsadm dump all | grep
pitWorkerThreadsPerNode” on all 8 NSD servers and the filesystem manager
node … they all say the same thing:<br><br> pitWorkerThreadsPerNode 0<br><br>Hmmm, so 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 > 31?!? I’m confused...<br><br>—<br>Kevin Buterbaugh - Senior System Administrator<br>Vanderbilt University - Advanced Computing Center for Research and Education</font><font size=3 color=blue><u><br></u></font><a href=mailto:Kevin.Buterbaugh@vanderbilt.edu><font size=3 color=blue><u>Kevin.Buterbaugh@vanderbilt.edu</u></font></a><font size=3>-
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss