No, it is just in the code, because we have to avoid running out of mutexes / blocking.

Reducing the number of nodes (-N) down to 4 (2 nodes is even safer) is the easiest way to work around it for now.
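For example, a minimal sketch of the reduced-node run (the file system name stays a placeholder and the node names are made up here, not your real NSD servers):

   mmrestripefs <new fs> -b -P capacity -N nsd01,nsd02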
I've been told the real root cause will be fixed in one of the next PTFs, within this year.

The warning message itself should appear every time, but unfortunately it was coded to depend on the number of disks (NSDs); that's why I suspect you didn't see it before. The fact remains that we have to make sure not to overrun the system with mmrestripe, so please lower the -N number of nodes to 4, or better 2 (even though we know the mmrestripe will then take longer).
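The other knob your error text mentions is pitWorkerThreadsPerNode itself. If you would rather keep more nodes in the restripe, something along these lines should keep the sum under the limit; the node names and the value 6 are only an illustration (5 participating nodes, counting the file system manager, times 6 threads = 30, which stays under 31):

   mmchconfig pitWorkerThreadsPerNode=6 -N nsd01,nsd02,nsd03,nsd04,fsmgr01

Whether that takes effect immediately or needs the daemon recycled on those nodes is something I would verify for your code level.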
From:    "Buterbaugh, Kevin L" <Kevin.Buterbaugh@Vanderbilt.Edu>
To:      gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:    05/04/2017 05:26 PM
Subject: [gpfsug-discuss] Well, this is the pits...
Sent by: gpfsug-discuss-bounces@spectrumscale.org
----------------------------------------------------------------------

Hi All,

Another one of those, “I can open a PMR if I need to”
type questions…

We are in the process of combining two large GPFS filesystems into one new filesystem (for various reasons I won’t get into here). Therefore, I’m doing a lot of mmrestripe’s, mmdeldisk’s, and mmadddisk’s.

Yesterday I did an “mmrestripefs <old fs> -r -N <my 8 NSD servers>” (after suspending a disk, of course). Worked like it should.

Today I did a “mmrestripefs <new fs> -b -P capacity
-N <those same 8 NSD servers>” and got:

mmrestripefs: The total number of PIT worker threads of all participating nodes has been exceeded to safely restripe the file system. The total number of PIT worker threads, which is the sum of pitWorkerThreadsPerNode of the participating nodes, cannot exceed 31. Reissue the command with a smaller set of participating nodes (-N option) and/or lower the pitWorkerThreadsPerNode configure setting. By default the file system manager node is counted as a participating node.
mmrestripefs: Command failed. Examine previous error messages to determine cause.

So there must be some difference in how the “-r” and
“-b” options calculate the number of PIT worker threads. I did an “mmfsadm dump all | grep pitWorkerThreadsPerNode” on all 8 NSD servers and the filesystem manager node … they all say the same thing:

   pitWorkerThreadsPerNode 0

Hmmm, so 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 > 31?!? I’m confused...

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
Kevin.Buterbaugh@vanderbilt.edu - (615) 875-9633

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss