<font size=3 color=red>>>Thanks for the info on the releases …
can you clarify about pitWorkerThreadsPerNode? </font><br><br><font size=3><b>pitWorkerThreadsPerNode</b> -- Specifies how many threads per node
perform restripe, data movement, and similar operations.</font><br><br><font size=3 color=red>>>As I said in my original post, on all
8 NSD servers and the filesystem manager it is set to zero. No matter
how many times I add zero to zero I don’t get a value > 31! ;-)
So I take it that zero has some sort of unspecified significance?
Thanks…</font><br><br><font size=2 face="sans-serif">A value of 0 simply means that </font><font size=3><b>pitWorkerThreadsPerNode</b></font><font size=2 face="sans-serif"> takes an internally computed value, based on the GPFS setup and file-system configuration
(which can be 16 or lower), according to the following formula.</font><br><br><font size=3>The default is <b>pitWorkerThreadsPerNode</b> = MIN(16,
(numberOfDisks_in_filesystem * 4) / numberOfParticipatingNodes_in_mmrestripefs
+ 1) </font><br><br><font size=2 face="sans-serif">For example, if you have 64 x NSDs in
your file-system and you are using 8 NSD servers in "mmrestripefs
-N", then</font><br><br><font size=3><b>pitWorkerThreadsPerNode</b></font><font size=2 face="sans-serif">= MIN (16, (256/8)+1) resulting in </font><font size=3><b>pitWorkerThreadsPerNode</b></font><font size=2 face="sans-serif">to take value of 16 ( default 0 will result in 16 threads doing restripe
per mmrestripefs participating Node).</font><br><br><font size=2 face="sans-serif">If you want 8 NSD servers (running 4.2.2.3)
to participate in the mmrestripefs operation, then set "mmchconfig pitWorkerThreadsPerNode=3
-N <8_NSD_Servers>" so that the sum (8 x 3 = 24) does not exceed 31.</font><br><br><font size=2 face="sans-serif">Regards,</font><br><font size=2 face="sans-serif">-Kums</font><br><br><br><br><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">"Buterbaugh, Kevin
L" <Kevin.Buterbaugh@Vanderbilt.Edu></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">05/04/2017 12:57 PM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [gpfsug-discuss]
Well, this is the pits...</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><font size=3>Hi Kums, </font><br><br><font size=3>Thanks for the info on the releases … can you clarify
about pitWorkerThreadsPerNode? As I said in my original post, on
all 8 NSD servers and the filesystem manager it is set to zero. No
matter how many times I add zero to zero I don’t get a value > 31!
;-) So I take it that zero has some sort of unspecified significance?
Thanks…</font><br><br><font size=3>Kevin</font><br><br><font size=3>On May 4, 2017, at 11:49 AM, Kumaran Rajaram <</font><a href=mailto:kums@us.ibm.com><font size=3 color=blue><u>kums@us.ibm.com</u></font></a><font size=3>>
wrote:</font><br><br><font size=2 face="sans-serif">Hi,</font><font size=3><br></font><font size=3 color=red><br>>>I’m running 4.2.2.3 on my GPFS servers (some clients are on 4.2.1.1
or 4.2.0.3 and are gradually being upgraded). What version of GPFS
fixes this? With what I’m doing I need the ability to run mmrestripefs.</font><font size=3><br></font><font size=2 face="sans-serif"><br>GPFS version 4.2.3.0 (and above) fixes this issue and allows the sum
of pitWorkerThreadsPerNode across the participating nodes (the -N parameter to
mmrestripefs) to exceed 31.</font><font size=3><br></font><font size=2 face="sans-serif"><br>If you are using 4.2.2.3, then depending on the number of nodes participating
in the mmrestripefs, the GPFS config parameter "pitWorkerThreadsPerNode"
needs to be adjusted so that the sum of pitWorkerThreadsPerNode across the
participating nodes is <= 31.</font><font size=3><br></font><font size=2 face="sans-serif"><br>For example, if the "number of nodes participating in the mmrestripefs"
is 6, then set "mmchconfig pitWorkerThreadsPerNode=5 -N <participating_nodes>" (6 x 5 = 30 <= 31).
GPFS needs to be restarted for this parameter to take effect on the
participating nodes (verify with </font><font size=2 face="Courier New">mmfsadm
dump config | grep pitWorkerThreadsPerNode</font><font size=2 face="sans-serif">)</font><font size=3><br></font><font size=2 face="sans-serif"><br>Regards,<br>-Kums</font><font size=3><br><br><br><br><br></font><font size=1 color=#5f5f5f face="sans-serif"><br>From: </font><font size=1 face="sans-serif">"Buterbaugh,
Kevin L" <</font><a href=mailto:Kevin.Buterbaugh@Vanderbilt.Edu><font size=1 color=blue face="sans-serif"><u>Kevin.Buterbaugh@Vanderbilt.Edu</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list <</font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss@spectrumscale.org</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">05/04/2017
12:08 PM</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">Re:
[gpfsug-discuss] Well, this is the pits...</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3><br></font><hr noshade><font size=3><br><br><br>Hi Olaf, <br><br>I didn’t touch pitWorkerThreadsPerNode … it was already zero.<br><br>I’m running 4.2.2.3 on my GPFS servers (some clients are on 4.2.1.1 or
4.2.0.3 and are gradually being upgraded). What version of GPFS fixes
this? With what I’m doing I need the ability to run mmrestripefs.<br><br>It seems to me that mmrestripefs could check whether QOS is enabled …
granted, it would have no way of knowing whether the values used actually
are reasonable or not … but if QOS is enabled then “trust” it to not
overrun the system.<br><br>PMR time? Thanks..<br><br>Kevin<br><br>On May 4, 2017, at 10:54 AM, Olaf Weiser <</font><a href=mailto:olaf.weiser@de.ibm.com><font size=3 color=blue><u>olaf.weiser@de.ibm.com</u></font></a><font size=3>>
wrote:<br></font><font size=2 face="sans-serif"><br>Hi Kevin, <br>the number of NSDs is more or less irrelevant here .. the real constraint is that the number of
nodes x PITWorker threads should not exceed the #mutex/FS-block limit by too much.<br>Did you adjust/tune the PitWorker? ... <br><br>As far as I know, the fact that the code checks the number of NSDs is already considered
a defect and will be fixed / is already fixed (I stepped into it here
as well).<br><br>PS: QOS is the better approach to address this, but unfortunately not
everyone is using it by default... that's why I suspect the developers
decided to put in a check/limit here .. which in your case (with QOS) wouldn't be
needed. </font><font size=3><br><br><br><br></font><font size=1 color=#5f5f5f face="sans-serif"><br><br>From: </font><font size=1 face="sans-serif">"Buterbaugh,
Kevin L" <</font><a href=mailto:Kevin.Buterbaugh@Vanderbilt.Edu><font size=1 color=blue face="sans-serif"><u>Kevin.Buterbaugh@Vanderbilt.Edu</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list <</font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss@spectrumscale.org</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">05/04/2017
05:44 PM</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">Re:
[gpfsug-discuss] Well, this is the pits...</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3><br></font><hr noshade><font size=3><br><br><br>Hi Olaf, <br><br>Your explanation mostly makes sense, but...<br><br>Failed with 4 nodes … failed with 2 nodes … not gonna try with 1 node.
And this filesystem only has 32 disks, which I would imagine is not
an especially large number compared to what some people reading this e-mail
have in their filesystems.<br><br>I thought that QOS (which I’m using) was what would keep an mmrestripefs
from overrunning the system … QOS has worked extremely well for us - it’s
one of my favorite additions to GPFS.<br><br>Kevin<br><br>On May 4, 2017, at 10:34 AM, Olaf Weiser <</font><a href=mailto:olaf.weiser@de.ibm.com><font size=3 color=blue><u>olaf.weiser@de.ibm.com</u></font></a><font size=3>>
wrote:</font><font size=2 face="sans-serif"><br><br>No .. it is just in the code, because we have to avoid running out of mutexes
/ blocks.<br><br>Reducing the number of nodes in -N down to 4 (2 nodes is even safer)
... is the easiest way to solve it for now....<br><br>I've been told the real root cause will be fixed in one of the next PTFs
.. within this year .. <br>This warning message itself should appear every time.. but unfortunately
someone coded it so that it depends on the number of disks (NSDs).. that's why
I suspect you didn't see it before.
But the fact that we have to make sure not to overrun the system with
mmrestripe remains.. so please lower the -N number of nodes to 4
or better 2 <br><br>(even though we know.. that the mmrestripe will take longer).</font><font size=1 color=#5f5f5f face="sans-serif"><br><br><br>From: </font><font size=1 face="sans-serif">"Buterbaugh,
Kevin L" <</font><a href=mailto:Kevin.Buterbaugh@Vanderbilt.Edu><font size=1 color=blue face="sans-serif"><u>Kevin.Buterbaugh@Vanderbilt.Edu</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>To: </font><font size=1 face="sans-serif">gpfsug
main discussion list <</font><a href="mailto:gpfsug-discuss@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss@spectrumscale.org</u></font></a><font size=1 face="sans-serif">></font><font size=1 color=#5f5f5f face="sans-serif"><br>Date: </font><font size=1 face="sans-serif">05/04/2017
05:26 PM</font><font size=1 color=#5f5f5f face="sans-serif"><br>Subject: </font><font size=1 face="sans-serif">[gpfsug-discuss]
Well, this is the pits...</font><font size=1 color=#5f5f5f face="sans-serif"><br>Sent by: </font><a href="mailto:gpfsug-discuss-bounces@spectrumscale.org"><font size=1 color=blue face="sans-serif"><u>gpfsug-discuss-bounces@spectrumscale.org</u></font></a><font size=3><br></font><hr noshade><font size=3><br><br><br>Hi All, <br><br>Another one of those, “I can open a PMR if I need to” type questions…<br><br>We are in the process of combining two large GPFS filesystems into one
new filesystem (for various reasons I won’t get into here). Therefore,
I’m doing a lot of mmrestripe’s, mmdeldisk’s, and mmadddisk’s.<br><br>Yesterday I did an “mmrestripefs <old fs> -r -N <my 8 NSD servers>”
(after suspending a disk, of course). Worked like it should.<br><br>Today I did a “mmrestripefs <new fs> -b -P capacity -N <those
same 8 NSD servers>” and got:<br><br>mmrestripefs: The total number of PIT worker threads of all participating
nodes has been exceeded to safely restripe the file system. The total
number of PIT worker threads, which is the sum of pitWorkerThreadsPerNode
of the participating nodes, cannot exceed 31. Reissue the command
with a smaller set of participating nodes (-N option) and/or lower the
pitWorkerThreadsPerNode configure setting. By default the file system
manager node is counted as a participating node.<br>mmrestripefs: Command failed. Examine previous error messages to determine
cause.<br><br>So there must be some difference in how the “-r” and “-b” options calculate
the number of PIT worker threads. I did an “mmfsadm dump all | grep
pitWorkerThreadsPerNode” on all 8 NSD servers and the filesystem manager
node … they all say the same thing:<br><br> pitWorkerThreadsPerNode 0<br><br>Hmmm, so 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 0 > 31?!? I’m confused...<br><br>—<br>Kevin Buterbaugh - Senior System Administrator<br>Vanderbilt University - Advanced Computing Center for Research and Education</font><font size=3 color=blue><u><br></u></font><a href=mailto:Kevin.Buterbaugh@vanderbilt.edu><font size=3 color=blue><u>Kevin.Buterbaugh@vanderbilt.edu</u></font></a><font size=3>-
(615)875-9633</font><tt><font size=2><br><br><br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at </font></tt><a href=http://spectrumscale.org/><tt><font size=2 color=blue><u>spectrumscale.org</u></font></tt></a><font size=3 color=blue><u><br></u></font><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2 color=blue><u>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</u></font></tt></a><br><BR>
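Pulling together the default formula and the 4.2.2.3 limit discussed in the thread above, here is a small sketch of the arithmetic. This is illustrative only: `default_pit_worker_threads` and `restripe_allowed` are hypothetical helper names, not GPFS code, and the formula is transcribed from Kums's reply.

```python
def default_pit_worker_threads(num_disks, num_participating_nodes):
    """Default used when pitWorkerThreadsPerNode is 0:
    MIN(16, numberOfDisks_in_filesystem * 4 / numberOfParticipatingNodes + 1)."""
    return min(16, (num_disks * 4) // num_participating_nodes + 1)

def restripe_allowed(per_node_settings, num_disks):
    """On 4.2.2.3 the sum of pitWorkerThreadsPerNode over the -N nodes
    (with 0 meaning the computed default) cannot exceed 31."""
    n = len(per_node_settings)
    total = sum(s if s > 0 else default_pit_worker_threads(num_disks, n)
                for s in per_node_settings)
    return total <= 31

# 64 NSDs, 8 NSD servers, all left at the default of 0:
# each node gets MIN(16, 256/8 + 1) = 16, so 8 x 16 = 128 > 31 -> rejected.
print(restripe_allowed([0] * 8, 64))   # False
# With pitWorkerThreadsPerNode=3 on those 8 servers: 8 x 3 = 24 <= 31.
print(restripe_allowed([3] * 8, 64))   # True
```

This shows why Kevin's all-zero configuration still tripped the check: each 0 expands to a computed value of 16, so the sum was 128, not 0.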