<div dir="ltr"><div>Two thoughts:</div><div><br></div><div>1) Has your config data update fully propagated after the mmchnode? We've (rarely) seen some weird stuff happen when that process isn't complete yet, or if a node in question simply didn't get the update (try md5sum'ing the mmsdrfs file on nrg1-gpfs13 and compare to the cluster manager's md5sum, make sure the push process isn't still running, etc.). If you see discrepancies, you could try an mmsdrrestore to get that node back into spec.<br></div><div><br></div><div>2) If everything looks fine; what are the chances you could simply try restarting GPFS on nrg1-gpfs13? Might be particularly interesting to see what the cluster tries to do with the filesystem once that node is down.</div><div><br></div><div>-Jordan<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Jun 22, 2018 at 9:28 AM, Oesterlin, Robert <span dir="ltr"><<a href="mailto:Robert.Oesterlin@nuance.com" target="_blank">Robert.Oesterlin@nuance.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div link="blue" vlink="purple" lang="EN-US">
<div class="m_-6094124250629600547WordSection1">
<p class="MsoNormal"><span style="font-size:12.0pt">Yep. And nrg1-gpfs13 isn’t even a manager node anymore!<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">[root@nrg1-gpfs01 ~]# mmchmgr dataeng nrg1-gpfs05<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">Sending migrate request to current manager node 10.30.43.136 (nrg1-gpfs13).<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">Node 10.30.43.136 (nrg1-gpfs13) resigned as manager for dataeng.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">Node 10.30.43.136 (nrg1-gpfs13) appointed as manager for dataeng.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt"><u></u> <u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">2018-06-22_09:26:08.305-0400: [I] Command: mmchmgr /dev/dataeng <a href="http://nrg1-gpfs05.nrg1.us.grid.nuance.com" target="_blank">nrg1-gpfs05.nrg1.us.grid.<wbr>nuance.com</a><u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">2018-06-22_09:26:09.178-0400: [N] Node 10.30.43.136 (nrg1-gpfs13) resigned as manager for dataeng.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">2018-06-22_09:26:09.179-0400: [N] Node 10.30.43.136 (nrg1-gpfs13) appointed as manager for dataeng.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">2018-06-22_09:26:09.179-0400: [I] Command: successful mmchmgr /dev/dataeng <a href="http://nrg1-gpfs05.nrg1.us.grid.nuance.com" target="_blank">nrg1-gpfs05.nrg1.us.grid.<wbr>nuance.com</a><u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:12.0pt">2018-06-22_09:26:10.116-0400: [I] Node 10.30.43.136 (nrg1-gpfs13) completed take over for dataeng.<u></u><u></u></span></p><span class="">
Bob Oesterlin
Sr Principal Storage Engineer, Nuance

<p class="MsoNormal"><b><span style="font-size:12.0pt;color:black">From: </span></b><span style="font-size:12.0pt;color:black"><<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@<wbr>spectrumscale.org</a>> on behalf of "Buterbaugh, Kevin L" <Kevin.Buterbaugh@Vanderbilt.<wbr>Edu><br>
<b>Reply-To: </b>gpfsug main discussion list <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.<wbr>org</a>><br>
<b>Date: </b>Friday, June 22, 2018 at 8:21 AM<br>
<b>To: </b>gpfsug main discussion list <<a href="mailto:gpfsug-discuss@spectrumscale.org" target="_blank">gpfsug-discuss@spectrumscale.<wbr>org</a>><br>
<b>Subject: </b>[EXTERNAL] Re: [gpfsug-discuss] File system manager - won't change to new node<u></u><u></u></span></p>
</div><span class="">
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<p class="MsoNormal">Hi Bob, <u></u><u></u></p>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Have you tried explicitly moving it to a specific manager node? That’s what I always do … I personally never let GPFS pick when I’m moving the management functions for some reason. Thanks…<u></u><u></u></p>
</div>
<div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
<div>
<p class="MsoNormal">Kevin<u></u><u></u></p>
</div>
<div>
<div>
<p class="MsoNormal"><br>
<br>
<u></u><u></u></p>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</span></div>
</div>
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss