<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:Tahoma;
panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.MsoAcetate, li.MsoAcetate, div.MsoAcetate
{mso-style-priority:99;
mso-style-link:"Balloon Text Char";
margin:0cm;
margin-bottom:.0001pt;
font-size:8.0pt;
font-family:"Tahoma","sans-serif";}
p.MsoListParagraph, li.MsoListParagraph, div.MsoListParagraph
{mso-style-priority:34;
margin-top:0cm;
margin-right:0cm;
margin-bottom:0cm;
margin-left:36.0pt;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";}
span.EmailStyle17
{mso-style-type:personal-compose;
font-family:"Calibri","sans-serif";
color:windowtext;}
span.BalloonTextChar
{mso-style-name:"Balloon Text Char";
mso-style-priority:99;
mso-style-link:"Balloon Text";
font-family:"Tahoma","sans-serif";}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";}
@page WordSection1
{size:612.0pt 792.0pt;
margin:72.0pt 72.0pt 72.0pt 72.0pt;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:435097765;
mso-list-type:hybrid;
mso-list-template-ids:2119967786 67698703 67698713 67698715 67698703 67698713 67698715 67698703 67698713 67698715;}
@list l0:level1
{mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level2
{mso-level-number-format:alpha-lower;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level3
{mso-level-number-format:roman-lower;
mso-level-tab-stop:none;
mso-level-number-position:right;
text-indent:-9.0pt;}
@list l0:level4
{mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level5
{mso-level-number-format:alpha-lower;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level6
{mso-level-number-format:roman-lower;
mso-level-tab-stop:none;
mso-level-number-position:right;
text-indent:-9.0pt;}
@list l0:level7
{mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level8
{mso-level-number-format:alpha-lower;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;}
@list l0:level9
{mso-level-number-format:roman-lower;
mso-level-tab-stop:none;
mso-level-number-position:right;
text-indent:-9.0pt;}
ol
{margin-bottom:0cm;}
ul
{margin-bottom:0cm;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal">Hello GPFS Team,<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">We are observing strange behavior of GPFS during startup on a SLES12 node.
<o:p></o:p></p>
<p class="MsoNormal">In our test cluster, we reinstalled the VLP1 node with SLES 12 SP3 as its base OS. When GPFS starts for the first time on this node, it complains about<o:p></o:p></p>
<p class="MsoNormal">too few NSD threads:<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">..<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:28.947+0100: GPFS: 6027-310 [I] mmfsd initializing. {Version: 4.2.3.7 Built: Feb 15 2018 11:38:38} ...<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:28.947+0100: [I] Cleaning old shared memory ...<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:28.947+0100: [I] First pass parsing mmfs.cfg ...<o:p></o:p></p>
<p class="MsoNormal">..<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:29.375+0100: [I] Initializing the cluster manager ...<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:29.523+0100: [I] Initializing the token manager ...<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:29.524+0100: [I] Initializing network shared disks ...<o:p></o:p></p>
<p class="MsoNormal"><b><u>2018-03-16_13:11:29.626+0100: [E] NSD thread configuration needs 413 more threads, exceeds max thread count 1024<o:p></o:p></u></b></p>
<p class="MsoNormal">2018-03-16_13:11:29.628+0100: GPFS: 6027-311 [N] mmfsd is shutting down.<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:29.628+0100: [N] Reason for shutdown: Could not initialize network shared disks<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:29.633+0100: [E] processStart: fork: err 11<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:30.701+0100: runmmfs starting<o:p></o:p></p>
<p class="MsoNormal">Removing old /var/adm/ras/mmfs.log.* files:<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:11:30.713+0100 runmmfs: respawn 32 waiting 336 seconds before restarting mmfsd<o:p></o:p></p>
<p class="MsoNormal">2018-03-16_13:13:13.298+0100: [I] Calling user exit script mmSdrBackup: event mmSdrBackup, async command /var/mmfs/etc/mmsdrbackup<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">GPFS enters a loop and tries to respawn mmfsd periodically:<o:p></o:p></p>
<p class="MsoNormal"><b><u>2018-03-16_13:11:30.713+0100 runmmfs: respawn 32 waiting 336 seconds before restarting mmfsd<o:p></o:p></u></b></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">It seems that this issue can be worked around by running mmshutdown; when we then start GPFS manually with mmstartup, the problem is gone.<o:p></o:p></p>
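<p class="MsoNormal">For reference, the manual workaround described above, sketched as shell commands (the mmlsconfig check is our own addition for diagnosis; the link between the [E] message and the nsdMaxWorkerThreads tunable is our assumption, not confirmed by the log). Note that errno 11 from fork is EAGAIN, which fork(2) returns when a process/thread limit has been reached:<o:p></o:p></p>

```
# Stop the respawn loop on the affected node (run as root on VLP1)
mmshutdown -N VLP1.cs-intern

# Optionally inspect the NSD worker thread configuration before restarting
# (assumption: the "NSD thread configuration needs 413 more threads" error
# relates to this tunable)
mmlsconfig nsdMaxWorkerThreads

# Start GPFS manually; in our tests the daemon then comes up cleanly
mmstartup -N VLP1.cs-intern
```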
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">We are running GPFS 4.2.3.7, and all nodes except VLP1 run SLES11 SP4; only on VLP1 did we install SLES12 SP3.<o:p></o:p></p>
<p class="MsoNormal">The test cluster looks as below:<o:p></o:p></p>
<p class="MsoNormal">Node Daemon node name IP address Admin node name Designation<o:p></o:p></p>
<p class="MsoNormal">-----------------------------------------------------------------------<o:p></o:p></p>
<p class="MsoNormal"> 1 VLP0.cs-intern 192.168.101.210 VLP0.cs-intern quorum-manager-snmp_collector<o:p></o:p></p>
<p class="MsoNormal"> 2 VLP1.cs-intern 192.168.101.211 VLP1.cs-intern quorum-manager<o:p></o:p></p>
<p class="MsoNormal"> 3 TBP0.cs-intern 192.168.101.215 TBP0.cs-intern quorum<o:p></o:p></p>
<p class="MsoNormal"> 4 IDP0.cs-intern 192.168.101.110 IDP0.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 5 IDP1.cs-intern 192.168.101.111 IDP1.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 6 IDP2.cs-intern 192.168.101.112 IDP2.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 7 IDP3.cs-intern 192.168.101.113 IDP3.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 8 ICP0.cs-intern 192.168.101.10 ICP0.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 9 ICP1.cs-intern 192.168.101.11 ICP1.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 10 ICP2.cs-intern 192.168.101.12 ICP2.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 11 ICP3.cs-intern 192.168.101.13 ICP3.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 12 ICP4.cs-intern 192.168.101.14 ICP4.cs-intern<o:p></o:p></p>
<p class="MsoNormal"> 13 ICP5.cs-intern 192.168.101.15 ICP5.cs-intern<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">We enabled traces and reproduced the issue as follows:<o:p></o:p></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo1"><![if !supportLists]><span style="mso-list:Ignore">1.<span style="font:7.0pt "Times New Roman"">
</span></span><![endif]>While the GPFS daemon was in the respawn loop, we started traces; all files from this period can be found in the uploaded archive under the
<b><u>1_nsd_threads_problem</u></b> directory.<o:p></o:p></p>
<p class="MsoListParagraph" style="text-indent:-18.0pt;mso-list:l0 level1 lfo1"><![if !supportLists]><span style="mso-list:Ignore">2.<span style="font:7.0pt "Times New Roman"">
</span></span><![endif]>We manually stopped the respawn loop on VLP1 by executing mmshutdown, then started GPFS manually with mmstartup. All traces from this run can be found in the archive under the
<b><u>2_mmshutdown_mmstartup</u></b> directory.<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">All data related to this problem has been uploaded to our FTP server:<o:p></o:p></p>
<p class="MsoNormal"><a href="ftp://ftp.ts.fujitsu.com/CS-Diagnose/IBM">ftp.ts.fujitsu.com/CS-Diagnose/IBM</a>, (fe_cs_oem, 12Monkeys) item435_nsd_threads.tar.gz<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Could you please have a look at this problem?<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal">Best regards,<o:p></o:p></p>
<p class="MsoNormal">Tomasz Wolski<o:p></o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
<p class="MsoNormal"><o:p> </o:p></p>
</div>
</body>
</html>