<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
        {font-family:Tahoma;
        panose-1:2 11 6 4 3 5 4 4 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        font-size:12.0pt;
        font-family:"Times New Roman","serif";}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:blue;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:purple;
        text-decoration:underline;}
p
        {mso-style-priority:99;
        mso-margin-top-alt:auto;
        margin-right:0cm;
        mso-margin-bottom-alt:auto;
        margin-left:0cm;
        font-size:12.0pt;
        font-family:"Times New Roman","serif";}
tt
        {mso-style-priority:99;
        font-family:"Courier New";}
p.MsoAcetate, li.MsoAcetate, div.MsoAcetate
        {mso-style-priority:99;
        mso-style-link:"Sprechblasentext Zchn";
        margin:0cm;
        margin-bottom:.0001pt;
        font-size:8.0pt;
        font-family:"Tahoma","sans-serif";}
span.E-MailFormatvorlage19
        {mso-style-type:personal-reply;
        font-family:"Calibri","sans-serif";
        color:#1F497D;}
span.SprechblasentextZchn
        {mso-style-name:"Sprechblasentext Zchn";
        mso-style-priority:99;
        mso-style-link:Sprechblasentext;
        font-family:"Tahoma","sans-serif";}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="DE-CH" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">For „1“ we use the quorum node to do “start disk” or “restripe file system” (quorum node without disks).<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">For “2” we use kernel NFS with cNFS<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">I used the command “cnfsNFSDprocs 64” to set the NFS threads. Is this correct?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">gpfs01:~ # cat /proc/fs/nfsd/threads
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">64<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">I will verify the settings in our lab, will use the following configuration:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">mmchconfig worker1Threads=128<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">mmchconfig prefetchThreads=128<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">mmchconfig nsdMaxWorkerThreads=128<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">mmchconfig cnfsNFSDprocs=256<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">daniel<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-GB" style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal"><b><span lang="DE" style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">Von:</span></b><span lang="DE" style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">
<a href="mailto:gpfsug-discuss-bounces@gpfsug.org">gpfsug-discuss-bounces@gpfsug.org</a>
<a href="mailto:[mailto:gpfsug-discuss-bounces@gpfsug.org]">[mailto:gpfsug-discuss-bounces@gpfsug.org]</a>
<b>Im Auftrag von </b>Sven Oehme<br>
<b>Gesendet:</b> Samstag, 4. </span><span lang="EN-GB" style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">Juli 2015 00:49<br>
<b>An:</b> gpfsug main discussion list<br>
<b>Betreff:</b> Re: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal"><span lang="EN-GB"><o:p> </o:p></span></p>
<p><span lang="EN-GB">this triggers a few questions<br>
<br>
1. have you tried running it only on a node that doesn't serve NFS data ? <br>
2. what NFS stack are you using ? is this the kernel NFS Server as part of linux means you use cNFS ?
<br>
<br>
if the answer to 2 is yes, have you adjusted the nfsd threads in /etc/sysconfig/nfs ? the default is only 8 and if you run with the default you have a very low number of threads from the outside competing with a larger number of threads doing restripe, increasing
 the nfsd threads could help. you could also reduce the number of internal restripe threads to try out if that helps mitigating the impact.<br>
<br>
To try an extremely low value, set the following:<br>
<br>
mmchconfig pitWorkerThreadsPerNode=1 -i <br>
<br>
and retry the restripe. To reset it back to the default, run<br>
<br>
mmchconfig pitWorkerThreadsPerNode=DEFAULT -i <br>
<br>
sven<br>
<br>
------------------------------------------<br>
Sven Oehme <br>
Scalable Storage Research <br>
email: </span><a href="mailto:oehmes@us.ibm.com"><span lang="EN-GB">oehmes@us.ibm.com</span></a><span lang="EN-GB">
<br>
Phone: +1 (408) 824-8904 <br>
IBM Almaden Research Lab <br>
------------------------------------------<br>
<br>
</span><img border="0" width="16" height="16" id="Bild_x0020_1" src="cid:image001.gif@01D0BB27.B4CC9750" alt="Beschreibung: Inactive hide details for Daniel Vogel ---07/02/2015 12:12:46 AM---Sven, Yes I agree, but “using –N” to reduce the load help"><span lang="EN-GB" style="color:#424282">Daniel
 Vogel ---07/02/2015 12:12:46 AM---Sven, Yes I agree, but “using –N” to reduce the load helps not really. If I use NFS, for example, as</span><span lang="EN-GB"><br>
<br>
</span><span lang="EN-GB" style="font-size:10.0pt;color:#5F5F5F">From: </span><span lang="EN-GB" style="font-size:10.0pt">Daniel Vogel <</span><a href="mailto:Daniel.Vogel@abcsystems.ch"><span lang="EN-GB" style="font-size:10.0pt">Daniel.Vogel@abcsystems.ch</span></a><span lang="EN-GB" style="font-size:10.0pt">></span><span lang="EN-GB"><br>
</span><span lang="EN-GB" style="font-size:10.0pt;color:#5F5F5F">To: </span><span lang="EN-GB" style="font-size:10.0pt">"'gpfsug main discussion list'" <</span><a href="mailto:gpfsug-discuss@gpfsug.org"><span lang="EN-GB" style="font-size:10.0pt">gpfsug-discuss@gpfsug.org</span></a><span lang="EN-GB" style="font-size:10.0pt">></span><span lang="EN-GB"><br>
</span><span lang="EN-GB" style="font-size:10.0pt;color:#5F5F5F">Date: </span><span lang="EN-GB" style="font-size:10.0pt">07/02/2015 12:12 AM</span><span lang="EN-GB"><br>
</span><span lang="EN-GB" style="font-size:10.0pt;color:#5F5F5F">Subject: </span>
<span lang="EN-GB" style="font-size:10.0pt">Re: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?</span><span lang="EN-GB"><br>
</span><span style="font-size:10.0pt;color:#5F5F5F">Sent by: </span><a href="mailto:gpfsug-discuss-bounces@gpfsug.org"><span style="font-size:10.0pt">gpfsug-discuss-bounces@gpfsug.org</span></a><o:p></o:p></p>
<div>
<div class="MsoNormal">
<hr size="2" width="100%" noshade="" style="color:#8091A5" align="left">
</div>
</div>
<p class="MsoNormal"><span lang="EN-GB"><br>
<br>
<br>
</span><span lang="EN-GB" style="font-family:"Calibri","sans-serif";color:#1F497D">Sven,</span><span lang="EN-GB"><br>
<br>
</span><span lang="EN-GB" style="font-family:"Calibri","sans-serif";color:#1F497D">Yes I agree, but “using –N” to reduce the load helps not really. If I use NFS, for example, as a ESX data store, ESX I/O latency for NFS goes very high, the VM’s hangs. By the
 way I use SSD PCIe cards, perfect “mirror speed” but slow I/O on NFS.</span><span lang="EN-GB"><br>
</span><span lang="EN-GB" style="font-family:"Calibri","sans-serif";color:#1F497D">The GPFS cluster concept I use are different than GSS or traditional FC (shared storage). I use shared nothing with IB (no FPO), many GPFS nodes with NSD’s. I know the need to
 resync the FS with mmchdisk / mmrestripe will happen more often. The only one feature will help is QoS for the GPFS admin jobs. I hope we are not fare away from this.</span><span lang="EN-GB"><br>
<br>
</span><span lang="EN-GB" style="font-family:"Calibri","sans-serif";color:#1F497D">Thanks,</span><span lang="EN-GB"><br>
</span><span lang="EN-GB" style="font-family:"Calibri","sans-serif";color:#1F497D">Daniel</span><span lang="EN-GB"><br>
<br>
<br>
</span><b><span lang="EN-GB" style="font-family:"Tahoma","sans-serif"">Von:</span></b><span lang="EN-GB" style="font-family:"Tahoma","sans-serif"">
</span><a href="mailto:gpfsug-discuss-bounces@gpfsug.org"><span lang="EN-GB" style="font-family:"Tahoma","sans-serif"">gpfsug-discuss-bounces@gpfsug.org</span></a><span lang="EN-GB" style="font-family:"Tahoma","sans-serif""> [</span><a href="mailto:gpfsug-discuss-bounces@gpfsug.org"><span lang="EN-GB" style="font-family:"Tahoma","sans-serif"">mailto:gpfsug-discuss-bounces@gpfsug.org</span></a><span lang="EN-GB" style="font-family:"Tahoma","sans-serif"">]
<b>On Behalf Of </b>Sven Oehme<b><br>
Sent:</b> Wednesday, 1 July 2015 16:21<b><br>
To:</b> gpfsug main discussion list<b><br>
Subject:</b> Re: [gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?</span><span lang="EN-GB"><o:p></o:p></span></p>
<p><span lang="EN-GB" style="font-size:13.5pt">Daniel,<br>
<br>
As you know, we can't discuss future / confidential items on a mailing list.<br>
What I presented as an outlook to future releases hasn't changed from a technical standpoint; we just can't share a release date until we announce it officially.<br>
There are multiple ways today to limit the impact of a restripe and other tasks. The best way is to run the task (using -N) on a node, or a very small number of nodes, that has no performance-critical role (see the sketch after the quoted header below). While this is not perfect, it should limit
 the impact significantly.<br>
<br>
sven<br>
<br>
------------------------------------------<br>
Sven Oehme <br>
Scalable Storage Research <br>
email: </span><a href="mailto:oehmes@us.ibm.com"><span lang="EN-GB" style="font-size:13.5pt">oehmes@us.ibm.com</span></a><span lang="EN-GB" style="font-size:13.5pt">
<br>
Phone: +1 (408) 824-8904 <br>
IBM Almaden Research Lab <br>
------------------------------------------<br>
<br>
</span><img border="0" width="16" height="16" id="Bild_x0020_3" src="cid:image001.gif@01D0BB27.B4CC9750" alt="Beschreibung: Inactive hide details for Daniel Vogel ---07/01/2015 03:29:11 AM---Hi Years ago, IBM made some plan to do a implementation "QoS"><span lang="EN-GB" style="font-size:13.5pt;color:#424282">Daniel
 Vogel ---07/01/2015 03:29:11 AM---Hi Years ago, IBM made some plan to do a implementation "QoS for mmrestripefs, mmdeldisk...". If a "</span><span lang="EN-GB" style="font-size:13.5pt"><br>
</span><span lang="EN-GB" style="color:#5F5F5F"><br>
From: </span><span lang="EN-GB">Daniel Vogel <</span><a href="mailto:Daniel.Vogel@abcsystems.ch"><span lang="EN-GB">Daniel.Vogel@abcsystems.ch</span></a><span lang="EN-GB">><span style="color:#5F5F5F"><br>
To: </span>"'gpfsug-discuss@gpfsug.org'" <</span><a href="mailto:gpfsug-discuss@gpfsug.org"><span lang="EN-GB">gpfsug-discuss@gpfsug.org</span></a><span lang="EN-GB">><span style="color:#5F5F5F"><br>
Date: </span>07/01/2015 03:29 AM<span style="color:#5F5F5F"><br>
Subject: </span>[gpfsug-discuss] GPFS 4.1.1 without QoS for mmrestripefs?<span style="color:#5F5F5F"><br>
</span></span><span style="color:#5F5F5F">Sent by: </span><a href="mailto:gpfsug-discuss-bounces@gpfsug.org">gpfsug-discuss-bounces@gpfsug.org</a><o:p></o:p></p>
<div>
<div class="MsoNormal">
<hr size="2" width="100%" noshade="" style="color:gray" align="left">
</div>
</div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-GB"><br>
</span><span lang="EN-GB" style="font-size:13.5pt"><br>
<br>
</span><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
Hi</span><span lang="EN-GB" style="font-size:13.5pt"><br>
</span><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
Years ago, IBM made plans to implement "QoS for mmrestripefs, mmdeldisk…". If an "mmrestripefs" is running, NFS access performance is very poor.<br>
I opened a PMR to ask for QoS in version 4.1.1 (Spectrum Scale).</span><span lang="EN-GB" style="font-size:13.5pt"><br>
</span><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
PMR 61309,113,848:</span><i><span lang="EN-GB" style="font-size:13.5pt;font-family:"Arial","sans-serif""><br>
I discussed the question of QOS with the development team. These <br>
command changes that were noticed are not meant to be used as GA code<br>
which is why they are not documented. I cannot provide any further <br>
information from the support perspective. </span></i><span lang="EN-GB" style="font-size:13.5pt"><br>
<br>
</span><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
Does anybody know anything about QoS? The last hope was the "GPFS Workshop Stuttgart" in March 2015, with Sven Oehme as speaker.</span><span lang="EN-GB" style="font-size:13.5pt"><br>
</span><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
Daniel Vogel<br>
IT Consultant</span><span lang="EN-GB" style="font-size:13.5pt"><br>
</span><b><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
ABC SYSTEMS AG</span></b><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif""><br>
Head office Zurich<br>
Rütistrasse 28<br>
CH - 8952 Schlieren<br>
T +41 43 433 6 433<br>
D +41 43 433 6 467<u><span style="color:blue"><br>
</span></u></span><a href="http://www.abcsystems.ch/"><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif"">http://www.abcsystems.ch</span></a><span lang="EN-GB" style="font-size:13.5pt"><br>
</span><b><i><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1F497D"><br>
ABC - A</span></i></b><i><span lang="EN-GB" style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1F497D">lways
<b>B</b>etter <b>C</b>oncepts. <b>A</b>pproved <b>B</b>y <b>C</b>ustomers since 1981.</span></i><span lang="EN-GB" style="font-family:"Courier New""><br>
_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at gpfsug.org<u><span style="color:blue"><br>
</span></u></span><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><span lang="EN-GB" style="font-family:"Courier New"">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</span></a><span lang="EN-GB" style="font-size:13.5pt"><br>
<br>
</span><tt><span lang="EN-GB" style="font-size:10.0pt">_______________________________________________</span></tt><span lang="EN-GB" style="font-size:10.0pt;font-family:"Courier New""><br>
<tt>gpfsug-discuss mailing list</tt><br>
<tt>gpfsug-discuss at gpfsug.org</tt><br>
</span><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><span lang="EN-GB" style="font-size:10.0pt;font-family:"Courier New"">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</span></a><span lang="EN-GB" style="font-size:10.0pt;font-family:"Courier New""><br>
</span><span lang="EN-GB"><br>
<br>
<o:p></o:p></span></p>
</div>
</body>
</html>