<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Hi,</p>
<p>Tried these settings, but sadly I'm not seeing any changes.</p>
<p>Thanks,</p>
<p>Kenneth<br>
</p>
<br>
<div class="moz-cite-prefix">On 21/04/17 09:25, Olaf Weiser wrote:<br>
</div>
<blockquote
cite="mid:OF80C82C90.0838F149-ONC1258109.0027B72A-C1258109.0028C647@notes.na.collabserv.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<font face="sans-serif" size="2">pls check</font><br>
<font face="sans-serif" size="2">workerThreads (assuming you 're
> 4.2.2) start with 128 .. increase iteratively </font><br>
<font face="sans-serif" size="2">pagepool at least 8 G</font><br>
<font face="sans-serif" size="2">ignorePrefetchLunCount=yes (1) </font><br>
<br>
<font face="sans-serif" size="2">then you won't see a difference
and
GPFS is as fast or even faster .. </font><br>
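<br>
<font face="sans-serif" size="2">As a minimal sketch (assuming a standard Spectrum Scale CLI; verify parameter names and current values with mmlsconfig on your release), those settings would be applied roughly like this:</font><br>
<pre>
# Sketch only -- not a verified recipe. Check each parameter with
# "mmlsconfig" first; some changes need a GPFS restart to take effect.
mmchconfig workerThreads=128          # start at 128, increase iteratively
mmchconfig pagepool=8G                # at least 8 GB on the NSD servers
mmchconfig ignorePrefetchLUNCount=yes
</pre>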
<br>
<br>
<br>
<font face="sans-serif" color="#5f5f5f" size="1">From:
</font><font face="sans-serif" size="1">"Marcus Koenig1"
<a class="moz-txt-link-rfc2396E" href="mailto:marcusk@nz1.ibm.com"><marcusk@nz1.ibm.com></a></font><br>
<font face="sans-serif" color="#5f5f5f" size="1">To:
</font><font face="sans-serif" size="1">gpfsug main discussion
list <a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a></font><br>
<font face="sans-serif" color="#5f5f5f" size="1">Date:
</font><font face="sans-serif" size="1">04/21/2017 03:24 AM</font><br>
<font face="sans-serif" color="#5f5f5f" size="1">Subject:
</font><font face="sans-serif" size="1">Re: [gpfsug-discuss]
bizarre performance behavior</font><br>
<font face="sans-serif" color="#5f5f5f" size="1">Sent by:
</font><font face="sans-serif" size="1"><a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a></font><br>
<hr noshade="noshade"><br>
<br>
<br>
<font size="2">Hi Kennmeth,</font><font size="3"><br>
</font><font size="2"><br>
we also saw similar performance numbers in our tests: native was far
quicker than through GPFS. However, once we learned that the client had
tested performance on a filesystem with a large blocksize (512k) while
using small files, we were able to speed it up significantly by using a
smaller FS blocksize (obviously we had to recreate the FS).</font><font size="3"><br>
</font><font size="2"><br>
So it really depends on how you run your tests.</font>
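<p><font size="2">For illustration only (the device and stanza-file names below are made up), recreating a filesystem with a smaller blocksize looks roughly like this:</font></p>
<pre>
# Hypothetical sketch -- recreating a filesystem DESTROYS its data,
# so back up first. Device and stanza names are placeholders.
mmdelfs gpfs0
mmcrfs gpfs0 -F /tmp/nsd.stanza -B 256K   # e.g. 256 KiB instead of 512 KiB
mmmount gpfs0 -a
</pre>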
<p><font face="Arial" color="#8f8f8f" size="3"><b>Cheers,</b></font><font
size="3"><br>
</font><font face="Arial" color="#8f8f8f" size="3"><b><br>
Marcus Koenig</b></font><font face="Arial" size="2"><br>
Lab Services Storage & Power Specialist</font><font
face="Calibri" size="2"><i><br>
IBM Australia & New Zealand Advanced Technical Skills</i></font><font
face="Arial" size="2"><br>
IBM Systems-Hardware</font>
<table style="border-collapse:collapse;" width="943">
<tbody>
<tr height="8">
<td colspan="3" style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="678">
<hr></td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" valign="top" width="264"><img
src="cid:part1.7EDA808B.31E956D4@ugent.be"
style="border:0px solid;" align="bottom" height="1"
width="1"></td>
</tr>
<tr height="8" valign="top">
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="119"><img
src="cid:part2.41479C5A.179F98EC@ugent.be"
style="border:0px solid;" align="bottom"></td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="219"><font face="Arial"
color="#4181c0" size="1"><b>Mobile:</b></font><font
face="Arial" color="#5f5f5f" size="1">+64 21 67 34 27</font><font
face="Arial" color="#4181c0" size="1"><b><br>
E-mail:</b></font><font face="Arial" color="#5f5f5f"
size="1"> </font><a moz-do-not-send="true"
href="mailto:brendanp@nz1.ibm.com" target="_blank"><font
face="Arial" color="#5f5f5f" size="1"><u>marcusk@nz1.ibm.com</u></font></a>
<p><font face="Arial" color="#5f5f5f" size="1">82
Wyndham Street<br>
Auckland, AUK 1010<br>
New Zealand</font></p>
<div align="center"><font size="3"><br>
<br>
</font></div>
</td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="340">
<div align="center"><img
src="cid:part4.18028789.4C1FE8C0@ugent.be"
style="border:0px solid;" align="bottom"><img
src="cid:part5.A70B7B6A.8CB24499@ugent.be"
style="border:0px solid;" align="bottom" height="52"
width="104"></div>
</td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="264"><img
src="cid:part6.EB015307.03F9D754@ugent.be"
style="border:0px solid;" align="bottom" height="1"
width="1"></td>
</tr>
<tr height="8" valign="top">
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="119"><img
src="cid:part7.9ECB7422.9BBF25B5@ugent.be"
style="border:0px solid;" align="bottom" height="1"
width="1"></td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="219"><img
src="cid:part8.2CF4AF5F.2F335D57@ugent.be"
style="border:0px solid;" align="bottom" height="1"
width="1"></td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="340"><img
src="cid:part9.AB88861E.BB34002F@ugent.be"
style="border:0px solid;" align="bottom" height="1"
width="1"></td>
<td style="border-style:none none none
none;border-color:#000000;border-width:0px 0px 0px
0px;padding:0px 0px;" width="264"><img
src="cid:part10.3D4FAA0F.6F5BBDE9@ugent.be"
style="border:0px solid;" align="bottom" height="1"
width="1"></td>
</tr>
</tbody>
</table>
<br>
<font size="3"><br>
</font><img src="cid:part11.01BDE962.D0774778@ugent.be"
alt="Inactive hide details for "Uwe Falke"
---04/21/2017 03:07:48 AM---Hi Kennmeth, is prefetching off or
on at your storage backe" style="border:0px solid;"><font
color="#424282" size="2">"Uwe
Falke" ---04/21/2017 03:07:48 AM---Hi Kennmeth, is prefetching
off
or on at your storage backend?</font><font size="3"><br>
</font><font color="#5f5f5f" size="2"><br>
From: </font><font size="2">"Uwe Falke"
<a class="moz-txt-link-rfc2396E" href="mailto:UWEFALKE@de.ibm.com"><UWEFALKE@de.ibm.com></a></font><font color="#5f5f5f"
size="2"><br>
To: </font><font size="2">gpfsug main discussion list
<a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a></font><font
color="#5f5f5f" size="2"><br>
Date: </font><font size="2">04/21/2017 03:07 AM</font><font
color="#5f5f5f" size="2"><br>
Subject: </font><font size="2">Re: [gpfsug-discuss] bizarre
performance behavior</font><font color="#5f5f5f" size="2"><br>
Sent by: </font><font size="2"><a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a></font><font
size="3"><br>
</font></p>
<hr noshade="noshade"><font size="3"><br>
<br>
</font><tt><font size="2"><br>
Hi Kenneth,<br>
<br>
is prefetching on or off at your storage backend?<br>
Raw sequential I/O is very different from GPFS sequential I/O at the<br>
storage device!<br>
GPFS does its own prefetching; the storage can never know which sectors a<br>
sequential read at the GPFS level maps to at the storage level!<br>
<br>
<br>
Mit freundlichen Grüßen / Kind regards<br>
<br>
<br>
Dr. Uwe Falke<br>
<br>
IT Specialist<br>
High Performance Computing Services / Integrated Technology
Services /
<br>
Data Center Services<br>
-------------------------------------------------------------------------------------------------------------------------------------------<br>
IBM Deutschland<br>
Rathausstr. 7<br>
09111 Chemnitz<br>
Phone: +49 371 6978 2165<br>
Mobile: +49 175 575 2877<br>
E-Mail: <a class="moz-txt-link-abbreviated" href="mailto:uwefalke@de.ibm.com">uwefalke@de.ibm.com</a><br>
-------------------------------------------------------------------------------------------------------------------------------------------<br>
IBM Deutschland Business & Technology Services GmbH /<br>
Management: Andreas Hasse, Thorsten Moehring<br>
Registered office: Ehningen / Commercial register: Amtsgericht Stuttgart,<br>
HRB 17122<br>
<br>
<br>
<br>
<br>
From: Kenneth Waegeman <a class="moz-txt-link-rfc2396E" href="mailto:kenneth.waegeman@ugent.be"><kenneth.waegeman@ugent.be></a><br>
To: gpfsug main discussion list
<a class="moz-txt-link-rfc2396E" href="mailto:gpfsug-discuss@spectrumscale.org"><gpfsug-discuss@spectrumscale.org></a><br>
Date: 04/20/2017 04:53 PM<br>
Subject: Re: [gpfsug-discuss] bizarre performance
behavior<br>
Sent by: <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a><br>
<br>
<br>
<br>
Hi,<br>
<br>
Having an issue that looks the same as this one:<br>
We can do sequential writes to the filesystem at 7.8 GB/s in total, which<br>
is the expected speed for our current storage backend. While we get even<br>
better performance with sequential reads on the raw storage LUNs, through<br>
GPFS we can only reach 1 GB/s in total (each NSD server seems limited to<br>
0.5 GB/s), independent of the number of clients (1, 2, 4, ...) or the<br>
tools we tested with (fio, dd). We played with blockdev parameters,<br>
maxMBpS, prefetchThreads, hyperthreading, C1E/C-states, ... as discussed<br>
in this thread, but nothing seems to impact this read performance.<br>
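A minimal sketch of this kind of sequential-read test (the path, size and<br>
block size below are hypothetical -- adjust to your environment):<br>
<pre>
# dd variant -- read a large pre-created file sequentially:
dd if=/gpfs/fs0/testfile of=/dev/null bs=8M count=2048 iflag=direct

# Roughly equivalent fio job:
fio --name=seqread --filename=/gpfs/fs0/testfile --rw=read \
    --bs=8M --size=16G --ioengine=libaio --direct=1 --iodepth=16
</pre>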
Any ideas?<br>
Thanks!<br>
<br>
Kenneth<br>
<br>
On 17/02/17 19:29, Jan-Frode Myklebust wrote:<br>
I just had a similar experience with a SanDisk InfiniFlash system<br>
SAS-attached to a single host. gpfsperf reported 3.2 GByte/s for writes,<br>
but only 250-300 MByte/s on sequential reads!! Random reads were on the<br>
order of 2 GByte/s.<br>
<br>
After a bit of head scratching and fumbling around I found out that<br>
reducing maxMBpS from 10000 to 100 fixed the problem! Digging further, I<br>
found that reducing prefetchThreads from the default of 72 to 32 also<br>
fixed it, while leaving maxMBpS at 10000. I can now also read at<br>
3.2 GByte/s.<br>
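In mmchconfig terms, that experiment was roughly the following (a sketch;<br>
check current values with mmlsconfig before changing anything):<br>
<pre>
# Sketch of the two alternative fixes described above -- not a general
# recommendation; tune for your own hardware.
mmchconfig maxMBpS=100 -i        # fix 1: throttle GPFS prefetch aggressiveness
# ...or, leaving maxMBpS at 10000:
mmchconfig prefetchThreads=32    # fix 2: reduce from the default of 72
                                 # (may need a GPFS restart to take effect)
</pre>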
<br>
Could something like this be the problem on your box as well?<br>
<br>
<br>
<br>
-jf<br>
On Fri, 17 Feb 2017 at 18:13, Aaron Knister
<<a class="moz-txt-link-abbreviated" href="mailto:aaron.s.knister@nasa.gov">aaron.s.knister@nasa.gov</a>> wrote:<br>
Well, I'm somewhat scrounging for hardware -- this is in our test<br>
environment :) And yep, it's got the 2U GPU tray in it, although even<br>
without the riser it has 2 PCIe slots onboard (excluding the on-board<br>
dual-port mezz card), so I think it would make a fine NSD server even<br>
without the riser.<br>
<br>
-Aaron<br>
<br>
On 2/17/17 11:43 AM, Simon Thompson (Research Computing - IT
Services)<br>
wrote:<br>
> Maybe it's related to interrupt handlers somehow? You drive the load up<br>
> on one socket, and you push all the interrupt handling to the other<br>
> socket where the fabric card is attached?<br>
><br>
> Dunno ... (Though I am intrigued that you use iDataPlex nodes as NSD<br>
> servers; I assume it's the 2U GPU-tray riser one or something!)<br>
><br>
> Simon<br>
> ________________________________________<br>
> From: <a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a> [<br>
<a class="moz-txt-link-abbreviated" href="mailto:gpfsug-discuss-bounces@spectrumscale.org">gpfsug-discuss-bounces@spectrumscale.org</a>] on behalf of Aaron
Knister [<br>
<a class="moz-txt-link-abbreviated" href="mailto:aaron.s.knister@nasa.gov">aaron.s.knister@nasa.gov</a>]<br>
> Sent: 17 February 2017 15:52<br>
> To: gpfsug main discussion list<br>
> Subject: [gpfsug-discuss] bizarre performance behavior<br>
><br>
> This is a good one. I've got an NSD server with 4x 16Gb Fibre Channel<br>
> connections coming in and 1x FDR10 and 1x QDR connection going out to<br>
> the clients. I was having a really hard time getting anything resembling<br>
> sensible performance out of it (4-5Gb/s writes but maybe 1.2Gb/s for<br>
> reads). The back-end is a DDN SFA12K and I *know* it can do better than<br>
> that.<br>
><br>
> I don't quite remember how I figured this out, but simply by running<br>
> "openssl speed -multi 16" on the NSD server to drive up the load I saw<br>
> an almost 4x performance jump, which pretty much goes against every<br>
> sysadmin fiber in me (i.e. "drive up the CPU load with unrelated crap to<br>
> quadruple your I/O performance").<br>
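A sketch of the experiment described above (the read test is illustrative<br>
only; substitute whatever benchmark you were running):<br>
<pre>
# Terminal 1 on the NSD server: burn CPU on all 16 cores
openssl speed -multi 16

# Terminal 2 (illustrative): rerun the read benchmark and compare
dd if=/gpfs/fs0/testfile of=/dev/null bs=8M iflag=direct
</pre>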
><br>
> This feels like some kind of C-state/frequency-scaling shenanigans that<br>
> I haven't quite pinned down yet. I booted the box with the kernel<br>
> parameters "intel_idle.max_cstate=0 processor.max_cstate=0", which<br>
> didn't seem to make much of a difference. I also tried setting the<br>
> frequency governor to userspace and setting the minimum frequency to<br>
> 2.6GHz (it's a 2.6GHz CPU). None of that really matters -- I still have<br>
> to run something to drive up the CPU load, and then performance improves.<br>
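The settings described above correspond roughly to the following (a<br>
sketch using the common cpupower tool; availability of the userspace<br>
governor depends on the distro and frequency driver):<br>
<pre>
# Kernel command line used for the test, as described above:
#   intel_idle.max_cstate=0 processor.max_cstate=0

# Pin the CPU frequency via the userspace governor:
cpupower frequency-set -g userspace
cpupower frequency-set -f 2.6GHz

# Inspect which C-states the cores can enter:
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
</pre>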
><br>
> I'm wondering if this could be an issue with the C1E
state? I'm curious<br>
> if anyone has seen anything like this. The node is a
dx360 M4<br>
> (Sandybridge) with 16 2.6GHz cores and 32GB of RAM.<br>
><br>
> -Aaron<br>
><br>
> --<br>
> Aaron Knister<br>
> NASA Center for Climate Simulation (Code 606.2)<br>
> Goddard Space Flight Center<br>
> (301) 286-2776<br>
><br>
<br>
--<br>
Aaron Knister<br>
NASA Center for Climate Simulation (Code 606.2)<br>
Goddard Space Flight Center<br>
(301) 286-2776<br>
<br>
<br>
<br>
<br>
</font></tt><font size="3"><br>
<br>
<br>
</font><tt><font size="2">_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at spectrumscale.org<br>
</font></tt><a moz-do-not-send="true"
href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font
size="2">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font
size="2"><br>
</font></tt><br>
<br>
<br>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
<a class="moz-txt-link-freetext" href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a>
</pre>
</blockquote>
<br>
</body>
</html>