Hi,

We're having an issue that looks the same as the one in this thread:

We can do sequential writes to the filesystem at 7.8 GB/s total, which is
the expected speed for our current storage backend. While we get even better
performance with sequential reads on the raw storage LUNs, through GPFS we
can only reach 1 GB/s in total (each NSD server seems limited to 0.5 GB/s),
independent of the number of clients (1, 2, 4, ...) or how we tested
(fio, dd). We played with blockdev parameters, maxMBpS, prefetchThreads,
hyperthreading, C1E/C-states, etc. as discussed in this thread, but nothing
seems to impact the read performance.
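
For reference, the read tests were along these lines (illustrative only: the
directory, file size and job count below are placeholders, not our exact
parameters):

  # sequential read with fio
  fio --name=seqread --directory=/gpfs/fs0/fiotest --rw=read \
      --ioengine=libaio --direct=1 --bs=16m --size=64g \
      --numjobs=8 --iodepth=16 --group_reporting

  # or with dd against a large pre-created file
  dd if=/gpfs/fs0/fiotest/bigfile of=/dev/null bs=16M iflag=direct

The same jobs with --rw=write reach the expected throughput; only reads are
capped around 1 GB/s in total.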

Any ideas?

Thanks!

Kenneth
<div class="moz-cite-prefix">On 17/02/17 19:29, Jan-Frode Myklebust
wrote:<br>
</div>
<blockquote
cite="mid:CAHwPatjtV0dc4i9bH_Os8-4GAO5w7uBHmYeTdnk3p93ZmhmZyg@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">

I just had a similar experience with a SanDisk InfiniFlash system
SAS-attached to a single host. gpfsperf reported 3.2 GByte/s for writes,
but only 250-300 MByte/s on sequential reads!! Random reads were on the
order of 2 GByte/s.

After a bit of head scratching and fumbling around I found out that
reducing maxMBpS from 10000 to 100 fixed the problem! Digging further I
found that reducing prefetchThreads from the default of 72 to 32 also
fixed it, while leaving maxMBpS at 10000. I can now also read at
3.2 GByte/s.
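
In case it's useful, the changes were applied with mmchconfig, roughly like
this (a sketch only -- the node name is a placeholder, and a prefetchThreads
change typically only takes effect after the GPFS daemon is restarted on the
affected nodes):

  # cap prefetch bandwidth (drop -N to apply cluster-wide)
  mmchconfig maxMBpS=100 -N nsdserver01

  # alternative that also worked, with maxMBpS left at 10000
  mmchconfig prefetchThreads=32 -N nsdserver01

  # verify the current value
  mmlsconfig maxMBpS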

Could something like this be the problem on your box as well?

-jf
<div class="gmail_quote">
<div dir="ltr">fre. 17. feb. 2017 kl. 18.13 skrev Aaron Knister
<<a moz-do-not-send="true"
href="mailto:aaron.s.knister@nasa.gov">aaron.s.knister@nasa.gov</a>>:<br>
</div>

Well, I'm somewhat scrounging for hardware. This is in our test
environment :) And yep, it's got the 2U GPU tray in it, although even
without the riser it has two PCIe slots on board (excluding the on-board
dual-port mezzanine card), so I think it would make a fine NSD server
even without the riser.

-Aaron

On 2/17/17 11:43 AM, Simon Thompson (Research Computing - IT Services) wrote:
> Maybe it's related to interrupt handlers somehow? You drive the load up
> on one socket, and push all the interrupt handling to the other socket
> where the fabric card is attached?
>
> Dunno ... (Though I am intrigued you use iDataPlex nodes as NSD servers,
> I assume it's some 2U gpu-tray riser one or something!)
>
> Simon
> ________________________________________
> From: gpfsug-discuss-bounces@spectrumscale.org
> [gpfsug-discuss-bounces@spectrumscale.org] on behalf of Aaron Knister
> [aaron.s.knister@nasa.gov]
> Sent: 17 February 2017 15:52
> To: gpfsug main discussion list
> Subject: [gpfsug-discuss] bizarre performance behavior
>
> This is a good one. I've got an NSD server with 4x 16Gb fibre
> connections coming in and 1x FDR10 and 1x QDR connection going out to
> the clients. I was having a really hard time getting anything resembling
> sensible performance out of it (4-5 Gb/s writes but maybe 1.2 Gb/s for
> reads). The back-end is a DDN SFA12K and I *know* it can do better
> than that.
>
> I don't remember quite how I figured this out, but simply by running
> "openssl speed -multi 16" on the NSD server to drive up the load I saw
> an almost 4x performance jump, which pretty much goes against every
> sysadmin fiber in me (i.e. "drive up the cpu load with unrelated crap
> to quadruple your i/o performance").
>
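> Roughly the pattern, for illustration only (the file path is a
> placeholder, not the actual test file):
>
>   # generate unrelated CPU load in the background
>   openssl speed -multi 16 >/dev/null 2>&1 &
>   # sequential read throughput jumps while the load is running
>   dd if=/gpfs/fs0/bigfile of=/dev/null bs=16M iflag=direct
>   # stop the load generator
>   kill %1
>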
> This feels like some type of C-states frequency scaling shenanigans
> that I haven't quite ironed down yet. I booted the box with the
> following kernel parameters "intel_idle.max_cstate=0
> processor.max_cstate=0", which didn't seem to make much of a
> difference. I also tried setting the frequency governor to userspace
> and setting the minimum frequency to 2.6GHz (it's a 2.6GHz CPU). None
> of that really matters -- I still have to run something to drive up
> the CPU load before performance improves.
>
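> (For reference, the governor/frequency settings were of this form --
> illustrative commands, not necessarily the exact ones used:
>
>   cpupower frequency-set --governor userspace
>   cpupower frequency-set --min 2.6GHz
>
> plus "intel_idle.max_cstate=0 processor.max_cstate=0" on the kernel
> command line, none of which changed the behavior.)
>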
> I'm wondering if this could be an issue with the C1E state? I'm curious
> whether anyone has seen anything like this. The node is a dx360 M4
> (Sandy Bridge) with 16 2.6GHz cores and 32GB of RAM.
>
> -Aaron
>
> --
> Aaron Knister
> NASA Center for Climate Simulation (Code 606.2)
> Goddard Space Flight Center
> (301) 286-2776

--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss