<div class="moz-cite-prefix">On 21/04/17 15:10, Knister, Aaron S.
(GSFC-606.2)[COMPUTER SCIENCE CORP] wrote:<br>
</div>
<blockquote cite="mid:BF39DDE5-4E08-4F3E-985A-42E5576E8BA3@nasa.gov"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<div dir="ltr">Fantastic news! It might also be worth running
"cpupower monitor" or "turbostat" on your NSD servers while
you're running dd tests from the clients to see what CPU
frequency your cores are actually running at.
</div>
</blockquote>

Thanks! I verified with turbostat and cpuinfo: our CPUs are running in
high-performance mode and the frequency is always at the highest level.
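
For reference, the check was essentially the following; the sampling
interval is just an example:

    # on each NSD server, while a client runs its dd test:
    turbostat --interval 5      # per-core actual MHz, C-state residency
    grep MHz /proc/cpuinfo      # quick snapshot of current frequencies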
<blockquote cite="mid:BF39DDE5-4E08-4F3E-985A-42E5576E8BA3@nasa.gov"
type="cite">
<div dir="ltr">
<div dir="ltr"><br>
</div>
<div dir="ltr">A typical NSD server workload (especially with IB
verbs and for reads) can be pretty light on CPU which might
not prompt your CPU crew governor to up the frequency (which
can affect throughout). If your frequency scaling governor
isn't kicking up the frequency of your CPUs I've seen that
cause this behavior in my testing.
<div dir="ltr"><br>
</div>
<div dir="ltr">-Aaron</div>
</div>
</div>
<span id="draft-break"></span><br>
<br>
<span id="draft-break"></span><br>
<br>
>
> On April 21, 2017 at 05:43:40 EDT, Kenneth Waegeman
> <kenneth.waegeman@ugent.be> wrote:
>> Hi,
>>
>> We are running a test setup with 2 NSD servers backed by 4 Dell
>> PowerVault MD3460s. nsd00 is primary for the LUNs on controller A of
>> the 4 PowerVaults; nsd02 is primary for the LUNs on controller B.
>>
>> We are testing from 2 test machines connected to the NSD servers over
>> InfiniBand, with verbs enabled.
>>
>> When we do dd from the NSD servers, we indeed see performance go up to
>> 5.8 GB/s for one NSD server and 7.2 GB/s for the two! So it looks like
>> GPFS is able to get the data at a decent speed. Since we can write
>> from the clients at a good speed, I didn't suspect the communication
>> between clients and NSD servers to be the issue, especially since
>> total performance stays the same using 1 or multiple clients.
>>
>> I'll use the nsdperf tool to see if we can find anything.
>>
>> thanks!
>>
>> K
<div nop="moz-cite-prefix" class="null">On 20/04/17
17:04, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE
CORP] wrote:<br class="null">
</div>
<div class="null" ref="16146">
<div id="bx-quote-16146" class="null"><span
class="null"></span></div>
</div>
<blockquote type="cite" class="null">
<div class="null">
<div dir="ltr" class="null">Interesting. Could you
share a little more about your architecture? Is it
possible to mount the fs on an NSD server and do
some dd's from the fs on the NSD server? If that
gives you decent performance perhaps try NSDPERF
next <span class="null"><a moz-do-not-send="true"
nop="moz-txt-link-freetext"
href="https://www.ibm.com/developerworks/community/wikis/home?lang=en#%21/wiki/General+Parallel+File+System+%28GPFS%29/page/Testing+network+performance+with+nsdperf"
class="null">https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General+Parallel+File+System+(GPFS)/page/Testing+network+performance+with+nsdperf</a></span>
<div class="null"><span class="null"><br
class="null">
</span></div>
<div class="null"><span class="null">-Aaron</span></div>
</div>
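>>>
>>> Roughly, the flow is: build nsdperf from the source on that wiki
>>> page, start it in server mode on every node you want to measure, then
>>> drive the tests from one node. Hostnames below are illustrative; see
>>> the wiki page for the exact command set.
>>>
>>>     # on each NSD server and client being measured:
>>>     ./nsdperf -s
>>>
>>>     # from a control node, at the nsdperf prompt:
>>>     ./nsdperf
>>>     server nsd00 nsd02
>>>     client client01
>>>     test
>>>     quit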
<span id="draft-break" class="null"></span><br
class="null">
<br class="null">
<span id="draft-break" class="null"></span><br
class="null">
<br class="null">
<div class="null">
<div nop="null" dir="auto" class="null">On April
20, 2017 at 10:53:47 EDT, Kenneth Waegeman
<a moz-do-not-send="true"
nop="moz-txt-link-rfc2396E"
href="mailto:kenneth.waegeman@ugent.be"
class="null">
<kenneth.waegeman@ugent.be></a> wrote:<br
nop="null" class="null">
</div>
<blockquote type="cite" class="null">
<div class="null">
<div nop="null" dir="auto" class="null">
<div nop="null" class="null">
<div nop="null" bgcolor="#FFFFFF"
text="#000000" class="null">
<p nop="null" class="null">Hi,</p>
<p nop="null" class="null"><br
nop="null" class="null">
</p>
<p nop="null" class="null">Having an
issue that looks the same as this
one: </p>
<p nop="null" class="null">We can do
sequential writes to the filesystem at
7,8 GB/s total , which is the expected
speed for our current storage
<br nop="null" class="null">
backend. While we have even better
performance with sequential reads on
raw storage LUNS, using GPFS we can
only reach 1GB/s in total (each nsd
server seems limited by 0,5GB/s)
independent of the number of clients
<br nop="null" class="null">
(1,2,4,..) or ways we tested (fio,dd).
We played with blockdev params,
MaxMBps, PrefetchThreads,
hyperthreading, c1e/cstates, .. as
discussed in this thread, but nothing
seems to impact this read performance.
<br nop="null" class="null">
</p>
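>>>>
>>>> A quick way to double-check the values actually in effect (mmdiag
>>>> shows what the daemon is currently running with):
>>>>
>>>>     mmlsconfig maxMBpS prefetchThreads
>>>>     mmdiag --config | egrep -i 'maxmbps|prefetch'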
<p nop="null" class="null">Any ideas?</p>
Thanks!<br nop="null" class="null">
<br nop="null" class="null">
Kenneth<br nop="null" class="null">
<br nop="null" class="null">
<div nop="moz-cite-prefix" class="null">On
17/02/17 19:29, Jan-Frode Myklebust
wrote:<br nop="null" class="null">
</div>
<div nop="null" ref="16034" class="null">
<div id="bx-quote-16034" nop="null"
class="null"><span nop="null"
class="null"></span></div>
</div>
<blockquote type="cite" class="null">
<div class="null">
<div nop="null" class="null">I just
had a similar experience from a
sandisk infiniflash system
SAS-attached to s single host.
Gpfsperf reported 3,2 Gbyte/s for
writes. and 250-300 Mbyte/s on
sequential reads!! Random reads
were on the order of 2 Gbyte/s.<br
nop="null" class="null">
<br nop="null" class="null">
After a bit head scratching snd
fumbling around I found out that
reducing maxMBpS from 10000 to 100
fixed the problem! Digging further
I found that reducing
prefetchThreads from default=72 to
32 also fixed it, while leaving
maxMBpS at 10000. Can now also
read at 3,2 GByte/s.<br nop="null"
class="null">
<br nop="null" class="null">
Could something like this be the
problem on your box as well?<br
nop="null" class="null">
<br nop="null" class="null">
<br nop="null" class="null">
<br nop="null" class="null">
-jf<br nop="null" class="null">
<div nop="gmail_quote"
class="null">
<div dir="ltr" nop="null"
class="null">fre. 17. feb.
2017 kl. 18.13 skrev Aaron
Knister <<a
moz-do-not-send="true"
nop="moz-txt-link-abbreviated"
href="mailto:aaron.s.knister@nasa.gov" class="null"><a class="moz-txt-link-abbreviated" href="mailto:aaron.s.knister@nasa.gov">aaron.s.knister@nasa.gov</a></a>>:<br
nop="null" class="null">
</div>
<blockquote nop="gmail_quote"
style="margin:0 0 0
.8ex;border-left:1px #ccc
solid;padding-left:1ex"
class="null">
>>>>>> Well, I'm somewhat scrounging for hardware. This is in our test
>>>>>> environment :) And yep, it's got the 2U GPU tray in it, although
>>>>>> even without the riser it has 2 PCIe slots onboard (excluding the
>>>>>> on-board dual-port mezz card), so I think it would make a fine NSD
>>>>>> server even without the riser.
>>>>>>
>>>>>> -Aaron
>>>>>>
>>>>>> On 2/17/17 11:43 AM, Simon Thompson (Research Computing - IT
>>>>>> Services) wrote:
>>>>>>> Maybe it's related to interrupt handlers somehow? You drive the
>>>>>>> load up on one socket, and you push all the interrupt handling to
>>>>>>> the other socket, where the fabric card is attached?
>>>>>>>
>>>>>>> Dunno ... (Though I am intrigued you use iDataPlex nodes as NSD
>>>>>>> servers; I assume it's some 2U GPU-tray riser one or something!)
>>>>>>>
>>>>>>> Simon
>>>>>>> ________________________________________
>>>>>>> From: gpfsug-discuss-bounces@spectrumscale.org
>>>>>>> [gpfsug-discuss-bounces@spectrumscale.org] on behalf of Aaron
>>>>>>> Knister [aaron.s.knister@nasa.gov]
>>>>>>> Sent: 17 February 2017 15:52
>>>>>>> To: gpfsug main discussion list
>>>>>>> Subject: [gpfsug-discuss] bizarre performance behavior
>>>>>>>
>>>>>>> This is a good one. I've got an NSD server with 4x 16Gb fibre
>>>>>>> connections coming in, and 1x FDR10 and 1x QDR connection going
>>>>>>> out to the clients. I was having a really hard time getting
>>>>>>> anything resembling sensible performance out of it (4-5 Gb/s
>>>>>>> writes but maybe 1.2 Gb/s for reads). The back-end is a DDN SFA12K
>>>>>>> and I *know* it can do better than that.
>>>>>>>
>>>>>>> I don't remember quite how I figured this out, but simply by
>>>>>>> running "openssl speed -multi 16" on the NSD server to drive up
>>>>>>> the load, I saw an almost 4x performance jump, which pretty much
>>>>>>> goes against every sysadmin fiber in me (i.e. "drive up the CPU
>>>>>>> load with unrelated crap to quadruple your I/O performance").
>>>>>>>
>>>>>>> This feels like some type of C-states frequency scaling
>>>>>>> shenanigans that I haven't quite ironed out yet. I booted the box
>>>>>>> with the kernel parameters "intel_idle.max_cstate=0
>>>>>>> processor.max_cstate=0", which didn't seem to make much of a
>>>>>>> difference. I also tried setting the frequency governor to
>>>>>>> userspace and setting the minimum frequency to 2.6GHz (it's a
>>>>>>> 2.6GHz CPU). None of that really mattered; I still have to run
>>>>>>> something to drive up the CPU load, and then performance improves.
>>>>>>>
>>>>>>> I'm wondering if this could be an issue with the C1E state? I'm
>>>>>>> curious if anyone has seen anything like this. The node is a
>>>>>>> dx360 M4 (Sandy Bridge) with 16 2.6GHz cores and 32GB of RAM.
>>>>>>>
>>>>>>> -Aaron
>>>>>>>
>>>>>>> --
>>>>>>> Aaron Knister
>>>>>>> NASA Center for Climate Simulation (Code 606.2)
>>>>>>> Goddard Space Flight Center
>>>>>>> (301) 286-2776
<br nop="gmail_msg"
class="null">
--<br nop="gmail_msg"
class="null">
Aaron Knister<br
nop="gmail_msg" class="null">
NASA Center for Climate
Simulation (Code 606.2)<br
nop="gmail_msg" class="null">
Goddard Space Flight Center<br
nop="gmail_msg" class="null">
(301) 286-2776<br
nop="gmail_msg" class="null">
<pre wrap="">_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
<a class="moz-txt-link-freetext" href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a>
</pre>