[gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN: effect of ignorePrefetchLUNCount

Jan-Frode Myklebust janfrode at tanso.net
Tue Jun 16 18:54:41 BST 2020


On Tue, 16 Jun 2020 at 15:32, Giovanni Bracco <giovanni.bracco at enea.it> wrote:

>
> > I would correct MaxMBpS -- put it at something reasonable, enable
> > verbsRdmaSend=yes and
> > ignorePrefetchLUNCount=yes.
>
> Now we have set:
> verbsRdmaSend yes
> ignorePrefetchLUNCount yes
> maxMBpS 8000
>
> but the only parameter which has a strong effect by itself is
>
> ignorePrefetchLUNCount yes
>
> and the read performance increased by a factor of at least 4, from
> 50 MB/s to 210 MB/s



That’s interesting.. ignorePrefetchLUNCount=yes should mean it schedules
IO more aggressively. Did you also try lowering maxMBpS? I’m thinking
maybe something is getting flooded somewhere..
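
These can all be changed online with mmchconfig, something like the
following (the node class name is just a placeholder, use whatever your
cluster defines):

   # -i makes the change take effect immediately, where supported
   mmchconfig maxMBpS=2000 -i -N clientNodes

   # verify what the daemon is actually running with
   mmdiag --config | grep -i maxmbps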

Another knob would be to increase workerThreads and/or prefetchPct (I don’t
quite remember how these influence each other).
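
E.g. something like this (the values are only a starting point, not a
recommendation, and workerThreads may need a daemon restart to take
effect depending on the release):

   # more threads available for prefetch/writebehind work
   mmchconfig workerThreads=512 -N clientNodes

   # let prefetch use a larger share of the pagepool
   mmchconfig prefetchPct=40 -N clientNodes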

And it would be useful to run nsdperf between the client and the NSD
servers, to verify or rule out any network issue.
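
Roughly like this, if I remember the tool right (it ships as source under
/usr/lpp/mmfs/samples/net/nsdperf and needs compiling first; node names
below are placeholders):

   # start the test daemon on every node involved
   ./nsdperf -s

   # then drive the test interactively from one control node
   ./nsdperf
   nsdperf> client gpfsclient1
   nsdperf> server nsdserver1
   nsdperf> test
   nsdperf> quit

That gives raw network throughput numbers to compare against what GPFS
itself achieves.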


> fio --name=seqwrite --rw=write --buffered=1 --ioengine=posixaio --bs=1m
> --numjobs=1 --size=100G --runtime=60
>
> fio --name=seqread --rw=read --buffered=1 --ioengine=posixaio --bs=1m
> --numjobs=1 --size=100G --runtime=60
>
>
Not too familiar with fio, but ... does it help to increase numjobs?
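
E.g. something like this, using direct IO with libaio instead of buffered
posixaio to take the page cache out of the picture (untested, the numbers
are just a guess):

   fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=1m \
       --numjobs=4 --iodepth=16 --size=25G --runtime=60 --group_reporting

If four jobs scale the throughput up, the single stream is the limit
rather than the network or the storage.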

And.. do you tell both sides which fabric number they’re on («verbsPorts
qib0/1/1») so GPFS knows not to try to connect verbsPorts that can’t
communicate?
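
The fabric number is the third field in the verbsPorts value
(device/port/fabric). Something like this, with the device names and node
classes being whatever your cluster actually uses:

   # same fabric number on both sides of the same physical fabric
   mmchconfig verbsPorts="qib0/1/1" -N clientNodes
   mmchconfig verbsPorts="mlx4_0/1/1" -N nsdServers

   # check what is currently set
   mmlsconfig verbsPorts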


  -jf