[gpfsug-discuss] very low read performance in simple spectrum scale/gpfs cluster with a storage-server SAN

Jan-Frode Myklebust janfrode at tanso.net
Fri Jun 5 18:02:49 BST 2020


On Fri, 5 Jun 2020 at 15:53, Giovanni Bracco <giovanni.bracco at enea.it> wrote:

> answers inline in the text
>
> On 05/06/20 14:58, Jan-Frode Myklebust wrote:
> >
> > Could it maybe be interesting to drop the NSD servers, and let all nodes
> > access the storage via SRP?
>
> no, we cannot: the production fabric is a mix of a QDR-based cluster and
> an OPA-based cluster, and the NSD nodes provide the service to both.
>

You could potentially still do SRP from the QDR nodes, and go via NSD for your
Omni-Path nodes. Going via NSD for the QDR nodes seems like a rather pointless
indirection.



> >
> > Maybe turn off readahead, since it can cause performance degradation
> > when GPFS reads 1 MB blocks scattered on the NSDs, so that read-ahead
> > always reads too much. This might be the cause of the slow read seen —
> > maybe you’ll also overflow it if reading from both NSD-servers at the
> > same time?
>
> I have switched the readahead off and this produced a small (~10%)
> increase in performance when reading from an NSD server, but no change
> in the bad behaviour for the GPFS clients.
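
In case it's useful for cross-checking: if the readahead you disabled was the
Linux block-device one on the NSD servers, it can be inspected and reset with
something like the following (/dev/sdX is just a placeholder for the SAN LUNs):

   blockdev --getra /dev/sdX
   blockdev --setra 0 /dev/sdX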


> >
> >
> > Plus... it's always nice to give a bit more pagepool to the clients than
> > the default. I would prefer to start with 4 GB.
>
> we'll also do that and we'll let you know!


Could you show your mmlsconfig? You should likely set maxMBpS to indicate
what kind of throughput a client can do (it affects GPFS
read-ahead/write-behind). I would typically also increase workerThreads on
your NSD servers.
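
As a concrete (purely illustrative) example, the knobs above are all plain
mmchconfig settings. The values and the node classes "clientNodes" and
"nsdNodes" below are placeholders to adapt to your cluster, and a pagepool
change may need GPFS restarted on the affected nodes before it takes effect:

   mmlsconfig
   mmchconfig pagepool=4G -N clientNodes
   mmchconfig maxMBpS=10000 -N clientNodes
   mmchconfig workerThreads=512 -N nsdNodes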


A 1 MB blocksize is a poor match for your 9+p+q RAID with a 256 KB strip size.
When you write one GPFS block, less than half a RAID stripe is written,
which means you need to read back some data to calculate the new parities. I
would prefer a 4 MB block size, and maybe also a change to 8+p+q, so that one
GPFS block is a multiple of a full 2 MB data stripe.
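
To spell out the arithmetic behind that suggestion: with 256 KB strips, a
9+p+q array has a data stripe of 9 × 256 KB = 2.25 MB, so a 1 MB GPFS block
never fills a full stripe and each write turns into a read-modify-write of
the parity. With 8+p+q the data stripe is 8 × 256 KB = 2 MB, and a 4 MB GPFS
block maps onto exactly two full stripes, so the parities can be computed
without first reading old data back.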


   -jf