[gpfsug-discuss] sequential I/O write - performance

Jan-Frode Myklebust janfrode at tanso.net
Thu Feb 8 19:08:08 GMT 2024


You’re missing a few standard config settings that might be relevant. I would
suggest:


workerThreads=512 (or 1024)
ignorePrefetchLUNCount=yes
numaMemoryInterleave=yes (and make sure numactl is installed)

ignorePrefetchLUNCount is important to tell the system that you have
multiple HDDs backing each LUN; otherwise it assumes a single spindle per
LUN and won’t schedule much read-ahead/write-behind.
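
E.g. something like the sketch below (exact syntax and restart requirements
should be verified against your Scale level; the grep is just an
illustration):

  # set the values cluster-wide; most of them only take effect after the
  # daemon is restarted
  mmchconfig workerThreads=512,ignorePrefetchLUNCount=yes,numaMemoryInterleave=yes

  # restart GPFS on all nodes during a maintenance window
  mmshutdown -a && mmstartup -a

  # confirm the running daemon picked up the new values
  mmdiag --config | grep -iE 'workerthreads|ignoreprefetchluncount|numamemoryinterleave'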



  -jf


On Thu, 8 Feb 2024 at 16:00, Michal Hruška <Michal.Hruska at mcomputers.cz> wrote:

> @Aaron
>
> Yes, I can confirm that 2 MB blocks are transferred.
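>
> (For reference, one way to confirm the request size actually reaching the
> dm-devices, assuming a reasonably current sysstat, is to watch the average
> request size while the test runs:)
>
>   # rareq-sz/wareq-sz are in kB on newer sysstat (older versions show
>   # avgrq-sz in 512-byte sectors instead); ~2048 kB means full 2 MB I/Os
>   iostat -x 2 | grep -E 'Device|dm-'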
>
> @Jan-Frode
>
> We tried changing multiple parameters; if you know the best combination
> for sequential I/O, please let me know.
>
>
>
> #mmlsconfig
>
> autoload no
> dmapiFileHandleSize 32
> minReleaseLevel 5.1.9.0
> tscCmdAllowRemoteConnections no
> ccrEnabled yes
> cipherList AUTHONLY
> sdrNotifyAuthEnabled yes
> pagepool 64G
> maxblocksize 16384K
> maxMBpS 40000
> maxReceiverThreads 32
> nsdMaxWorkerThreads 512
> nsdMinWorkerThreads 8
> nsdMultiQueue 256
> nsdSmallThreadRatio 0
> nsdThreadsPerQueue 3
> prefetchAggressiveness 2
> adminMode central
>
> /dev/fs0
>
> @Uwe
>
> Using iohist we found that GPFS is overloading one dm-device (I/Os took
> about 500 ms to complete). We replaced the "problematic" dm-device with a
> new one (we have enough drives to play with), but the overload just moved
> to another dm-device.
> We believe this behaviour is caused by GPFS, but we are unable to locate
> the root cause.
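>
> (For reference, the commands behind that kind of check, assuming standard
> device-mapper/multipath tools; illustrative only:)
>
>   # GPFS view of the most recent I/Os and their completion times, run on
>   # the NSD servers; slow I/Os against a single device stand out here
>   mmdiag --iohist
>
>   # map the suspect dm-N back to its multipath map and physical paths
>   multipath -ll
>   dmsetup info -c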
>
>
>
> Best,
> Michal
>
>

