[gpfsug-discuss] sequential I/O write - performance

Michal Hruška Michal.Hruska at mcomputers.cz
Thu Feb 8 14:59:15 GMT 2024


@Aaron
Yes, I can confirm that 2 MB blocks are transferred.

@Jan-Frode
We tried changing multiple parameters; if you know the best combination for sequential I/O, please let me know.

#mmlsconfig
autoload no
dmapiFileHandleSize 32
minReleaseLevel 5.1.9.0
tscCmdAllowRemoteConnections no
ccrEnabled yes
cipherList AUTHONLY
sdrNotifyAuthEnabled yes
pagepool 64G
maxblocksize 16384K
maxMBpS 40000
maxReceiverThreads 32
nsdMaxWorkerThreads 512
nsdMinWorkerThreads 8
nsdMultiQueue 256
nsdSmallThreadRatio 0
nsdThreadsPerQueue 3
prefetchAggressiveness 2
adminMode central

/dev/fs0
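For reference, parameters like the ones listed above can be changed cluster-wide with mmchconfig; a minimal sketch (the specific values here are just the ones from the listing, not a recommendation):

```shell
# Set pagepool and maxMBpS; -i applies the change immediately
# and also persists it in the cluster configuration.
mmchconfig pagepool=64G,maxMBpS=40000 -i

# Verify the resulting configuration.
mmlsconfig
```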

@Uwe
Using iohist we found that GPFS is overloading one dm-device (I/Os took about 500 ms to complete). We replaced the "problematic" dm-device with a new one (we have enough drives to play with), but the overloading simply moved to another dm-device.
We believe this behaviour is caused by GPFS, but we are unable to locate its root cause.
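A quick way to pick out the slow I/Os is to capture the history with `mmdiag --iohist` and filter on the elapsed-time column. The sketch below uses a hypothetical two-line sample in place of real `mmdiag` output (the actual column layout may differ; it assumes the elapsed time in ms is the last field of each line):

```shell
# On the cluster you would capture with: mmdiag --iohist > iohist.txt
# Here a hypothetical sample stands in for that capture; the awk filter
# keeps only I/Os that took longer than 100 ms (last field, in ms).
printf '08:12:01 R data 3:104857600 4096 12.3\n08:12:01 W data 5:104861696 4096 512.7\n' \
  | awk '$NF+0 > 100'
```

Correlating the surviving entries with per-device latency from `iostat -x 1` on the NSD servers should show whether the 500 ms waits line up with one specific dm-device.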

Best,
Michal

