[gpfsug-discuss] sequential I/O write - performance

Glen Corneau gcorneau at us.ibm.com
Thu Feb 8 18:50:08 GMT 2024


Just a few thoughts:

We've often increased seqDiscardThreshold to much larger values in SAS-on-AIX GPFS implementations (workloads with a lot of large-file sequential I/O). With a pagepool of 64G, maybe set it to 4 or 8GB?
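A minimal sketch of how that could be applied, assuming the parameter name as written above and the usual mmchconfig size-suffix and immediate (-i) syntax:

# check the current value
mmlsconfig seqDiscardThreshold

# raise it to 4 GiB cluster-wide
mmchconfig seqDiscardThreshold=4G -i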

Sequential writes from the application, once spread across multiple LUNs by the storage and Scale, often no longer look like sequential writes from the storage's point of view.

If a single disk device representation in the OS is being overrun, increasing that device's queue depth is one parameter that might offer some relief.
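As an illustration only (sdX and hdisk4 are placeholder device names, and the right values should come from the storage vendor's guidance), queue depth can be inspected and raised roughly like this:

# Linux: per-path queue depth of a SCSI device backing a dm-multipath device
cat /sys/block/sdX/device/queue_depth
echo 64 > /sys/block/sdX/device/queue_depth

# AIX: queue_depth is an hdisk attribute; -P defers the change to the next reboot
lsattr -El hdisk4 -a queue_depth
chdev -l hdisk4 -a queue_depth=64 -P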
---
Glen Corneau
Senior, Power Partner Technical Specialist (PTS-P)
IBM Technology, North America
Email: gcorneau at us.ibm.com
Cell: 512-420-7988
________________________________
From: gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> on behalf of Michal Hruška <Michal.Hruska at mcomputers.cz>
Sent: Thursday, February 8, 2024 08:59
To: gpfsug-discuss at gpfsug.org <gpfsug-discuss at gpfsug.org>
Subject: [EXTERNAL] Re: [gpfsug-discuss] sequential I/O write - performance


@Aaron

Yes, I can confirm that 2MB blocks are transferred over.


@ Jan-Frode

We tried to change multiple parameters, but if you know the best combination for sequential I/O, please let me know.



#mmlsconfig
autoload no
dmapiFileHandleSize 32
minReleaseLevel 5.1.9.0
tscCmdAllowRemoteConnections no
ccrEnabled yes
cipherList AUTHONLY
sdrNotifyAuthEnabled yes
pagepool 64G
maxblocksize 16384K
maxMBpS 40000
maxReceiverThreads 32
nsdMaxWorkerThreads 512
nsdMinWorkerThreads 8
nsdMultiQueue 256
nsdSmallThreadRatio 0
nsdThreadsPerQueue 3
prefetchAggressiveness 2
adminMode central

/dev/fs0
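In case it helps for comparison, mmlsconfig shows the committed configuration; a quick (illustrative) way to see the values the running daemon is actually using on an NSD server:

# values currently in effect in mmfsd
mmdiag --config | grep -E 'pagepool|maxMBpS|WorkerThreads|prefetchAggressiveness'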


@Uwe

Using iohist we found that GPFS is overloading one dm-device (it took about 500 ms to complete I/Os). We replaced the "problematic" dm-device with a new one (we have enough drives to play with), but the overloading issue just moved to another dm-device.
We believe this behaviour is caused by GPFS, but we are unable to locate its root cause.
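For reference, one way to see that per-device latency from the Scale side and map it back to the underlying paths (dm-3 below is just an example name):

# recent I/O history with per-I/O service times as seen by the GPFS daemon
mmdiag --iohist

# map the slow dm device back to its multipath alias and member paths
lsblk /dev/dm-3
multipath -ll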



Best,
Michal

