[gpfsug-discuss] sequential I/O write - performance

Michal Hruška Michal.Hruska at mcomputers.cz
Mon Feb 19 16:00:23 GMT 2024


Hello,

@Jan-Frode
Yes, we tried disabling cache prefetch on the storage, but that turned out not to be the best way to gain sequential-write performance.
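For reference, a rough sketch of the prefetch toggle, assuming an IBM Storage Virtualize based array (the mdisk/pool terminology below suggests one, but the model is not named in this thread, and other arrays use different CLIs):

    # assumption: IBM Storage Virtualize CLI (FlashSystem/Storwize); not confirmed here
    lssystem | grep -i prefetch        # check the current cache prefetch setting
    chsystem -cacheprefetch off        # disable cache prefetch
    chsystem -cacheprefetch on         # restore the default afterwards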

@Yaron
1.) There are 8 LUNs exported from a single storage system. Each LUN has its own pool and its own mdisk (8+2+1 drives). When we doubled the number of LUNs per pool, performance dropped slightly. When we used bigger mdisks with more drives (76), we did see a performance gain from using multiple LUNs per pool. (A sketch of the NSD layout follows after this list.)
2.) The file-system block size was set to several values - 2 MB, 4 MB and 16 MB - and 4 MB worked best, even though the storage system writes 2 MB full stripes per mdisk (the stripe size is fixed at 256 KB and there are 8 data drives in one mdisk). (See the mmcrfs example below.)
3.) As there is only one type of LUN (data) exported from the storage system, we used only one pool in the GPFS file system: the system pool.
4.) Not always - we tried multiple clients (up to 3) and multiple NSD servers (up to 3) but did not see a performance gain. Clients communicating with the servers over a 100 GbE LAN using the native client connection achieved the same performance.
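
A minimal sketch of the NSD layout referenced in 1) and 4) - the device paths, NSD names and server hostnames are placeholders, not the real configuration:

    # nsd.stanza - one NSD per exported LUN, all data+metadata in the system pool (see 3)
    %nsd: device=/dev/mapper/lun01 nsd=nsd01 servers=nsdsrv1,nsdsrv2,nsdsrv3 usage=dataAndMetadata failureGroup=1 pool=system
    %nsd: device=/dev/mapper/lun02 nsd=nsd02 servers=nsdsrv2,nsdsrv3,nsdsrv1 usage=dataAndMetadata failureGroup=1 pool=system
    # ... lun03 through lun08 follow the same pattern, rotating the preferred server

    mmcrnsd -F nsd.stanza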
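And the file-system creation with the 4 MB block size from 2), again with placeholder names:

    mmcrfs fs1 -F nsd.stanza -B 4M -T /gpfs/fs1
    mmlsfs fs1 -B        # verify the block size actually in effect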

Best,
Michal