[gpfsug-discuss] sequential I/O write - performance tuning

Michal Hruška Michal.Hruska at mcomputers.cz
Wed Feb 7 13:06:03 GMT 2024


Dear gpfsUserGroup,

We are dealing with a new GPFS cluster (Storage Scale 5.1.9, RHEL 9.3) [3 front-end servers and one storage system] and some performance issues.
We were able to tune the underlying storage system to reach ~4500 MiB/s across the 8 RAID groups using XFS (one XFS filesystem per RAID group) and a parallel fio test (see the job-file sketch below).
Once we installed GPFS (one filesystem across all 8 RAID groups), the performance dropped to ~3300 MiB/s with the same fio test.
All tests were performed from one front-end node connected directly to the storage system via Fibre Channel (4 paths, 32GFC each).
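
For illustration, the fio job was along these lines (a minimal sketch only; the mount paths, size, and iodepth here are placeholders, not our exact settings):

    [global]
    ioengine=libaio
    direct=1
    rw=write
    ; 2MB blocks to match the full stripe
    bs=2m
    iodepth=16
    size=32g
    group_reporting

    ; one job per RAID group / XFS mount
    [rg1]
    directory=/mnt/xfs1
    [rg2]
    directory=/mnt/xfs2
    ; ...and so on for the remaining RAID groups; for the GPFS run,
    ; a single job with directory on the GPFS mount and numjobs=8 instead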

The storage system's RAID groups are sized to fit 2MB data blocks and thereby utilize full-stripe writes: the RAID geometry is 8+2 with a 256KB strip size, so 8*256KB = 2MB.
The I/O pattern on FC is optimized as well.
GPFS metadata were moved to NVMe SSDs on a different storage system.
We already tried some obvious tuning on the GPFS side (maxMBpS, scatter vs. cluster block allocation, different block sizes, and a few other parameters; see the sketch below), but there was no performance gain.
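
To show what we touched (the values and the filesystem/stanza names here are examples, not our exact settings):

    # bandwidth estimate GPFS uses to size prefetch/write-behind I/O
    mmchconfig maxMBpS=8000

    # filesystem was recreated with different block sizes and allocation
    # types, e.g. 2MB blocks to match the full stripe, cluster allocation:
    mmcrfs fs1 -F nsd.stanza -B 2M -j cluster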

We were advised that GPFS might not issue purely sequential writes to the storage system, so the storage system ends up handling more random I/O than sequential.

Could you please share some thoughts on how to make GPFS I/O as sequential as possible? The goal is to reach at least 4000 MiB/s for sequential writes/reads.

best regards,
Michal Hruška




