[gpfsug-discuss] sequential I/O write - performance tuning

Jan-Frode Myklebust janfrode at tanso.net
Wed Feb 7 23:00:01 GMT 2024


Also, please show mmlsconfig output.

  -jf

Wed, Feb 7, 2024 at 20:32 Aaron Knister <aaron.knister at gmail.com> wrote:

> What does iostat output look like when you’re running the tests on GPFS?
> It would be good to confirm that GPFS is successfully submitting 2MB
> I/O requests.
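>
> For example, with a recent sysstat (the interval is illustrative; add
> the device names backing the NSDs to filter the output):
>
>   # iostat -x 2
>
> A wareq-sz of roughly 2048 on the data LUNs (the column is in KiB)
> would confirm 2 MiB writes reaching the block layer; older sysstat
> versions report avgrq-sz in 512-byte sectors instead, where 2 MiB
> shows up as 4096.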
>
> Sent from my iPhone
>
> On Feb 7, 2024, at 08:08, Michal Hruška <Michal.Hruska at mcomputers.cz>
> wrote:
>
>
>
> Dear gpfsUserGroup,
>
>
>
> we are dealing with a new GPFS cluster (Storage Scale 5.1.9 on RHEL 9.3)
> [3 FE servers and one storage system] and some performance issues.
>
> We were able to tune the underlying storage system to reach ~4500
> MiB/s from 8 RAID groups using XFS (one XFS filesystem per RAID group)
> and a parallel fio test.
>
> Once we installed GPFS (one filesystem across all 8 RAID groups), we
> observed a performance drop to ~3300 MiB/s using the same fio test.
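>
> A minimal sketch of the kind of fio test we ran (the parameters here
> are illustrative, not our exact job file):
>
>   fio --name=seqwrite --rw=write --bs=2m --direct=1 \
>       --ioengine=libaio --iodepth=16 --numjobs=8 \
>       --size=16g --directory=/mnt/test --group_reporting
>
> with the jobs spread over the 8 XFS mounts in the first case and all
> on the single GPFS filesystem in the second.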
>
> All tests were performed from one front-end node connected directly to
> the storage system via Fibre Channel (4 paths, each 32GFC).
>
>
>
> The storage system's RAID groups are sized to fit 2MB data blocks and
> so utilize full-stripe writes, as the RAID geometry is 8+2 with a
> 256KB segment size -> 8*256KB = 2MB.
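>
> Assuming the GPFS block size is meant to match the 2MB full stripe,
> that can be checked with (filesystem name is a placeholder):
>
>   # mmlsfs fs1 -B
>
> which should report a block size of 2097152 (2 MiB).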
>
> I/O pattern on FC is optimized too.
>
> GPFS metadata was moved to NVMe SSDs on a different storage system.
>
> We already tried some obvious troubleshooting on the GPFS side
> (maxMBpS, scatter vs. cluster block allocation, different block sizes
> and some other parameters) but saw no performance gain.
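>
> Concretely, the kind of changes tried looked like this (values and
> names illustrative only):
>
>   # mmchconfig maxMBpS=20000 -i
>   # mmcrfs fs1 -F disks.stanza -B 2M -j cluster
>
> i.e. raising maxMBpS and recreating the filesystem with the other
> block allocation type (-j cluster vs. -j scatter).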
>
>
>
> We were advised that GPFS might not issue purely sequential writes to
> the storage system, so the storage system ends up doing more random
> I/O than sequential.
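>
> One way to check what GPFS actually issues is its own I/O history,
> captured on the node under test while fio is running:
>
>   # mmdiag --iohist
>
> which lists recent physical I/Os per disk with their sector offsets
> and sizes, making it visible whether consecutive 2 MiB writes really
> land contiguously on each LUN.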
>
>
>
> Could you please share some thoughts on how to make GPFS I/O as
> sequential as possible? The goal is to reach at least 4000 MiB/s for
> sequential writes/reads.
>
>
>
> best regards,
>
> *Michal Hruška*
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>

