[gpfsug-discuss] sequential I/O write - performance

dale mac macthev at gmail.com
Sat Feb 10 19:39:11 GMT 2024


Michal,

I think you need to revise your testing method. Let me explain.

Based on my understanding:

3 FE servers and one storage system


~4500 MiB/s from 8 RAID groups using XFS (one XFS filesystem per RAID group) and a parallel fio test.
One GPFS filesystem across all 8 RAID groups, where performance dropped to ~3300 MiB/s.


The test you are running compares a non-clustered filesystem against a clustered one.

XFS,

8 XFS filesystems.
Each FS has its own array and independent metadata, not shared between nodes.
Each array sees sequential I/O, so it can aggregate I/Os and prefetch on reads.
No lock traffic between nodes.
You didn't mention whether the fio runs were on one node, or on all three nodes with the 8 XFS filesystems spread across them?
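
For reference, a minimal sketch of what I assume the 8x XFS run looked like (the /xfs1../xfs8 mount points, block size and file size are my guesses, adjust to your setup; fio splits a colon-separated directory list across the job clones):

    # one sequential write stream per XFS filesystem, all eight running in parallel
    fio --name=xfs_seq_write --rw=write --bs=1m --size=10g \
        --direct=1 --ioengine=libaio --iodepth=16 \
        --directory=/xfs1:/xfs2:/xfs3:/xfs4:/xfs5:/xfs6:/xfs7:/xfs8 \
        --numjobs=8 --group_reporting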

GPFS Clustered Filesystem

1 GPFS filesystem (fs0) in this case.
Parallel filesystem with shared metadata and shared access.
Lock and metadata traffic across nodes.
GPFS stripes across the NSDs, 8 in this case, so when the fio streams are combined they appear random at the storage (this is very different from your 8x XFS test).
The array logic will not see this as sequential and will deliver much lower performance from a sequential point of view, because the streams are intermixed.
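
If you want to see the striping that causes this, a quick sketch (assuming your filesystem device really is fs0):

    # block size used for striping; each block goes to the next NSD in round-robin order
    mmlsfs fs0 -B
    # list the NSDs (8 in your case) that the filesystem stripes across
    mmlsnsd -f fs0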

 
What to do,

Try 

8 individual GPFS filesystems with your fio test, just like the XFS test, i.e. do like for like: 8x XFS versus 8x GPFS. From the array's perspective it is the same I/O pattern.
1 GPFS filesystem on 1 array with 1 matching fio run, then multiply the result by 8.
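
A minimal sketch of that second option, assuming a single GPFS filesystem mounted at /gpfs/fs1 and built on just one RAID group (mount point, block size and file size are my assumptions):

    # one sequential write stream against one GPFS FS on one array;
    # multiply the result by 8 to compare with the 8x XFS number
    fio --name=gpfs_seq_write --rw=write --bs=8m --size=10g \
        --direct=1 --ioengine=libaio --iodepth=16 \
        --directory=/gpfs/fs1 --numjobs=1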


PS: You haven't mentioned the type of array used. Sometimes the following is important.


Try disabling prefetch at the array. Array prefetch can sometimes overwork the backend by fetching data that is never used, causing extra I/O and cache displacement. GPFS already prefetches aggressively, which can trigger the array to prefetch even further ahead, and that extra data often goes unused.
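
On the GPFS side you can at least check how the prefetch-related settings are currently tuned before touching the array; a sketch, assuming the standard Spectrum Scale admin commands are in your path:

    # show the configured prefetch settings for the cluster / this node
    mmlsconfig | grep -i prefetch
    mmdiag --config | grep -i prefetch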


Dale

