[gpfsug-discuss] IBM Flashsystem 7300 HDD sequential write performance issue

YARON DANIEL YARD at il.ibm.com
Tue Jan 23 19:39:30 GMT 2024


Hi

Please review:

https://www.ibm.com/docs/en/storage-scale/5.0.3?topic=recommendations-operating-system-configuration-tuning

Regards



Yaron Daniel
Storage and Cloud Consultant
Technology Services
IBM Technology Lifecycle Service
94 Em Ha'Moshavot Rd
Petach Tiqva, 49527
Israel


Phone: +972-3-916-5672
Fax: +972-3-916-5672
Mobile: +972-52-8395593
E-mail: yard at il.ibm.com
Webex: https://ibm.webex.com/meet/yard
IBM Israel



From: gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> On Behalf Of Jan-Frode Myklebust
Sent: Tuesday, 23 January 2024 21:30
To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
Subject: [EXTERNAL] Re: [gpfsug-discuss] IBM Flashsystem 7300 HDD sequential write performance issue


First thing I would check is that the GPFS block size is a multiple of a full RAID stripe. It’s been a while since I worked with SVC/FlashSystem performance, but this has been my main issue. So, 8+2p with the default 128 KB «chunk size» gives a 1 MB full stripe, and would work with a 1 MB or larger block size.
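As a rough sketch of that alignment check (the 8+2p geometry and 128 KB chunk size are just the example values above, not a statement about your particular arrays):

  # Rough sketch: is the GPFS block size a multiple of the full RAID stripe?
  # Geometry below (8 data + 2 parity, 128 KiB chunk) is only the example from above.
  data_strips = 8            # 8+2p -> 8 data strips per stripe
  chunk_kib = 128            # default "chunk size" (strip size) in KiB

  full_stripe_kib = data_strips * chunk_kib              # 1024 KiB = 1 MiB
  for block_mib in (1, 2, 4, 8, 16):
      aligned = (block_mib * 1024) % full_stripe_kib == 0
      print(f"{block_mib} MiB GPFS block size: "
            f"{'multiple of' if aligned else 'NOT a multiple of'} "
            f"the {full_stripe_kib} KiB full stripe")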

The other thing was that it’s important to disable prefetching (chsystem -cacheprefetch off), as it will always be prefetching the wrong data because of how GPFS scatters the blocks.

And, on the Linux side, there is a maximum device transfer size setting that has had a huge impact on some systems, but the exact setting escapes me right now.
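Purely as an illustration, and assuming the setting meant is the per-device max_sectors_kb limit in sysfs (an assumption, since I can’t recall for certain), a minimal way to inspect it on the GPFS servers would be:

  # Minimal sketch (assumption: the "max device transfer size" meant here is the
  # per-device max_sectors_kb limit exposed in sysfs on Linux).
  from pathlib import Path

  for q in sorted(Path("/sys/block").glob("*/queue/max_sectors_kb")):
      device = q.parts[3]          # /sys/block/<device>/queue/max_sectors_kb
      current = q.read_text().strip()
      hw_limit = (q.parent / "max_hw_sectors_kb").read_text().strip()
      print(f"{device}: max_sectors_kb={current} (hardware limit {hw_limit})")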


HTH


  -jf


Tue, 23 Jan 2024 at 15:05, Petr Plodík <petr.plodik at mcomputers.cz> wrote:
Hi,

we have a GPFS cluster with two IBM FlashSystem 7300 systems, each with HD expansion enclosures and 80x 12 TB HDDs (in DRAID 8+P+Q), and 3 GPFS servers connected via 32 Gb FC. We are doing performance tuning on sequential writes to the HDDs and seeing suboptimal performance. After several tests, it turns out that the bottleneck seems to be single-HDD write performance, which is below 40 MB/s, whereas one would expect at least 100 MB/s.
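A rough sketch of the arithmetic behind that expectation (assuming the aggregate write rate simply scales with the 8-of-10 data strips of DRAID 8+P+Q, and ignoring rebuild areas and other overheads):

  # Back-of-envelope sketch: aggregate sequential write per FlashSystem 7300 array
  # implied by a given per-HDD rate. Assumes 8 of every 10 strips carry data
  # (DRAID 8+P+Q) and ignores rebuild areas and other overheads.
  hdds_per_array = 80
  data_fraction = 8 / 10

  for per_hdd_mb_s in (40, 100):   # observed ceiling vs. what one would expect
      aggregate_mb_s = per_hdd_mb_s * hdds_per_array * data_fraction
      print(f"{per_hdd_mb_s} MB/s per HDD -> ~{aggregate_mb_s / 1000:.1f} GB/s per array")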

Does anyone have experience with IBM FlashSystem sequential write performance tuning, or have these arrays in their infrastructure? We would really appreciate any help/explanation.

Thank you!

Petr Plodik
M Computers s.r.o.
petr.plodik at mcomputers.cz



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org

