[gpfsug-discuss] IBM Flashsystem 7300 HDD sequential write performance issue

ANDREW BEATTIE abeattie at au1.ibm.com
Tue Jan 23 22:17:12 GMT 2024


Suggest you reach out to your local IBM team and ask them to put you in touch with the FlashSystem performance testing / development team: @BARRY WHYTE<mailto:barry.whyte at nz1.ibm.com>, @Andrew Martin, @Evelin Perez.

Barry is in Europe for the TechXChange roadshow at the moment, so I'm not sure what his response times will be like.

But for the record, there are reasons why IBM won't commit to performance benchmarks for Scale filesystems on anything other than Scale Storage System / Elastic Storage System building blocks.

At a high level, I suspect you're probably bumping into DRAID overheads as well as the bandwidth limitations of the SAS storage adapters for the expansion shelves.

Just because the drives have a raw performance number does not mean that 100% of it is usable.
The FlashSystem performance team will be able to advise more accurately.
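To make the DRAID overhead point concrete, here is a rough back-of-envelope sketch. The parity fraction for 8+P+Q is from the thread; the raw per-drive rate is an illustrative assumption, not a measured or IBM-published figure:

```python
def usable_write_fraction(data_strips: int, parity_strips: int) -> float:
    """Fraction of raw disk write bandwidth left for user data.

    In a DRAID 8+P+Q stripe, 8 of every 10 strips hold user data, so a
    full-stripe write pushes 10/8 = 1.25x the user bytes to the drives.
    """
    return data_strips / (data_strips + parity_strips)

frac = usable_write_fraction(8, 2)       # 0.8 for 8+P+Q
raw_per_drive_mb_s = 150                 # assumed raw sequential rate per HDD
effective = raw_per_drive_mb_s * frac    # what the user actually sees per drive
print(f"usable fraction: {frac:.2f}, effective per drive: {effective:.0f} MB/s")
```

And this ignores SAS expander contention, rebuild/spare capacity in the distributed array, and partial-stripe writes, all of which eat further into the raw number.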

Regards,

AJ

Andrew Beattie
Technical Sales Specialist - Storage for Big Data & AI
IBM Australia and New Zealand
P. +61 421 337 927
E. abeattie at au1.ibm.com
Twitter: AndrewJBeattie
________________________________
From: gpfsug-discuss <gpfsug-discuss-bounces at gpfsug.org> on behalf of Petr Plodík <petr.plodik at mcomputers.cz>
Sent: Wednesday, January 24, 2024 12:04:19 AM
To: gpfsug-discuss at gpfsug.org <gpfsug-discuss at gpfsug.org>
Subject: [EXTERNAL] [gpfsug-discuss] IBM Flashsystem 7300 HDD sequential write performance issue

Hi,

we have a GPFS cluster with two IBM FlashSystem 7300 systems, each with HD expansion enclosures and 80x 12 TB HDDs (in DRAID 8+P+Q), and three GPFS servers connected via 32 Gb FC. We are tuning sequential write performance to the HDDs and seeing suboptimal results. After several tests, the bottleneck appears to be single-HDD write performance, which is below 40 MB/s, where one would expect at least 100 MB/s.
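For reference, a per-drive figure like the one above can be inferred from the aggregate numbers. This is a hypothetical helper, not anything from the FlashSystem tooling, and the 2560 MB/s aggregate used in the example is illustrative, chosen only to show the arithmetic:

```python
def per_drive_write_mb_s(aggregate_mb_s: float, num_drives: int,
                         data_strips: int = 8, stripe_width: int = 10) -> float:
    """Infer raw per-drive write rate from measured aggregate user throughput.

    Scales user throughput up by stripe_width/data_strips to account for
    8+P+Q parity write amplification, then divides across the drives.
    """
    raw_aggregate = aggregate_mb_s * stripe_width / data_strips
    return raw_aggregate / num_drives

# e.g. 2560 MB/s of user writes across 80 drives implies 40 MB/s raw per drive
print(per_drive_write_mb_s(2560, 80))  # 40.0
```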

Does anyone have experience with IBM FlashSystem sequential write performance tuning, or run these arrays in their infrastructure? We would really appreciate any help or explanation.

Thank you!

Petr Plodik
M Computers s.r.o.
petr.plodik at mcomputers.cz



_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org