[gpfsug-discuss] IBM Flashsystem 7300 HDD sequential write performance issue

Alec anacreo at gmail.com
Tue Jan 23 22:32:30 GMT 2024


I would want to understand what your test was and how you determined the
single-drive performance. If you're just taking your aggregate throughput
and dividing by the number of drives, you're probably missing the most
restrictive part of the chain entirely.

You cannot pour water through a funnel into tablespoons below it and then
complain about the tablespoons' performance.

Map out the actual bandwidth all the way through your chain, along with
every choke point along the way, and then make sure each point isn't constrained.
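As a back-of-the-envelope sketch of what "mapping the chain" looks like, the script below compares the theoretical bandwidth at each hop. Every number in it is an assumption (port counts, per-HDD sequential rate, ignoring FC encoding overhead); plug in your own values.

```shell
#!/bin/sh
# Rough chain-bandwidth map. All figures are assumptions; adjust to your setup.

FC_GBIT=32          # nominal line rate per FC port, Gbit/s
FC_PORTS=2          # FC ports per GPFS server (assumption)
SERVERS=3           # from the original post

# 32 Gbit FC moves roughly 32/8 ~= 4 GB/s per port, before protocol overhead.
PER_PORT_MBS=$((FC_GBIT * 1000 / 8))
echo "per FC port:      ${PER_PORT_MBS} MB/s"
echo "per server:       $((PER_PORT_MBS * FC_PORTS)) MB/s"
echo "all servers:      $((PER_PORT_MBS * FC_PORTS * SERVERS)) MB/s"

# Back end: one DRAID 8+P+Q stripe, assuming ~150 MB/s sequential per HDD.
HDD_MBS=150
DATA_DRIVES=8
echo "one 8+P+Q stripe: $((HDD_MBS * DATA_DRIVES)) MB/s"
```

Whichever hop produces the smallest number is your ceiling; dividing the observed aggregate by drive count only makes sense if the drives are actually the smallest number.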

Starting from the test mechanism itself.

You can rule out some things very easily.

Go from a single thread to multiple threads to rule out CPU bottlenecks.
Take a path out of the mix to see if the underlying connection is the
constraint. Make a narrower or a wider RAID config and see whether your
performance changes.
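The single-thread vs. multi-thread comparison can be sketched with fio. This is a hypothetical helper, not a recommendation of exact parameters: the directory, file size, and iodepth are placeholders you should tune, and you want the test files on the GPFS filesystem under investigation.

```shell
#!/bin/sh
# Compare one sequential writer against eight, using fio.
# TESTDIR is a placeholder: point it at a scratch directory on the GPFS mount.
TESTDIR=${TESTDIR:-/gpfs/fs1/iotest}

run_seq_write() {
  # $1 = number of parallel writer jobs
  fio --name=seqwrite --directory="$TESTDIR" \
      --rw=write --bs=1M --size=8G --numjobs="$1" \
      --direct=1 --iodepth=16 --group_reporting
}

# If the 1-job number is far below the 8-job number, the client side
# (one CPU core, one stream) is the choke point, not the drives:
# run_seq_write 1
# run_seq_write 8
```

If throughput scales with thread count, the drives weren't saturated by the single stream and the per-drive math from a single-stream test is meaningless.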

Some of these changes will have no impact on your top throughput, and that
helps you eliminate variables.

Also, are you saying that 32G is your aggregate throughput across multiple
FCs? That's only about 4 GB/s.

Check the Fibre Channel hardware and make sure you have divided the work
evenly across port groups, with clear paths to the storage through each
port group; or keep all of the workload in one port group and make sure
you're not exceeding that port group's speed.
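On a Linux GPFS server, a quick first look at the FC side is the kernel's `fc_host` sysfs class, which reports each HBA port's negotiated speed and link state (paths shown are the stock sysfs layout; virtualized HBAs may differ):

```shell
#!/bin/sh
# List FC HBA ports with their negotiated speed and link state.
# /sys/class/fc_host is populated by standard Linux FC HBA drivers.
FOUND=0
for h in /sys/class/fc_host/host*; do
  [ -d "$h" ] || continue
  FOUND=$((FOUND + 1))
  printf '%s speed=%s state=%s\n' \
    "$(basename "$h")" "$(cat "$h/speed")" "$(cat "$h/port_state")"
done
echo "FC hosts found: $FOUND"
```

A port that negotiated down to 16G or 8G, or a path stuck in a non-Online state, will silently halve the budget you computed above.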

Alec




On Tue, Jan 23, 2024, 6:06 AM Petr Plodík <petr.plodik at mcomputers.cz> wrote:

> Hi,
>
> we have a GPFS cluster with two IBM FlashSystem 7300 systems, each with an HD
> expansion and 80x 12TB HDDs (in DRAID 8+P+Q), and 3 GPFS servers connected
> via 32G FC. We are doing performance tuning on sequential writes to the HDDs
> and seeing suboptimal performance. After several tests, it turns out that
> the bottleneck seems to be single HDD write performance, which is below
> 40 MB/s, where one would expect at least 100 MB/s.
>
> Does anyone have experience with IBM FlashSystem sequential write
> performance tuning, or have these arrays in their infrastructure? We would
> really appreciate any help/explanation.
>
> Thank you!
>
> Petr Plodik
> M Computers s.r.o.
> petr.plodik at mcomputers.cz
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at gpfsug.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
>

