<div dir="auto">I would want to understand what your test was and how you determined it's single drive performance. If you're just taking your aggregate throughout and dividing by number of drives, you're probably missing entirely the most restrictive part of the chain.<div dir="auto"><br></div><div dir="auto">You can not pour water through a funnel then have tablespoons below it and complain about the tablespoon performance.</div><div dir="auto"><br></div><div dir="auto">Map out the actual bandwidth all the way through your chain, and every choke point along the way and then make sure each point isn't constrained.</div><div dir="auto"><br></div><div dir="auto">Starting from the test mechanism itself.</div><div dir="auto"><br></div><div dir="auto">You can really rule out some things easily.</div><div dir="auto"><br></div><div dir="auto">Go from single thread to multiple threads to rule out CPU bottlenecks. Take a path out of the mix to see if the underlying connection is the constraint, make a less wide raid config or a more wide raid config to see if your performance changes.</div><div dir="auto"><br></div><div dir="auto">Some of these changes will have no impact to your top throughout and you can help to eliminate the variables that way.</div><div dir="auto"><br></div><div dir="auto">Also are you saying that 32G is your aggregate throughout across multiple FCs? That's only 4GB/s.</div><div dir="auto"><br></div><div dir="auto">Check out the fiber hardware and make sure you divided your work evenly across port groups and have clear paths to the storage through each port group, or ensure all the workload is in one portgroup and make sure you're not exceeding that port groups speed.</div><div dir="auto"><br></div><div dir="auto">Alec</div><div dir="auto"><br></div><div dir="auto"><br></div><br><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Tue, Jan 23, 2024, 6:06 AM Petr Plodík <<a href="mailto:petr.plodik@mcomputers.cz">petr.plodik@mcomputers.cz</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi, <br>
<br>
> we have a GPFS cluster with two IBM FlashSystem 7300 systems, each with an HD expansion and 80x 12TB HDDs (in DRAID 8+P+Q), and 3 GPFS servers connected via 32G FC. We are doing performance tuning on sequential writes to the HDDs and seeing suboptimal performance. After several tests, it turns out that the bottleneck seems to be the single-HDD write performance, which is below 40 MB/s, where one would expect at least 100 MB/s.
<br>
> Does anyone have experience with IBM FlashSystem sequential write performance tuning, or have these arrays in their infrastructure? We would really appreciate any help/explanation.
<br>
> Thank you!
<br>
> Petr Plodik
> M Computers s.r.o.
> petr.plodik@mcomputers.cz
<br>
<br>
<br>
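To make the "map out the chain" point concrete, here is a back-of-the-envelope sketch in Python. Every input is an assumption for illustration (two 32G ports per server, roughly 3.2 GB/s usable per 32GFC link, a made-up host aggregate rate, and the file system striping across all 160 HDDs), so plug in your own numbers. The point is that if the aggregate is pinned by something upstream, dividing it by the drive count measures the bottleneck, not the drives.

# Rough bandwidth budget for the chain described in this thread.
# Every number below is an ASSUMPTION for illustration; replace with real values.
GB = 1000**3  # decimal GB, the way link speeds are quoted

# Host / fabric side
fc_links       = 3 * 2        # assumed: 3 GPFS servers, 2 x 32G FC ports each
fc_link_usable = 3.2 * GB     # ~3.2 GB/s usable per 32GFC link (raw 32 Gb/s is ~4 GB/s)
fabric_ceiling = fc_links * fc_link_usable

# Array / drive side
arrays           = 2
hdds_per_array   = 80
draid_data_ratio = 8 / 10     # DRAID 8+P+Q: 8 data strips out of every 10 written

# Hypothetical measured aggregate host write rate; NOT a real measurement:
host_aggregate = 6.0 * GB

# The naive per-drive figure: aggregate divided by drive count
naive_per_drive = host_aggregate / (arrays * hdds_per_array)

# Closer to what each HDD actually writes for full-stripe writes
# (parity overhead included; distributed spare/rebuild areas still ignored):
media_per_drive = (host_aggregate / draid_data_ratio) / (arrays * hdds_per_array)

print(f"fabric ceiling         : {fabric_ceiling / GB:6.1f} GB/s")
print(f"naive per-drive rate   : {naive_per_drive / 1e6:6.1f} MB/s")
print(f"per-drive incl. parity : {media_per_drive / 1e6:6.1f} MB/s")

With those made-up inputs the naive per-drive number comes out around 37 MB/s even though nothing in that budget says the drives themselves are the limit, which is exactly why I'd want to see the whole chain before blaming the HDDs.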
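And on the single-thread vs. multi-thread check: you'd normally use fio, gpfsperf or nsdperf for this, but here is a crude, self-contained Python sketch of the idea. The target path and sizes are placeholders, and it writes through the page cache (no direct I/O), so treat it as a "does the aggregate move when I add writers" check rather than a benchmark.

#!/usr/bin/env python3
"""Crude single- vs multi-threaded sequential write check (illustrative only)."""
import os
import sys
import time
from concurrent.futures import ThreadPoolExecutor

TARGET_DIR = sys.argv[1] if len(sys.argv) > 1 else "/gpfs/fs1/writetest"  # placeholder path
FILE_SIZE  = 8 * 1024**3      # 8 GiB per writer; make it big enough to swamp caches
BLOCK      = 8 * 1024**2      # 8 MiB sequential writes
BUF        = b"\0" * BLOCK

def writer(idx: int) -> int:
    """Sequentially write FILE_SIZE bytes to its own file, fsync, return bytes written."""
    path = os.path.join(TARGET_DIR, f"seqwrite.{idx}")
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        written = 0
        while written < FILE_SIZE:
            written += os.write(fd, BUF)
        os.fsync(fd)
        return written
    finally:
        os.close(fd)

def run(threads: int) -> None:
    os.makedirs(TARGET_DIR, exist_ok=True)
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        total = sum(pool.map(writer, range(threads)))
    elapsed = time.monotonic() - start
    print(f"{threads:2d} writer(s): {total / elapsed / 1e6:8.1f} MB/s aggregate")

if __name__ == "__main__":
    for n in (1, 2, 4, 8):    # watch where the aggregate stops scaling
        run(n)
    # remember to delete the seqwrite.* files afterwards

If the aggregate barely moves between 1 and 8 writers, the limit is upstream of the client threads; if it scales and then flattens, compare the plateau against your fabric and port-group ceilings.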