[gpfsug-discuss] Long IO waiters and IBM Storwize V5030

Jan-Frode Myklebust janfrode at tanso.net
Fri May 28 18:50:21 BST 2021


One thing to check: Storwize/SVC code will *always* guess wrong on
prefetching for GPFS. You can see this in the web UI as a much higher read
data throughput on the mdisks than on the vdisks. To fix it, disable
prefetching with "chsystem -cache_prefetch off".
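
A rough sketch of doing this from the Storwize CLI (assuming ssh access as
the superuser account; <cluster_ip> is a placeholder for your management
IP, and field names may differ slightly between code levels):

    # show the current global prefetch setting (grep runs on your workstation)
    ssh superuser@<cluster_ip> lssystem | grep cache_prefetch

    # disable global read prefetching
    ssh superuser@<cluster_ip> chsystem -cache_prefetch off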

Since this is a global setting, you should probably only change it if the
system is used exclusively for GPFS.
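
To confirm where the latency sits, a before/after comparison on the NSD
servers can help, e.g. (a sketch; mmdiag output formats vary by Scale
release):

    # recent physical I/Os on this NSD server, with per-I/O service times
    mmdiag --iohist

    # long-running waiters, as in the output quoted below
    mmdiag --waiters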


   -jf

On Fri, May 28, 2021 at 5:58 PM Saula, Oluwasijibomi <
oluwasijibomi.saula at ndsu.edu> wrote:

> Hi Folks,
>
> So, we are experiencing some very long IO waiters in our GPFS cluster:
>
> #  mmdiag --waiters
>
>
> === mmdiag: waiters ===
>
> Waiting 17.3823 sec since 10:41:01, monitored, thread 21761 NSDThread: for
> I/O completion
>
> Waiting 16.6140 sec since 10:41:02, monitored, thread 21730 NSDThread: for
> I/O completion
>
> Waiting 15.3004 sec since 10:41:03, monitored, thread 21763 NSDThread: for
> I/O completion
>
> Waiting 15.2013 sec since 10:41:03, monitored, thread 22175
>
> However, GPFS support is pointing to our IBM Storwize V5030 disk system
> as the source of the latency. Unfortunately, we don't have paid support
> for the system, so we are asking whether anyone here might be able to
> assist.
>
> Does anyone by chance have any experience with IBM Storwize V5030 or
> possess a problem determination guide for the V5030?
>
> We've briefly reviewed the V5030 management portal, but we still haven't
> identified a cause for the increased latencies (read ~129 ms, write
> ~198 ms).
>
> Granted, we have some heavy client workloads, yet we seem to experience
> this drastic drop in performance every couple of months, probably
> exacerbated by heavy IO demands.
>
> Any assistance would be much appreciated.
>
>
> Thanks,
>
> *Oluwasijibomi (Siji) Saula*
>
> HPC Systems Administrator  /  Information Technology
>
>
>
> Research 2 Building 220B / Fargo ND 58108-6050
>
> p: 701.231.7749 / www.ndsu.edu
>
>
>
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>