[gpfsug-discuss] GPFS 5.1.9.4 on Windows 11 Pro. Performance issues, write.
Uwe Falke
uwe.falke at kit.edu
Wed Sep 4 16:04:51 BST 2024
On 04.09.24 15:59, Henrik Cednert wrote:
> Thanks Uwe
Just saw: the app seems to issue IO requests not sequentially but in
bunches.
What is apparent (but I had not recognized it before):
after such a bunch of IOs, the next bunch is typically issued once the
longest-running IO from the previous bunch has completed. For example:
13:24:59.341629 R data 17:805453824 16384 3,993 cli C0A82DD5:63877BDA 192.168.45.213
13:24:59.341629 R data 6:1720532992 16384 27,471 cli C0A82DD5:63877BCE 192.168.45.214
13:24:59.341629 R data 14:1720532992 16384 44,953 cli C0A82DD5:63877BD6 192.168.45.214
Three read IOs are issued at 13:24:59.341629, the longest taking 44.953 ms.
59.341629 + 0.044953 = 59.386582, which is about 1 ms before the next IO
request is issued at 13:24:59.387584:
13:24:59.387584 R data 17:805453824 16384 5,990 cli C0A82DD5:63877BDA 192.168.45.213
That one takes just 5.990 ms; the next bunch of IO requests is seen 6.988
ms later (i.e. again the service time plus about 1 ms):
13:24:59.394572 R data 17:805453824 16384 7,993 cli C0A82DD5:63877BDA 192.168.45.213
Now a triplet of IOs again, but one of them takes very long:
13:24:59.402565 R data 25:805453824 16384 4,991 cli C0A82DD5:63877BE2 192.168.45.213
13:24:59.402565 R data 22:1720532992 16384 26,309 cli C0A82DD5:63877BDF 192.168.45.214
13:24:59.402565 R data 7:1720532992 16384 142,856 cli C0A82DD5:63877BCF 192.168.45.213
That incurs the expected delay of 143.854 ms (again, the longest service
time plus about 1 ms):
13:24:59.546419 R data 25:805453824 16384 7,992 cli C0A82DD5:63877BE2 192.168.45.213
And so on:
13:24:59.556408 R data 25:805453824 16384 7,987 cli C0A82DD5:63877BE2 192.168.45.213
13:24:59.564395 R data 10:805453824 16384 5,505 cli C0A82DD5:63877BD2 192.168.45.214
13:24:59.564395 R data 23:1720532992 16384 28,053 cli C0A82DD5:63877BE0 192.168.45.213
13:24:59.564395 R data 15:1720532992 16384 33,044 cli C0A82DD5:63877BD8 192.168.45.213
13:24:59.598437 R data 10:805453824 16384 5,504 cli C0A82DD5:63877BD2 192.168.45.214
13:24:59.604939 R data 18:805453824 16384 4,993 cli C0A82DD5:63877BDB 192.168.45.214
13:24:59.604939 R data 8:1720532992 16384 36,015 cli C0A82DD5:63877BD0 192.168.45.214
...
So there is some parallelization in the IO, but using multiple threads
should still improve things, as the app apparently waits for IOs to
complete before issuing the next bunch. And of course, with otherwise
unloaded storage and just one client, service times above 25 ms should not
be seen at all, and most should be in the 10...20 ms range.
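
For completeness, a minimal sketch (Python) of how one could group such
trace lines into bunches by issue timestamp and check how long after the
slowest IO of each bunch completes the next bunch is issued. The field
positions and the comma decimal separator are assumptions based on the
excerpt above, not a documented format:

    from collections import defaultdict

    def parse_bunches(lines):
        # issue timestamp (seconds since midnight) -> service times in seconds
        bunches = defaultdict(list)
        for line in lines:
            f = line.split()
            if len(f) < 6 or f[1] not in ("R", "W"):
                continue
            h, m, s = f[0].split(":")
            t_issue = int(h) * 3600 + int(m) * 60 + float(s)
            svc = float(f[5].replace(",", ".")) / 1000.0  # "44,953" ms -> 0.044953 s
            bunches[t_issue].append(svc)
        return sorted(bunches.items())

    def report(lines):
        bunches = parse_bunches(lines)
        for (t0, svcs), (t1, _) in zip(bunches, bunches[1:]):
            done = t0 + max(svcs)          # completion of the slowest IO in the bunch
            gap_ms = (t1 - done) * 1000.0  # delay until the next bunch is issued
            print(f"bunch @{t0:.6f}: {len(svcs)} IOs, "
                  f"slowest {max(svcs)*1000:.3f} ms, "
                  f"next bunch {gap_ms:+.3f} ms after its completion")

On the excerpt above this should report gaps of roughly +1 ms each, which
is the pattern described.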
Uwe
--
Karlsruhe Institute of Technology (KIT)
Scientific Computing Centre (SCC)
Scientific Data Management (SDM)
Uwe Falke
Hermann-von-Helmholtz-Platz 1, Building 442, Room 187
D-76344 Eggenstein-Leopoldshafen
Tel: +49 721 608 28024
Email: uwe.falke at kit.edu
www.scc.kit.edu
Registered office:
Kaiserstraße 12, 76131 Karlsruhe, Germany
KIT – The Research University in the Helmholtz Association