[gpfsug-discuss] sequential I/O write - performance

Zdenek Salvet salvet at ics.muni.cz
Fri Feb 9 08:19:54 GMT 2024


On Thu, Feb 08, 2024 at 02:59:15PM +0000, Michal Hruška wrote:
> @Uwe
> Using iohist we found out that GPFS is overloading one dm-device (it took about 500 ms to finish I/Os). We replaced the "problematic" dm-device with a new one (as we have enough drives to play with), but the overloading issue just jumped to another dm-device.
> We believe that this behaviour is caused by GPFS, but we are unable to locate the root cause of it.

Hello,
this behaviour could be caused by an asymmetry in the data paths
of your storage; a relatively small imbalance can make the request queue
of a slightly slower disk grow seemingly out of proportion.
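One quick way to spot such an imbalance outside GPFS is to watch the
per-dm-device I/O accounting the kernel keeps (a sketch; field positions
follow the Linux /proc/diskstats format):

```shell
# For each device-mapper device, print the number of I/Os currently in
# flight and the cumulative milliseconds spent doing I/O. One dm device
# whose ms_doing_io grows much faster than its peers matches the
# "overloaded dm-device" symptom seen in mmdiag --iohist.
awk '$3 ~ /^dm-/ { print $3, "in_flight=" $12, "ms_doing_io=" $13 }' /proc/diskstats
```

Sampling this a few seconds apart and comparing the deltas shows whether
the extra queueing really follows one device or moves around, as you
observed after swapping the drive.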

In general, I think you need to scale your GPFS parameters down, not up,
to force better write clustering and reach the top speed of rotational
disks, unless the array controllers have a huge cache.
If you can change your benchmark workload, try synchronous writes
(dd oflag=dsync ...).
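A synchronous-write run along those lines could look like the following
sketch; the target path and sizes are illustrative, and bs should
normally match the file system block size:

```shell
# TARGET is an illustrative path; point it at a file on the GPFS
# file system under test (e.g. somewhere under its mount point).
TARGET=${TARGET:-/tmp/ddtest}

# oflag=dsync forces every 4 MiB block to stable storage before the
# next write is issued, so the reported throughput reflects the real
# disk path rather than write-behind caching absorbing the burst.
dd if=/dev/zero of="$TARGET" bs=4M count=64 oflag=dsync

rm -f "$TARGET"
```

Comparing this against the buffered run makes it easier to tell how much
of the observed speed comes from caching and how much from the disks.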

Best regards,
Zdenek Salvet                                              salvet at ics.muni.cz 
Institute of Computer Science of Masaryk University, Brno, Czech Republic
and CESNET, z.s.p.o., Prague, Czech Republic
Phone: ++420-549 49 6534                           Fax: ++420-541 212 747
----------------------------------------------------------------------------
      Teamwork is essential -- it allows you to blame someone else.
