<font size=2 face="sans-serif">from your file system configuration (mmlsfs <dev> -L) you'll find the size of the log</font><br><font size=2 face="sans-serif">since release 4.x you can change it, but you need to re-mount the FS on every client to make the change effective
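A quick sketch of the commands involved (the device name "gpfs1" and the new size are placeholders, and the sample mmlsfs line below is what such output typically looks like, not taken from your system):

```shell
# Show the configured log file size (the -L attribute) -- hypothetical device "gpfs1":
#   mmlsfs gpfs1 -L
# Change it (every client must remount before the change takes effect):
#   mmchfs gpfs1 -L 32M
# A typical mmlsfs line for -L might look like this; pulling the size out of it:
line=' -L                 4194304                  Logfile size'
echo "$line" | awk '{print $2}'
```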
</font><br><br><font size=2 face="sans-serif">when a client initiates writes or changes to GPFS, it needs to record those changes in the log - if the log approaches a certain fill level, GPFS triggers so-called logWrapThreads to write content out to disk and thereby free up space </font><br><br><font size=2 face="sans-serif">with your given numbers ... double-digit
[ms] waiter times .. your file system probably gets slowed down .. and there's something suspect with the storage, because log I/Os are rather small and should not take that long</font><br><br><font size=2 face="sans-serif">to give you an example from a healthy environment ... the I/O times are so small that you usually don't see waiters
for this.. </font><br><br><tt><font size=1>I/O start time RW Buf type disk:sectorNum nSec time ms tag1 tag2 Disk UID typ NSD node context thread<br>--------------- -- ----------- ----------------- ----- ------- --------- --------- ------------------ --- --------------- --------- ----------<br>06:23:32.358851 W logData 2:524306424 8 0.439 0 0 C0A70D08:57CF40D1 cli 192.167.20.17 LogData SGExceptionLogBufferFullThread<br>06:23:33.576367 W logData 1:524257280 8 0.646 0 0 C0A70D08:57CF40D0 cli 192.167.20.16 LogData SGExceptionLogBufferFullThread<br>06:23:32.212426 W iallocSeg 1:524490048 64 0.733 2 245 C0A70D08:57CF40D0 cli 192.167.20.16 Logwrap LogWrapHelperThread<br>06:23:32.212412 W logWrap 2:524552192 8 0.755 0 179200 C0A70D08:57CF40D1 cli 192.167.20.17 Logwrap LogWrapHelperThread<br>06:23:32.212432 W logWrap 2:525162760 8 0.737 0 125473 C0A70D08:57CF40D1 cli 192.167.20.17 Logwrap LogWrapHelperThread<br>06:23:32.212416 W iallocSeg 2:524488384 64 0.763 2 347 C0A70D08:57CF40D1 cli 192.167.20.17 Logwrap LogWrapHelperThread<br>06:23:32.212414 W logWrap 2:525266944 8 2.160 0 177664 C0A70D08:57CF40D1 cli 192.167.20.17
Logwrap LogWrapHelperThread</font></tt><tt><font size=3><br><br></font></tt><br><font size=2 face="sans-serif">hope this helps .. </font><br><br><br><div><font size=2 face="sans-serif">Mit freundlichen Grüßen / Kind regards</font><br><br><font size=2 face="sans-serif"> <br>Olaf Weiser<br> <br>EMEA Storage Competence Center Mainz, German / IBM Systems, Storage Platform,<br>-------------------------------------------------------------------------------------------------------------------------------------------<br>IBM Deutschland<br>IBM Allee 1<br>71139 Ehningen<br>Phone: +49-170-579-44-66<br>E-Mail: olaf.weiser@de.ibm.com<br>-------------------------------------------------------------------------------------------------------------------------------------------<br>IBM Deutschland GmbH / Vorsitzender des Aufsichtsrats: Martin Jetter<br>Geschäftsführung: Martina Koederitz (Vorsitzende), Susanne Peter, Norbert
Janzen, Dr. Christian Keller, Ivo Koerner, Markus Koerner<br>Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart,
HRB 14562 / WEEE-Reg.-Nr. DE 99369940 </font><br><br><br><br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">Aaron Knister <aaron.s.knister@nasa.gov></font><br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">gpfsug main discussion
list <gpfsug-discuss@spectrumscale.org></font><br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">10/15/2016 07:23 AM</font><br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">[gpfsug-discuss]
SGExceptionLogBufferFullThread waiter</font><br><font size=1 color=#5f5f5f face="sans-serif">Sent by:
</font><font size=1 face="sans-serif">gpfsug-discuss-bounces@spectrumscale.org</font><br><hr noshade><br><br><br><tt><font size=2>I've got a node that's got some curious waiters on
it (see below). Could <br>someone explain what the "SGExceptionLogBufferFullThread" waiter
means?<br><br>Thanks!<br><br>-Aaron<br><br>=== mmdiag: waiters ===<br>0x7FFFF040D600 waiting 0.038822715 seconds, <br>SGExceptionLogBufferFullThread: on ThCond 0x7FFFDBB07628 <br>(0x7FFFDBB07628) (parallelWaitCond), reason 'wait for parallel write' <br>for NSD I/O completion on node 10.1.53.5 <c0n20><br>0x7FFFE83F3D60 waiting 0.039629116 seconds, CleanBufferThread: on ThCond
<br>0x17B1488 (0x17B1488) (MsgRecordCondvar), reason 'RPC wait' for NSD I/O
<br>completion on node 10.1.53.7 <c0n22><br>0x7FFFE8373A90 waiting 0.038921480 seconds, CleanBufferThread: on ThCond
<br>0x7FFFCD2B4E30 (0x7FFFCD2B4E30) (LogFileBufferDescriptorCondvar), reason
<br>'force wait on force active buffer write'<br>0x42CD9B0 waiting 0.028227004 seconds, CleanBufferThread: on ThCond <br>0x7FFFCD2B4E30 (0x7FFFCD2B4E30) (LogFileBufferDescriptorCondvar), reason
<br>'force wait for buffer write to complete'<br>0x7FFFE0F0EAD0 waiting 0.027864343 seconds, CleanBufferThread: on ThCond
<br>0x7FFFDC0EEA88 (0x7FFFDC0EEA88) (MsgRecordCondvar), reason 'RPC wait' <br>for NSD I/O completion on node 10.1.53.7 <c0n22><br>0x1575560 waiting 0.028045975 seconds, RemoveHandlerThread: on ThCond <br>0x18020CE4E08 (0xFFFFC90020CE4E08) (LkObjCondvar), reason 'waiting for
<br>LX lock'<br>0x1570560 waiting 0.038724949 seconds, CreateHandlerThread: on ThCond <br>0x18020CE50A0 (0xFFFFC90020CE50A0) (LkObjCondvar), reason 'waiting for
<br>LX lock'<br>0x1563D60 waiting 0.073919918 seconds, RemoveHandlerThread: on ThCond <br>0x180235F6440 (0xFFFFC900235F6440) (LkObjCondvar), reason 'waiting for
<br>LX lock'<br>0x1561560 waiting 0.054854513 seconds, RemoveHandlerThread: on ThCond <br>0x1802292D200 (0xFFFFC9002292D200) (LkObjCondvar), reason 'waiting for
<br>LX lock'<br>_______________________________________________<br>gpfsug-discuss mailing list<br>gpfsug-discuss at spectrumscale.org<br></font></tt><a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"><tt><font size=2>http://gpfsug.org/mailman/listinfo/gpfsug-discuss</font></tt></a><tt><font size=2><br><br></font></tt><br><br></div><BR>
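As an aside: when a node shows many waiters like the ones quoted above, it can help to rank the `mmdiag --waiters` output by wait time. A small sketch, with the field positions assumed from the quoted sample (the embedded string stands in for real output; on a live node you would pipe `mmdiag --waiters` in instead):

```shell
# Rank waiters by wait time. Fields assumed per the quoted sample:
# <addr> waiting <seconds> seconds, <ThreadName>: ...
sample='0x7FFFF040D600 waiting 0.038822715 seconds, SGExceptionLogBufferFullThread: on ThCond
0x1563D60 waiting 0.073919918 seconds, RemoveHandlerThread: on ThCond
0x1561560 waiting 0.054854513 seconds, RemoveHandlerThread: on ThCond'

# Print "<seconds> <thread name>" and sort longest-first:
echo "$sample" | awk '$2 == "waiting" {print $3, $5}' | sort -rn | head -3
```

This prints the longest waiters first, which makes the double-digit-millisecond log I/Os stand out immediately.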