<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
In my case, GPFS storage is used to store VM images (KVM), hence
the small I/O.<br>
<br>
I always see lots of small 4K writes, even though the GPFS file
system block size is 8 MB. I believe the reason for the small
writes is that the Linux kernel requests GPFS to initiate a
periodic sync, which by default runs every 5 seconds and is
controlled by "vm.dirty_writeback_centisecs". <br>
<br>
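To double-check that theory, the writeback interval can be inspected
and tuned from the shell (a sketch only; the value is in centiseconds,
so 500 = 5 seconds, and the write requires root):<br>

```shell
# Read the current periodic writeback interval (default 500 = 5 s)
cat /proc/sys/vm/dirty_writeback_centisecs

# Lengthen it (requires root) so dirty pages are flushed less often and
# in potentially larger batches -- at the cost of more unwritten data
# being at risk if the node crashes:
sysctl -w vm.dirty_writeback_centisecs=3000
```
<br>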
I expected HAWC to help in such cases: it would harden (coalesce)
the small writes in the "system" pool and flush them to the "data"
pool in larger blocks. <br>
<br>
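As a sanity check on whether HAWC is engaged (a sketch; "gpfs01" is the
file system name used in the thread below, and querying the setting via
the same flag name with mmlsfs is an assumption), one could read the
threshold back and watch where the I/O actually lands:<br>

```shell
# Confirm the write-cache threshold currently set on the file system
mmlsfs gpfs01 --write-cache-threshold

# Inspect recent I/O: with HAWC engaged, small synchronous writes
# should appear as log writes against the system pool rather than as
# 4K data writes to the data pool
mmdiag --iohist
```
<br>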
Note: I am not doing direct I/O explicitly. <br>
<br>
<br>
<br>
<div class="moz-cite-prefix">On 8/1/2016 14:49, Sven Oehme wrote:<br>
</div>
<blockquote
cite="mid:CALssuR3HqvoE1xzW0FWjmN2bhH4cAPWM-_PDu8ViV+sDC5cyFg@mail.gmail.com"
type="cite">
<div dir="ltr">When you say 'synchronous write', what do you
mean by that?
<div>If you are talking about direct I/O (the O_DIRECT
flag), such writes don't use the HAWC data path; that's by
design.</div>
<div><br>
</div>
<div>sven</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Aug 1, 2016 at 11:36 AM, Tejas
Rao <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:raot@bnl.gov" target="_blank">raot@bnl.gov</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">I have
enabled the write cache (HAWC) by running the commands below.
As I understand it, the recovery logs are placed in the
replicated system metadata pool (SSDs). I do not have a
"system.log" pool, as one is only needed if recovery logs are
stored on the client nodes.<br>
<br>
mmchfs gpfs01 --write-cache-threshold 64K<br>
mmchfs gpfs01 -L 1024M<br>
mmchconfig logPingPongSector=no<br>
<br>
I have recycled the daemon on all nodes in the cluster
(including the NSD nodes).<br>
<br>
I still see small synchronous writes (4K) from the clients
going to the data drives (data pool). I am checking this by
looking at "mmdiag --iohist" output. Should they not be
going to the system pool?<br>
<br>
Do I need to do something else? How can I confirm that HAWC
is working as advertised?<br>
<br>
Thanks.<br>
<br>
<br>
_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at <a moz-do-not-send="true"
href="http://spectrumscale.org" rel="noreferrer"
target="_blank">spectrumscale.org</a><br>
<a moz-do-not-send="true"
href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"
rel="noreferrer" target="_blank">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a><br>
</blockquote>
</div>
<br>
</div>
<br>
</blockquote>
<br>
</body>
</html>