[gpfsug-discuss] HAWC compare to regular pagepool

Sven Oehme oehmes at gmail.com
Fri Jan 1 19:57:16 GMT 2016


Hi Pavel,

HAWC is only used for stable buffered I/Os. For example, if an application
opens files with O_SYNC, then instead of the data being written directly to
shared storage it gets 'hardened' in the HAWC device, which is typically
some form of smaller but very fast non-volatile storage such as NVRAM, an
SSD, or a shared flash device. The acknowledgment to the application happens
as soon as the data is in the HAWC device, so it can't get lost even if you
lose power.
This is particularly useful for database applications, for VMs running on
top of GPFS, or for other workloads that primarily perform small stable writes.
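
As a minimal sketch of the stable-write pattern described above (the file
path is hypothetical): with O_SYNC, each write() returns only after the data
is on stable storage, and with HAWC that stable storage can be the fast
non-volatile log device rather than the shared disks.

#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    const char *path = "/gpfs/fs0/db/redo.log";   /* hypothetical file */
    char buf[4096];
    memset(buf, 0, sizeof(buf));

    /* O_SYNC: every write is a stable (synchronous) write. */
    int fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Small stable write: the call does not return until the data is
     * hardened, so it cannot be lost on power failure. */
    if (write(fd, buf, sizeof(buf)) < 0)
        perror("write");

    close(fd);
    return 0;
}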

sven


On Fri, Jan 1, 2016 at 11:22 AM, Pavel Pokorny <pavel.pokorny at datera.cz>
wrote:

> Hello,
> I would like to ask whether I understand HAWC functionality correctly,
> especially compared to regular pagepool behavior.
>
> Regular pagepool behavior:
>
>    1. Application on a node makes a write call.
>    2. Data is moved from application data buffer to page pool buffer.
>    3. If not using direct I/O (opened with O_DIRECT or with the -D
>    attribute set), at this point the application has completed the write
>    system call and GPFS acknowledges the write.
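
A minimal sketch of the two open() modes referred to in steps 1-3 (the paths
are hypothetical; note that O_DIRECT generally requires aligned buffers):

#define _GNU_SOURCE           /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Buffered write: completes once the data is in the pagepool. */
    int fd_buffered = open("/gpfs/fs0/buffered.dat",
                           O_WRONLY | O_CREAT, 0644);

    /* Direct write: bypasses the pagepool entirely. */
    int fd_direct = open("/gpfs/fs0/direct.dat",
                         O_WRONLY | O_CREAT | O_DIRECT, 0644);

    void *buf;
    if (posix_memalign(&buf, 4096, 4096) == 0) {   /* aligned for O_DIRECT */
        memset(buf, 0, 4096);
        if (fd_buffered >= 0) write(fd_buffered, buf, 4096);
        if (fd_direct   >= 0) write(fd_direct,   buf, 4096);
        free(buf);
    }
    if (fd_buffered >= 0) close(fd_buffered);
    if (fd_direct   >= 0) close(fd_direct);
    return 0;
}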
>
> My understanding is that with HAWC this behavior is very similar, with the
> difference that at step 3 the data is also stored in non-volatile storage
> on the client.
> Am I correct? This would mean that HAWC mainly gives us data hardening,
> and for direct writes also a performance benefit.
> Thanks, Pavel
>
> --
> Ing. Pavel Pokorný
> DATERA s.r.o. | Hadovitá 962/10 | Praha | Czech Republic
> www.datera.cz | Mobil: +420 602 357 194 | E-mail: pavel.pokorny at datera.cz
>
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>