[gpfsug-discuss] system.log pool on client nodes for HAWC

Kenneth Waegeman kenneth.waegeman at ugent.be
Mon Sep 3 16:06:28 BST 2018


Thank you Vasily and Simon for the clarification!

I was looking further into it, and I got stuck with more questions :)


- In 
https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_hawc_tuning.htm 
I read:
     HAWC does not change the following behaviors:
         write behavior of small files when the data is placed in the inode itself
         write behavior of directory blocks or other metadata

I wondered why. Is the metadata not logged in the (same) recovery logs?
(From reading
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.ins.doc/bl1ins_logfile.htm
it seems it is.)
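
(Side note, in case it is useful to others reading along: if I read the
mmlsfs documentation correctly, the recovery log and HAWC related settings
of a file system can be listed with mmlsfs, for example (with 'gpfs0' as a
placeholder for the file system name):

    /usr/lpp/mmfs/bin/mmlsfs gpfs0 -L                       # internal log file size
    /usr/lpp/mmfs/bin/mmlsfs gpfs0 --log-replicas           # number of recovery log replicas
    /usr/lpp/mmfs/bin/mmlsfs gpfs0 --write-cache-threshold  # HAWC threshold, 0 means HAWC disabled

Please correct me if those attribute flags differ on your release.)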


- Would there be a way to estimate what share of the write requests on a
running cluster would benefit from enabling HAWC?
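
(To explain what I had in mind: my rough plan was to look at the size
distribution of the write requests, since HAWC, as far as I understand it,
only helps small synchronous writes below the write cache threshold. The
mmpmon request histogram facility should give that distribution. An
untested sketch, to be run on a node carrying the workload:

    echo "rhist on"  | /usr/lpp/mmfs/bin/mmpmon   # enable the request histogram facility
    # ... let a representative workload run for a while ...
    echo "rhist s"   | /usr/lpp/mmfs/bin/mmpmon   # counts per size/latency range ('rhist p' shows the ranges, I think)
    echo "rhist off" | /usr/lpp/mmfs/bin/mmpmon   # switch the facility off again

The share of writes in the smallest size buckets would then give a first,
crude upper bound on what HAWC could absorb, although the histogram does
not tell which of those writes are synchronous. mmdiag --iohist might be an
alternative for a quick look at recent I/O sizes. Corrections welcome!)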


Thanks again!

Kenneth

On 31/08/18 19:49, Vasily Tarasov wrote:
> That is correct. The blocks of each recovery log are striped across 
> the devices in the system.log pool (if it is defined). As a result, 
> even when all clients have a local device in the system.log pool, many 
> writes to the recovery log will go to remote devices. For a client 
> that lacks a local device in the system.log pool, log writes will 
> always be remote.
> Notice that typically in such a setup you would enable log
> replication for HA. Otherwise, if a single client fails (and its
> recovery log is lost), the whole cluster fails, as there is no log to
> recover the FS to a consistent state. Therefore, at least one remote
> write is essential.
> HTH,
> --
> Vasily Tarasov,
> Research Staff Member,
> Storage Systems Research,
> IBM Research - Almaden
>
>     ----- Original message -----
>     From: Kenneth Waegeman <kenneth.waegeman at ugent.be>
>     Sent by: gpfsug-discuss-bounces at spectrumscale.org
>     To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>     Cc:
>     Subject: [gpfsug-discuss] system.log pool on client nodes for HAWC
>     Date: Tue, Aug 28, 2018 5:31 AM
>     Hi all,
>
>     I was looking into HAWC, using the 'distributed fast storage in client
>     nodes' method
>     (https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_hawc_using.htm).
>
>     This is achieved by putting a local device on the clients in the
>     system.log pool. Reading another article
>     (https://www.ibm.com/support/knowledgecenter/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_syslogpool.htm),
>     I understand this pool would now be used for ALL file system recovery logs.
>
>     Does this mean that if you have a (small) subset of clients with fast
>     local devices added to the system.log pool, all other clients will use
>     these too, instead of the central system pool?
>
>     Thank you!
>
>     Kenneth
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
