[gpfsug-discuss] Using HAWC (write cache)

Simon Thompson (Research Computing - IT Services) S.J.Thompson at bham.ac.uk
Wed Aug 26 13:57:56 BST 2015


Oh, and one other question about HAWC: does it work when running
multi-cluster? I.e. can clients in a remote cluster have HAWC devices?

Simon

On 26/08/2015 12:26, "Simon Thompson (Research Computing - IT Services)"
<S.J.Thompson at bham.ac.uk> wrote:

>Hi,
>
>I was wondering if anyone knows how to configure HAWC, which was added in
>the 4.1.1 release (this is the highly available write cache)
>(http://www-01.ibm.com/support/knowledgecenter/#!/STXKQY/411/com.ibm.spectrum.scale.v4r11.adv.doc/bl1adv_hawc_using.htm)
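>
>Reading that page, my understanding is that HAWC itself is switched on per
>file system by setting a write cache threshold, along these lines (gpfs0
>and the 64K value are placeholders, so this is a sketch from the docs
>rather than something I've tested):
>
>  # harden writes of 64K or smaller into the recovery log first
>  mmchfs gpfs0 --write-cache-threshold 64K
>  # setting it back to 0 turns HAWC off again
>  mmchfs gpfs0 --write-cache-threshold 0
>
>What I can't work out is where that log data is supposed to live on client
>nodes, which is what the questions below are about.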
>
>In particular I'm interested in running it on my client systems, which
>have SSDs fitted for LROC. I was planning to use a small amount of the
>LROC SSD for HAWC on our hypervisors, as it buffers small I/O writes,
>which sounds like what we want for running VMs that are doing small I/O
>updates to the VM disk images stored on GPFS.
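>
>For reference, the rough plan on each hypervisor is just to carve the SSD
>into two partitions, something like the below (sizes picked out of the
>air, purely illustrative):
>
>  # /dev/sdb is the local SSD: a large sdb1 for LROC, a small sdb2 for HAWC
>  parted -s /dev/sdb mklabel gpt
>  parted -s /dev/sdb mkpart lroc 1MiB 90%
>  parted -s /dev/sdb mkpart hawc 90% 100%
>
>with sdb1 then used as the localCache NSD and sdb2 as the system.log NSD.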
>
>The docs are a little lacking in detail on how you create NSD disks on
>clients; I've tried using:
>%nsd: device=sdb2
>  nsd=cl0901u17_hawc_sdb2
>  servers=cl0901u17
>  pool=system.log
>  failureGroup=90117
>
>(and also with usage=metadataOnly), however mmcrnsd -F tells me:
>"mmcrnsd: Node cl0903u29.climb.cluster does not have a GPFS server license
>designation"
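>
>The only obvious way I can see to clear that particular error would be to
>give the node a server license designation, i.e. something like:
>
>  mmchlicense server --accept -N cl0901u17
>
>but that rather defeats the point if HAWC is meant to work on
>client-licensed nodes, so I'm guessing that isn't the intended route.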
>
>
>That is correct, as it's a client system, though HAWC is supposed to be
>able to run on client systems. I know for LROC you have to set
>usage=localCache; is there a new value for using HAWC?
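>
>For comparison, the LROC stanza that does work on these nodes looks
>roughly like this (the device and NSD names are made up, it's the shape
>that matters):
>
>  %nsd: device=sdb1
>    nsd=cl0901u17_lroc_sdb1
>    servers=cl0901u17
>    usage=localCache
>
>so I was half expecting a HAWC-specific usage value, but I can't see one
>documented anywhere.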
>
>I'm also a little unclear about failure groups for this. The docs suggest
>setting the HAWC data to be replicated for client systems, so I guess
>that means putting each client node into its own failure group?
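>
>I.e. something like this for two of the hypervisors (the second node's
>failure group is made up for illustration):
>
>  %nsd: device=sdb2
>    nsd=cl0901u17_hawc_sdb2
>    servers=cl0901u17
>    pool=system.log
>    failureGroup=90117
>
>  %nsd: device=sdb2
>    nsd=cl0903u29_hawc_sdb2
>    servers=cl0903u29
>    pool=system.log
>    failureGroup=90329
>
>with the log replication then (if I've understood it right) keeping a copy
>of each node's hardened writes on another node, so a single hypervisor
>failure doesn't lose them.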
>
>Thanks
>
>Simon