[gpfsug-discuss] Using HAWC (write cache)

Sanchez, Paul Paul.Sanchez at deshaw.com
Wed Aug 26 13:50:44 BST 2015


There is a more severe issue with LROC enabled: a bug in saveInodePtrs() that results in segfaults and loss of acknowledged writes, and it has caused us to roll back all LROC for now.  We are testing an efix (ref Defect 970773, IV76155) now which addresses this, but I would advise against running LROC/HAWC in production without that fix. We experienced the problem on 4.1.0-6 but had the efix built against 4.1.1-1, so the exposure seems likely to cover all 4.1 versions.
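
For anyone who needs to back LROC out in the meantime, something along these lines should do it. Treat it as an untested sketch: the lroc* attributes are the standard mmchconfig settings, but check them against the docs for your level, and the node class name here is just a placeholder.

  # Turn off LROC caching of data, directory blocks and inodes on the
  # affected nodes ("lrocNodes" is a hypothetical node class)
  mmchconfig lrocData=no,lrocDirectories=no,lrocInodes=no -i -N lrocNodes

  # Once nothing references them, the localCache NSDs can be removed
  # with mmdelnsd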

Thx
Paul

________________________________
From: gpfsug-discuss-bounces at gpfsug.org on behalf of Oesterlin, Robert
Sent: Wednesday, August 26, 2015 8:27:36 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Yep, mine did too initially. It seems that after a number of days they get marked as removed. In any case, IBM confirmed it. So… tread lightly.
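
If you want to catch that happening, a crude periodic check of the device status is easy enough. This assumes the status string looks like the output Simon pasted below, so adjust it to whatever your level reports:

  # Check the LROC device status once an hour and flag anything not Running
  while true; do
      date
      mmdiag --lroc | grep -i 'status' | grep -vi 'Running' && echo "LROC device not Running!"
      sleep 3600
  done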

Bob Oesterlin
Sr Storage Engineer, Nuance Communications
507-269-0413


From: gpfsug-discuss-bounces at gpfsug.org on behalf of "Simon Thompson (Research Computing - IT Services)"
Reply-To: gpfsug main discussion list
Date: Wednesday, August 26, 2015 at 7:23 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Using HAWC (write cache)

Hmm, mine seem to be working; I created them this morning (on a client node):


mmdiag --lroc

=== mmdiag: lroc ===
LROC Device(s): '0A1E017755DD7808#/dev/sdb1;' status Running
Cache inodes 1 dirs 1 data 1  Config: maxFile 0 stubFile 0
Max capacity: 190732 MB, currently in use: 4582 MB
Statistics from: Tue Aug 25 14:54:52 2015

Total objects stored 4927 (4605 MB) recalled 81 (55 MB)
      objects failed to store 467 failed to recall 1 failed to inval 0
      objects queried 0 (0 MB) not found 0 = 0.00 %
      objects invalidated 548 (490 MB)

This was running 4.1.1-1.
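
If anyone wants a quick view of how full the cache is getting, the fill level can be pulled straight out of that output. The awk field positions are assumed from the format above, so verify them on your own level first:

  # Report the LROC fill level from the "Max capacity" line
  mmdiag --lroc | awk '/Max capacity/ {printf "%.1f%% of LROC in use\n", 100*$8/$3}'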

Simon

