<div dir="ltr">Let me clarify and get back to you. I am not 100% sure about the cross-cluster case. I think the main point is that the FS manager for that file system should be reassigned (which could also happen via mmchmgr) and then the individual clients that mount that file system restarted, but I will double-check and reply later.<div><br></div></div><br><div class="gmail_quote"><div dir="ltr">On Thu, May 4, 2017 at 6:39 AM Simon Thompson (IT Research Support) <<a href="mailto:S.J.Thompson@bham.ac.uk">S.J.Thompson@bham.ac.uk</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Which cluster, though? The client and storage are separate clusters, so all the nodes on the remote cluster or the storage cluster?<br>
<br>
Thanks<br>
<br>
Simon<br>
________________________________________<br>
From: <a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@spectrumscale.org</a> [<a href="mailto:gpfsug-discuss-bounces@spectrumscale.org" target="_blank">gpfsug-discuss-bounces@spectrumscale.org</a>] on behalf of <a href="mailto:oehmes@gmail.com" target="_blank">oehmes@gmail.com</a> [<a href="mailto:oehmes@gmail.com" target="_blank">oehmes@gmail.com</a>]<br>
Sent: 04 May 2017 14:28<br>
To: gpfsug main discussion list<br>
Subject: Re: [gpfsug-discuss] HAWC question<br>
<br>
Well, it's a bit complicated, which is why the message is there in the first place.<br>
<br>
The reason is that there is no easy way to tell except by dumping the stripe group on the file system manager, checking which log group your particular node is assigned to, and then checking the size of that log group.<br>
<br>
As soon as a client node is restarted it should, in most cases, pick up a new log group at the new size, but to be 100% sure we say all nodes need to be restarted.<br>
<br>
You also need to turn HAWC on; I assume you just left this out of the email. Just changing the log size doesn't enable it. :-)<br>
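For reference, the sequence discussed above might look roughly like the following. The file system name <code>gpfs0</code> and node name <code>node1</code> are hypothetical, and the exact flags should be verified against the documentation for your Spectrum Scale release:

```shell
# Sketch only: names are placeholders; verify flags for your release.

# Increase the recovery log size (takes effect after daemon restarts):
mmchfs gpfs0 -L 128M

# Enable HAWC by setting a write-cache threshold; changing -L alone
# does not turn HAWC on:
mmchfs gpfs0 --write-cache-threshold 64K

# Inspect the configured file system attributes:
mmlsfs gpfs0

# Rolling restart, one node at a time:
mmshutdown -N node1
mmstartup -N node1
```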
<br>
On Thu, May 4, 2017 at 6:15 AM Simon Thompson (IT Research Support) <<a href="mailto:S.J.Thompson@bham.ac.uk" target="_blank">S.J.Thompson@bham.ac.uk</a>> wrote:<br>
Hi,<br>
<br>
I have a question about HAWC. We are trying to enable this for our<br>
OpenStack environment. The system pool is on SSD already, so we tried to<br>
change the log file size with:<br>
<br>
mmchfs FSNAME -L 128M<br>
<br>
This says:<br>
<br>
mmchfs: Attention: You must restart the GPFS daemons before the new log<br>
file<br>
size takes effect. The GPFS daemons can be restarted one node at a time.<br>
When the GPFS daemon is restarted on the last node in the cluster, the new<br>
log size becomes effective.<br>
<br>
<br>
We multi-cluster the file system, so do we have to restart every node in<br>
all clusters, or just in the storage cluster?<br>
<br>
And how do we tell once it has become active?<br>
<br>
Thanks<br>
<br>
Simon<br>
<br>
_______________________________________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at <a href="http://spectrumscale.org" rel="noreferrer" target="_blank">spectrumscale.org</a><br>
<a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss" rel="noreferrer" target="_blank">http://gpfsug.org/mailman/listinfo/gpfsug-discuss</a><br>
</blockquote></div>