[gpfsug-discuss] 'mmces address move' weirdness?

Simon Thompson (IT Research Support) S.J.Thompson at bham.ac.uk
Tue Jun 13 09:28:25 BST 2017


Suspending the node doesn't stop the services though, we've done a bunch of testing by connecting to the "real" IP on the box we wanted to test and that works fine.

OK, so you end up connecting to shares like \\192.168.1.20\sharename, but it's perfectly fine for testing purposes.

In our experience, suspending the node has been fine for this as it moves the IP to a "working" node and keeps user service running whilst we test.
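For reference, the rough workflow is sketched below. The node name is a placeholder, and exact behaviour may vary by Scale release, so treat this as an outline rather than gospel:

```shell
# Suspend the CES node: its floating addresses move to the remaining
# nodes, and it won't be assigned any while it stays suspended.
mmces node suspend -N protocolnode1

# Test by connecting to the node's "real" (non-CES) IP while users
# carry on against the floated addresses on the other nodes.

# When finished, resume the node; addresses rebalance back
# according to the configured distribution policy.
mmces node resume -N protocolnode1
```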

Simon

From: gpfsug-discuss-bounces at spectrumscale.org on behalf of "Sobey, Richard A" <r.sobey at imperial.ac.uk>
Reply-To: gpfsug-discuss at spectrumscale.org
Date: Tuesday, 13 June 2017 at 09:08
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] 'mmces address move' weirdness?

Yes, suspending the node would do it, but in the case where you want to remove a node from service but keep it running for testing it’s not ideal.

I think you can set the IP address balancing policy to none, which might do what we want.
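If memory serves, the distribution policy is set cluster-wide with mmces address policy; the option names below are as I recall them from the Scale docs, so worth double-checking against your version:

```shell
# Stop CES from automatically rebalancing addresses; moves made
# with 'mmces address move' should then stay where you put them.
mmces address policy none

# Revert to the default distribute-evenly behaviour later.
mmces address policy even-coverage
```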
From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Simon Thompson (IT Research Support)
Sent: 12 June 2017 21:06
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] 'mmces address move' weirdness?

mmces node suspend -N <node>

is what you want. This will move the address off the node and stop it being assigned one; otherwise the rebalance will occur. I think you can change the way it balances, but the default is to distribute.

Simon

From: gpfsug-discuss-bounces at spectrumscale.org on behalf of "Sobey, Richard A" <r.sobey at imperial.ac.uk>
Reply-To: gpfsug-discuss at spectrumscale.org
Date: Monday, 12 June 2017 at 21:01
To: gpfsug-discuss at spectrumscale.org
Subject: Re: [gpfsug-discuss] 'mmces address move' weirdness?


I think it's intended but I don't know why. The AUTH service became unhealthy on one of our CES nodes (SMB only) and we moved its float address elsewhere. CES decided to move it back again moments later despite the node not being fit.



Sorry that doesn't really help but at least you're not alone!

________________________________
From: gpfsug-discuss-bounces at spectrumscale.org on behalf of valdis.kletnieks at vt.edu
Sent: 12 June 2017 20:41
To: gpfsug-discuss at spectrumscale.org
Subject: [gpfsug-discuss] 'mmces address move' weirdness?

So here's our address setup:

mmces address list

Address         Node                                Group      Attribute
-------------------------------------------------------------------------
172.28.45.72    arproto1.ar.nis.isb.internal        isb        none
172.28.45.73    arproto2.ar.nis.isb.internal        isb        none
172.28.46.72    arproto2.ar.nis.vtc.internal        vtc        none
172.28.46.73    arproto1.ar.nis.vtc.internal        vtc        none

We're having some nfs-ganesha weirdness on arproto2.ar.nis.vtc.internal, so I tried to
move the address over to its pair so I can look around without impacting users.
However, something seems to insist on moving it right back 60 seconds
later...

Question 1: Is this expected behavior?
Question 2: If it is, what use is 'mmces address move' if it just gets
undone a few seconds later?

(running on arproto2.ar.nis.vtc.internal):

## (date; ip addr show | grep '\.72';mmces address move --ces-ip 172.28.46.72 --ces-node arproto1.ar.nis.vtc.internal;  while (/bin/true); do date; ip addr show | grep '\.72'; sleep 1; done;) | tee migrate.not.nailed.down
Mon Jun 12 15:34:33 EDT 2017
    inet 172.28.46.72/26 brd 172.28.46.127 scope global secondary bond1:0
Mon Jun 12 15:34:40 EDT 2017
Mon Jun 12 15:34:41 EDT 2017
Mon Jun 12 15:34:42 EDT 2017
Mon Jun 12 15:34:43 EDT 2017
(skipped)
Mon Jun 12 15:35:44 EDT 2017
Mon Jun 12 15:35:45 EDT 2017
    inet 172.28.46.72/26 brd 172.28.46.127 scope global secondary bond1:0
Mon Jun 12 15:35:46 EDT 2017
    inet 172.28.46.72/26 brd 172.28.46.127 scope global secondary bond1:0
Mon Jun 12 15:35:47 EDT 2017
    inet 172.28.46.72/26 brd 172.28.46.127 scope global secondary bond1:0
^C

