This configuration (2 nodes and tiebreaker) is not designed to survive a node failure and a disk failure at the same time. What happens depends on where the cluster manager and the filesystem manager are running when a node and half of the disks disappear simultaneously.
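If you want to check where those roles currently sit, and which disks are configured as tiebreakers, something along these lines should show it (output will of course depend on your cluster):

    # show the cluster manager node and the filesystem manager of each filesystem
    mmlsmgr
    # show which NSDs are currently configured as tiebreaker disks
    mmlsconfig tiebreakerDisks
    # show the daemon/quorum state of all nodes
    mmgetstate -a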
For a real active-active configuration you may consider:
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1adv_actact.htm
From: Jan-Frode Myklebust <janfrode@tanso.net>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 05/04/2017 07:27 AM
Subject: Re: [gpfsug-discuss] Tiebreaker disk question
Sent by: gpfsug-discuss-bounces@spectrumscale.org

----------------------------------------------------------------------

This doesn't sound like normal behaviour. It shouldn't
matter which filesystem your tiebreaker disks belong to. I think the failure
was caused by something else, but am not able to guess from the little
information you posted. The mmfs.log will probably tell you the reason.
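A quick look at the GPFS log on both nodes around the time of the failover should show it, e.g. (standard log location; the exact path can vary slightly between releases):

    # most recent GPFS daemon log on each node
    less /var/adm/ras/mmfs.log.latest
    # or just pull out the quorum-related messages
    grep -i quorum /var/adm/ras/mmfs.log.latest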
-jf

On Wed, 3 May 2017 at 19:08, Shaun Anderson <SAnderson@convergeone.com> wrote:

We noticed some odd behavior recently. I have a customer with a small Scale (with Archive on top) configuration
that we recently updated to a dual node configuration. We are using
CES and set up a very small 3-NSD shared-root filesystem (gpfssr).
We also set up tiebreaker disks and figured it would be ok to use
the gpfssr NSDs for this purpose.

When we tried to perform some basic failover
testing, both nodes came down. It appears from the logs that when
we initiated the node failure (via mmshutdown command...not great, I know)
it unmounted and remounted the shared-root filesystem. When it did
this, the cluster lost access to the tiebreaker disks, figured it had lost
quorum and the other node came down as well.
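The test itself was nothing elaborate; roughly the following (the node name is just a placeholder):

    # check the daemon state of all nodes before the test
    mmgetstate -a
    # take one of the two quorum nodes down to simulate a node failure
    mmshutdown -N node1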
We got around this by changing the tiebreaker disks to our other, normal GPFS filesystem. After that, failover worked as expected. This is documented nowhere as far as I could find. I wanted to know if anybody else had experienced this and if this is expected behavior. All is well now and operating as we want, so I don't think we'll pursue a support request.
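For reference, the change itself is just repointing tiebreakerDisks at NSDs from the other filesystem; something like this (the NSD names are placeholders, and depending on the Scale level this may need to be done with GPFS down):

    # use NSDs from the other (non-shared-root) filesystem as tiebreakers
    mmchconfig tiebreakerDisks="nsd_fs1_1;nsd_fs1_2;nsd_fs1_3"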
Regards,

SHAUN ANDERSON
STORAGE ARCHITECT
O 208.577.2112
M 214.263.7014

NOTICE: This email message and any attachments hereto may contain confidential information. Any unauthorized review, use, disclosure, or distribution of such information is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy the original message and all copies of it.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss