<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>All nodes in a GPFS cluster need to be able to communicate over
the daemon (data) and admin networks. The exception is remote
clusters, which can have their own separate admin network (for the
cluster that they are a member of) but still require
communication over the daemon network.</p>
<p>The networks can be routed and on different subnets; however,
each member of the cluster will need to be able to communicate
with every other member.<br>
</p>
<p>With this in mind:</p>
<p>1) The quorum node will need to be accessible on both the
10.1.1.0/24 and 192.168.1.0/24 networks; however, again, the
network that the quorum node is on could be routed.<br>
2) Remote clusters don't need access to the home cluster's admin
network, as they will use their own cluster's admin network.</p>
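<p>To make point 2 concrete, here is a minimal sketch of the
remote-cluster setup (the cluster names, hostnames, and filesystem
name are illustrative, not taken from your environment). The contact
nodes given to the remote cluster are daemon-network names, so all
inter-cluster traffic flows over the daemon interfaces on port 1191
and the home cluster's admin network is never involved:</p>
<pre># On the home cluster: authorize the remote cluster and grant
# it access to filesystem gpfs0 (names are examples)
mmauth add remotecluster.example -k /tmp/remotecluster.pub
mmauth grant remotecluster.example -f gpfs0

# On the remote cluster: register the home cluster using its
# daemon-network (10.1.1.x) hostnames as contact nodes
mmremotecluster add homecluster.example -n gpfs1-data,gpfs2-data,gpfs3-data -k /tmp/homecluster.pub
mmremotefs add gpfs0 -f gpfs0 -C homecluster.example -T /gpfs0</pre>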
<p>As Eric has mentioned, I would double-check your 2+1 cluster
suggestion: do you mean 2 servers with NSDs (each with a quorum
role) plus 1 quorum node without NSDs, which gives you 3 quorum
nodes, or are you only going to have 1 quorum node?</p>
<p>If the latter, I would suggest using all 3 servers as quorum
nodes, as they should be licensed as GPFS servers anyway due to
their roles.<br>
</p>
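<p>To illustrate that 3-quorum layout, a sketch of a node descriptor
file and cluster creation (the hostnames are made up; assume the
daemon names resolve into 10.1.1.0/24 and the admin names into
192.168.1.0/24):</p>
<pre># nodes.txt -- format: daemonNodeName:designation:adminNodeName
gpfs1-data:quorum-manager:gpfs1-admin
gpfs2-data:quorum-manager:gpfs2-admin
gpfs3-data:quorum:gpfs3-admin

# Create the cluster with all three servers as quorum nodes
mmcrcluster -N nodes.txt -C homecluster.example</pre>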
<p>-- Lauz<br>
</p>
<br>
<div class="moz-cite-prefix">On 10/04/2017 17:58, J. Eric Wonderley
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CABOSGQe82-bQJ64hSgFZp8Wg-ixs=haXZRG2cCqXj-3D8FuSDg@mail.gmail.com">
<div dir="ltr">
<div>
<div>1) You want more than one quorum node on your server
cluster. The non-quorum node does need a daemon network
interface exposed to the client cluster, as do the quorum
nodes.<br>
<br>
</div>
2) No. The admin network is for intra-cluster
communications, not inter-cluster (between clusters). The daemon
interface (port 1191) is used for communications between
clusters. I think there is little benefit gained by
designating an admin network... maybe someone can point out the
benefits of an admin network.<br>
<br>
<br>
<br>
</div>
Eric Wonderley<br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Apr 10, 2017 at 12:47 PM,
Hans-Joachim Ehlers <span dir="ltr"><<a
href="mailto:service@metamodul.com" target="_blank"
moz-do-not-send="true">service@metamodul.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<p>My understanding of the GPFS networks is not quite
clear.<br>
</p>
<p>For a GPFS setup I would like to use 2 networks:<br>
</p>
<p>1) A daemon (data) network using port 1191, for
example <a href="http://10.1.1.0/24" target="_blank"
moz-do-not-send="true">10.1.1.0/24</a><br>
</p>
<p>2) An admin network, for example the <a
href="http://192.168.1.0/24" target="_blank"
moz-do-not-send="true">192.168.1.0/24</a> network<br>
</p>
<p>Questions:<br>
</p>
<p>1) Thus, in a 2+1 cluster (2 GPFS servers + 1 quorum
server) config: does the tiebreaker node need to
have access to the daemon (data) 10.1.1 network, or is it
sufficient for the tiebreaker node to be configured as
part of the admin 192.168.1 network?</p>
<p>2) Does a remote cluster need access to the GPFS admin
192.168.1 network, or is it sufficient for the remote
cluster to access the 10.1.1 network? If so, I assume
that remote-cluster commands and pings to/from the remote
cluster go via the daemon network?<br>
</p>
<p>Note:<br>
</p>
<p>I am aware of and have read <a
href="https://www.ibm.com/developerworks/community/wikis/home?lang=en#%21/wiki/General%20Parallel%20File%20System%20%28GPFS%29/page/GPFS%20Network%20Communication%20Overview"
target="_blank" moz-do-not-send="true">https://www.ibm.com/<wbr>developerworks/community/<wbr>wikis/home?lang=en#!/wiki/<wbr>General%20Parallel%20File%<wbr>20System%20(GPFS)/page/GPFS%<wbr>20Network%20Communication%<wbr>20Overview</a><span
class="HOEnZb"><font color="#888888"><br>
</font></span></p>
<span class="HOEnZb"><font color="#888888">
<div class="m_-5172766803062684994io-ox-signature">
<p><span style="font-size:10pt">-- </span><br>
<span style="font-size:10pt">Unix Systems Engineer
</span><br>
<span style="font-size:10pt">------------------------------<wbr>--------------------</span><br>
<span style="font-size:10pt">MetaModul GmbH</span><br>
<span style="font-size:10pt">Süderstr. 12</span><br>
<span style="font-size:10pt">25336 Elmshorn</span><br>
<span style="font-size:10pt">HRB: 11873 PI</span><br>
<span style="font-size:10pt">UstID: DE213701983</span><br>
<span style="font-size:10pt">Mobil: <a
href="tel:+49%20177%204393994"
value="+491774393994" target="_blank"
moz-do-not-send="true">+ 49 177 4393994</a></span><br>
<span style="font-size:10pt">Mail: <a
href="mailto:service@metamodul.com"
target="_blank" moz-do-not-send="true">service@metamodul.com</a></span></p>
</div>
</font></span></div>
<br>
______________________________<wbr>_________________<br>
gpfsug-discuss mailing list<br>
gpfsug-discuss at <a href="http://spectrumscale.org"
rel="noreferrer" target="_blank" moz-do-not-send="true">spectrumscale.org</a><br>
<a href="http://gpfsug.org/mailman/listinfo/gpfsug-discuss"
rel="noreferrer" target="_blank" moz-do-not-send="true">http://gpfsug.org/mailman/<wbr>listinfo/gpfsug-discuss</a><br>
<br>
</blockquote>
</div>
<br>
</div>
<br>
</blockquote>
<br>
</body>
</html>