[gpfsug-discuss] Using VMs as quorum / admin nodes in a GPFS infiniband cluster

Jan-Frode Myklebust janfrode at tanso.net
Thu Jun 17 09:29:42 BST 2021


*All* nodes need to be able to communicate on the daemon network. If they
don't have access to this network, they can't join the cluster. It doesn't
need to be the same subnet; it can be routed, but they all have to be able to
reach each other. If you use IPoIB, you likely need something to route
between the IPoIB network and the outside world to reach the IP you have on
your VM. I don't think you will be able to use an IP address in the IPoIB
range for your VM, unless your VMware hypervisor is connected to the IB
fabric and can bridge it (I doubt that's possible).
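
A minimal sketch of what that routing could look like, with made-up example
addresses (192.168.12.0/24 for the IPoIB daemon network, 10.0.10.0/24 for the
VM's network, and a gateway host that has a leg in both):

~~~
# On the gateway host that sits on both networks: allow forwarding
sysctl -w net.ipv4.ip_forward=1

# On the VM: reach the IPoIB daemon network via the gateway's Ethernet leg
ip route add 192.168.12.0/24 via 10.0.10.1

# On each IPoIB node: return route towards the VM network via the gateway's IPoIB leg
ip route add 10.0.10.0/24 via 192.168.12.1
~~~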

I've seen some customers avoid IPoIB altogether, and instead use an Ethernet
network as the daemon network while dedicating the InfiniBand fabric to RDMA.
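
A rough sketch of that layout, assuming the daemon addresses already resolve
to the Ethernet interfaces and that the IB HCA shows up as mlx5_0 (adjust the
port name to your hardware):

~~~
# Keep daemon/admin traffic on Ethernet, use the IB fabric for data via verbs RDMA
mmchconfig verbsRdma=enable
mmchconfig verbsPorts="mlx5_0/1"

# The verbs settings take effect after a daemon restart
mmshutdown -a && mmstartup -a
~~~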

  -jf

On Thu, Jun 17, 2021 at 8:35 AM Leonardo Sala <leonardo.sala at psi.ch> wrote:

> Hello everybody
>
> thanks for the feedback! So, what is suggested is to create on the VM
> (in my case hosted on vSphere, with only one NIC) a secondary IP within the
> IPoIB range, and a route for that IP range that goes over the public IP
> (plus a similar route on my bare-metal servers, so that the VM's IPoIB
> IPs are reached over the public network) - is that correct?
>
> The only other option would be to ditch IPoIB as the daemon network, right?
> What happens if some nodes have access to the daemon network over IPoIB
> and others do not - does GPFS fall back to the public IP cluster-wide, or
> something else?
>
> Thanks again!
>
> regards
>
> leo
>
> Paul Scherrer Institut
> Dr. Leonardo Sala
> Group Leader High Performance Computing
> Deputy Section Head Science IT
> Science IT
> WHGA/036
> Forschungstrasse 111
> 5232 Villigen PSI
> Switzerland
>
> Phone: +41 56 310 3369
> leonardo.sala at psi.ch
> www.psi.ch
>
> On 07.06.21 21:49, Jan-Frode Myklebust wrote:
>
>
> I've done this a few times. Once with IPoIB as the daemon network, where I
> then created a separate routed network on the hypervisor to route between
> the VMs and the IPoIB network.
>
> Example RHEL config where bond0 is an IP-over-IB bond on the hypervisor:
> ————————
>
> To give the VMs access to the daemon network, we need to create an internal
> network for the VMs, which is then routed into the IPoIB network on the
> hypervisor.
>
> ~~~
> # cat <<EOF > routed34.xml
> <network>
>   <name>routed34</name>
>   <forward mode='route' dev='bond0'/>
>   <bridge name='virbr34' stp='on' delay='2'/>
>   <ip address='10.0.0.1' netmask='255.255.255.0'>
>     <dhcp>
>       <range start='10.0.0.128' end='10.0.0.254'/>
>     </dhcp>
>   </ip>
> </network>
> EOF
> # virsh net-define routed34.xml
> Network routed34 defined from routed34.xml
>
> # virsh net-start routed34
> Network routed34 started
>
> # virsh net-autostart routed34
> Network routed34 marked as autostarted
>
> # virsh net-list --all
>  Name                 State      Autostart     Persistent
> ----------------------------------------------------------
>  default              active     yes           yes
>  routed34             active     yes           yes
>
> ~~~
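>
> A minimal sketch of how a VM would then attach to this network, together
> with the return route the other IPoIB nodes need (the network name routed34
> is from above; the hypervisor's bond0 address 192.168.12.10 is an assumed
> example):
>
> ~~~
> # Snippet for the VM's libvirt domain XML: attach a NIC to routed34
> <interface type='network'>
>   <source network='routed34'/>
>   <model type='virtio'/>
> </interface>
>
> # On the other IPoIB nodes: route the VM subnet via the hypervisor's bond0
> ip route add 10.0.0.0/24 via 192.168.12.10
> ~~~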
>
> ————————-
>
>
> I see no issue with it, but beware that the Spectrum Scale FAQ lists some
> required tunings if the VM is to host descOnly disks (panicOnIOHang?)...
>
>
>
>   -jf
>
>
> On Mon, 7 Jun 2021 at 14:55, Leonardo Sala <leonardo.sala at psi.ch> wrote:
>
>> Hello,
>>
>> we have multiple bare-metal GPFS clusters with an InfiniBand fabric, and
>> I am considering adding some VMs into the mix, both to perform admin
>> tasks (so that the bare-metal servers do not need passwordless ssh keys)
>> and to act as quorum nodes. Has anybody tried this? What could be the
>> drawbacks / issues at the GPFS level?
>>
>> Thanks a lot for the insights!
>>
>> cheers
>>
>> leo
>>
>> --
>> Paul Scherrer Institut
>> Dr. Leonardo Sala
>> Group Leader High Performance Computing
>> Deputy Section Head Science IT
>> Science IT
>> WHGA/036
>> Forschungstrasse 111
>> 5232 Villigen PSI
>> Switzerland
>>
>> Phone: +41 56 310 3369
>> leonardo.sala at psi.ch
>> www.psi.ch
>>

