[gpfsug-discuss] Using VMs as quorum / admin nodes in a GPFS infiniband cluster

Jan-Frode Myklebust janfrode at tanso.net
Mon Jun 7 20:49:02 BST 2021

I’ve done this a few times. Once with IPoIB as the daemon network, where I
created a separate routed network on the hypervisor to bridge (well, route)
between the VMs and the IPoIB network.

Example RHEL config where bond0 is an IP-over-IB bond on the hypervisor:

To give the VMs access to the daemon network, we need to create an internal
network for the VMs that is then routed into the IPoIB network on the
hypervisor:
# cat <<EOF > routed34.xml
<network>
  <name>routed34</name>
  <forward mode='route' dev='bond0'/>
  <bridge name='virbr34' stp='on' delay='2'/>
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end=''/>
    </dhcp>
  </ip>
</network>
EOF
# virsh net-define routed34.xml
Network routed34 defined from routed34.xml

# virsh net-start routed34
Network routed34 started

# virsh net-autostart routed34
Network routed34 marked as autostarted

# virsh net-list --all
 Name                 State      Autostart     Persistent
 default              active     yes           yes
 routed34             active     yes           yes
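One thing to remember with forward mode='route' is that there is no NAT, so the other IPoIB hosts need a route back to the VM subnet via the hypervisor's bond0 address, and the hypervisor must forward packets. A rough sketch; both subnets and the gateway address below are made-up placeholders, not values from my setup:

```shell
# On the hypervisor: make sure IP forwarding is enabled so traffic can
# pass between virbr34 and bond0.
sysctl -w net.ipv4.ip_forward=1

# On each bare-metal GPFS node: add a static route so replies to the
# VM subnet (placeholder 10.34.0.0/24) go via the hypervisor's IPoIB
# address (placeholder 10.0.0.10 on bond0).
ip route add 10.34.0.0/24 via 10.0.0.10
```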



I see no issue with it, but beware that the IBM Spectrum Scale FAQ lists some
required tunings if the VM is to host descOnly disks (panicOnIOHang, if I
remember correctly)…
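For reference, the descOnly tuning would be along these lines; this is a sketch from memory, the node name is a placeholder, and the current FAQ should be checked for the exact parameters:

```shell
# Hypothetical sketch: have the node panic rather than hang on I/O errors,
# as the FAQ suggests for VM-hosted descriptor-only disks.
# "quorumvm1" is a placeholder node name.
mmchconfig panicOnIOHang=yes -N quorumvm1

# The VM's disk would be marked descriptor-only in the NSD stanza file,
# e.g. (device and names are placeholders):
#   %nsd: device=/dev/vdb nsd=quorum_desc servers=quorumvm1 usage=descOnly
```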


man. 7. jun. 2021 kl. 14:55 skrev Leonardo Sala <leonardo.sala at psi.ch>:

> Hello,
> we have multiple bare-metal GPFS clusters with an InfiniBand fabric, and I
> am considering adding some VMs into the mix, to perform admin tasks
> (so that the bare-metal servers do not need passwordless ssh keys) and to
> act as quorum nodes. Has anybody tried this? What could be the drawbacks /
> issues at the GPFS level?
> Thanks a lot for the insights!
> cheers
> leo
> --
> Paul Scherrer Institut
> Dr. Leonardo Sala
> Group Leader High Performance Computing
> Deputy Section Head Science IT
> Science IT
> WHGA/036
> Forschungstrasse 111
> 5232 Villigen PSI
> Switzerland
> Phone: +41 56 310 3369
> leonardo.sala at psi.ch
> www.psi.ch
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss