[gpfsug-discuss] GPFS inside OpenStack guests
jonathan at buzzard.me.uk
Thu Nov 20 10:03:01 GMT 2014
On Wed, 2014-11-19 at 20:56 +0000, orlando.richards at ed.ac.uk wrote:
> On Wed, 19 Nov 2014, Simon Thompson (Research Computing - IT Services) wrote:
> > Yes, what about the random naming of a VM image?
> > For example, if I spin up a new VM, how does it join the GPFS cluster to be able to use the NSD protocol?
> I *think* this bit should be solvable - assuming one can pre-define the
> range of names the node will have, and can pre-populate your gpfs cluster
> config with these node names. The guest image should then have the full
> /var/mmfs tree (pulled from another gpfs node), but with the
> /var/mmfs/gen/mmfsNodeData file removed. When it starts up, it'll figure
> out "who" it is and regenerate that file, pull the latest cluster config
> from the primary config server, and start up.
It's perfectly solvable with a bit of scripting and by putting the
cluster into admin mode central.
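Putting Orlando's steps and the admin-mode-central suggestion together, a first-boot script along these lines could do the job. This is a sketch, not IBM tooling: the `prepare_node` helper, the scratch-tree dry run, and the `vmnode001` node name are illustrative assumptions; `mmchconfig adminMode=central`, `mmaddnode` and `mmstartup` are standard GPFS commands.

```shell
#!/bin/sh
# One-off cluster preparation (run on an existing GPFS node, shown here
# as comments only):
#   mmchconfig adminMode=central      # guests need no ssh back to the cluster
#   mmaddnode -N vmnode001            # pre-register every possible guest name
#
# The guest image is built with a copy of /var/mmfs from an existing node.

prepare_node() {
    # $1 is a path prefix: "" on a real guest, or a scratch tree for a
    # dry run outside GPFS.
    prefix="$1"
    nodedata="$prefix/var/mmfs/gen/mmfsNodeData"

    # Remove the per-node identity file so mmfsd works out "who" it is
    # from the guest's hostname and regenerates the file on first start.
    rm -f "$nodedata"

    if [ -n "$prefix" ]; then
        # Dry run: just show the GPFS command we would execute.
        echo "would run: mmstartup"
    else
        # Real guest: start GPFS; the daemon pulls the latest cluster
        # configuration from the primary configuration server.
        mmstartup
    fi
}

# Example: dry run against a scratch tree standing in for /.
scratch=$(mktemp -d)
mkdir -p "$scratch/var/mmfs/gen"
touch "$scratch/var/mmfs/gen/mmfsNodeData"
prepare_node "$scratch"
```

On a real guest the function would be called with an empty prefix from an init script or cloud-init, after which the node appears in the cluster under its pre-registered name.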
> > And what about attaching to the network? Neutron networking uses per-tenant networks, so how would you actually get access to the GPFS cluster?
> This bit is where I can see the potential pitfall. OpenStack naturally
> uses NAT to handle traffic to and from guests - will GPFS cope with
> NATted clients in this way?
It's not going to work with NAT. GPFS has some "funny" ideas about
networking; to put it succinctly, all the nodes have to be on the
same class A, B or C network. Worse, it considers every address in a
class A network to be on the same network, even if you have internally
subdivided it into separate networks. Consequently the network model
in GPFS is broken.
You would need to use bridged mode (aka flat networking) in OpenStack
for this to work, but surely Jan knows all this.
Jonathan A. Buzzard Email: jonathan (at) buzzard.me.uk
Fife, United Kingdom.