[gpfsug-discuss] Reexporting GPFS via NFS on VM host

Dean Hildebrand dhildeb at us.ibm.com
Thu Aug 27 21:36:26 BST 2015


Hi Christopher,


> >
> > Chris, are you using -v to give the container access to the nfs subdir
> > (and hence to a gpfs subdir) (and hence achieve a level of
> > multi-tenancy)?
>
> -v option to what?

I was referring to how you were using docker/containers to expose the NFS
storage to the container...there are several different ways to do it and
one way is to simply expose a directory to the container via the -v option
https://docs.docker.com/userguide/dockervolumes/
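
To make that concrete: -v is just a bind mount of a host path into the
container, so the container only ever sees that one subdirectory of the
NFS/GPFS tree. A minimal sketch (the paths and image name here are
placeholders, not from this thread; the command is echoed so it can be
inspected without Docker installed):

```shell
# Hypothetical per-tenant subdirectory on the NFS/GPFS mount (assumption):
GPFS_SUBDIR=/gpfs/fs1/tenant-a

# Bind-mount only that slice into the container at /data; processes in the
# container cannot reach the rest of the filesystem through this mount:
echo docker run -v "${GPFS_SUBDIR}:/data" -it centos:6 /bin/bash
```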

>
> > Even without containers, I wonder if this could allow
> > users to run their own VMs as root as well...and preventing them from
> > becoming root on gpfs...
>
>
> >
> > I'd love for you to share your experience (mgmt and perf) with this
> > architecture once you get it up and running.
>
> A quick and dirty test:
>
>  From a VM:
> -bash-4.1$ time dd if=/dev/zero of=cjwtestfile2 bs=1M count=10240
> real 0m20.411s 0m22.137s 0m21.431s 0m21.730s 0m22.056s 0m21.759s
> user 0m0.005s  0m0.007s  0m0.006s  0m0.003s  0m0.002s  0m0.004s
> sys  0m11.710s 0m10.615s 0m10.399s 0m10.474s 0m10.682s 0m9.965s
>
>  From the underlying hypervisor.
>
> real 0m11.138s 0m9.813s 0m9.761s 0m9.793s 0m9.773s 0m9.723s
> user 0m0.006s  0m0.013s 0m0.009s 0m0.008s 0m0.008s 0m0.009s
> sys  0m5.447s  0m5.529s 0m5.802s 0m5.580s 0m6.190s 0m5.516s
>
> So there's a factor of just over 2 slowdown.
>
> As it's still 500MB/s, I think it's good enough for now.
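
Working the quoted numbers through (10240 MB over the mean of the six
"real" times in each set) gives roughly:

```shell
# 10240 MB written; mean real times from the runs above:
# ~21.6 s in the VM, ~10.0 s on the hypervisor.
awk 'BEGIN {
  vm = 10240 / 21.6    # throughput inside the VM, MB/s
  hv = 10240 / 10.0    # throughput on the hypervisor, MB/s
  printf "VM: %.0f MB/s  hypervisor: %.0f MB/s  slowdown: %.2fx\n", vm, hv, hv/vm
}'
# -> VM: 474 MB/s  hypervisor: 1024 MB/s  slowdown: 2.16x
```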

Interesting test... I assume you have VLANs set up so that the data doesn't
leave the VM, go to the network switch, and then back to the NFS server in
the hypervisor again?  Also, there might be a few NFS tuning options you
could try, like increasing the number of nfsd threads, etc...but there are
extra data copies occurring so the perf will never match...
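
On RHEL6/SL6 the nfsd thread count is set in /etc/sysconfig/nfs; a sketch
(the value here is an assumption, tune for your workload):

```shell
# /etc/sysconfig/nfs on RHEL6/SL6 -- the default of 8 nfsd threads is low
# for a 10GbE server; 32-64 is a common starting point (64 is an example).
RPCNFSDCOUNT=64
# then: service nfs restart
```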

>
> The machine has a 10Gbit/s network connection and both hypervisor and VM
> are running SL6.
>
> > Some side benefits of this
> > architecture that we have been thinking about as well is that it allows
> > both the containers and VMs to be somewhat ephemeral, while the gpfs
> > continues to run in the hypervisor...
>
> Indeed. This is another advantage.
>
> If we were running Debian, it would be possible to export part of a
> filesystem to a VM. Which would presumably work.

I'm not aware of this...is this through VirtFS or something else?

> In Red Hat-derived OSs
> (we are currently using Scientific Linux), I don't believe it is - hence
> exporting via NFS.
>
> >
> > To ensure VMotion works relatively smoothly, just ensure each VM is
> > given a hostname to mount that always routes back to the localhost nfs
> > server on each machine...and I think things should work relatively
> > smoothly.  Note you'll need to maintain the same set of nfs exports
> > across the entire cluster as well, so that when a VM moves to another
> > machine it immediately has an available export to mount.
>
> Yes, we are doing this.
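
Concretely, that means an identical /etc/exports on every hypervisor,
something like the following (the path, network, and fsid are placeholders,
not our actual config):

```shell
# /etc/exports -- kept identical on every hypervisor in the cluster.
# Pinning a fixed fsid keeps NFS file handles consistent across servers,
# which matters when a VM's mount follows it to another host.
/gpfs/fs1/vmdata  192.168.0.0/24(rw,sync,no_root_squash,fsid=101)
```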
>
> Simon alludes to potential problems at the NFS layer on live migration.
> Otherwise, yes indeed the setup should be fine.  I'm not familiar enough
> with the details of NFS - but I have heard NFS described as "a stateless
> filesystem with state". It's the stateful bits I'm concerned about.

Are you using v3 or v4?  It doesn't really matter though, as in either
case, GPFS would handle the state failover parts...  Ideally the VM would
unmount the local NFS server, do VMotion, and then mount the new local NFS
server, but given there might be open files...it makes sense that this may
not be possible...
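
One way to make the "always mount the local server" part concrete is a
hostname that resolves differently on each host; a sketch (names, addresses,
and paths are placeholders):

```shell
# /etc/hosts on each hypervisor -- the VM-visible address of *this* host:
#   192.168.0.1   nfslocal

# /etc/fstab inside every VM -- "nfslocal" always resolves to whichever
# hypervisor the VM is currently on, so the mount follows the VM:
nfslocal:/gpfs/fs1/vmdata  /data  nfs  vers=3,hard,intr  0 0
```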

Dean

>
> Chris
>
> >
> > Dean Hildebrand
> > IBM Almaden Research Center
> >
> >
> >
> > From: "Simon Thompson (Research Computing - IT Services)"
> > <S.J.Thompson at bham.ac.uk>
> > To: gpfsug main discussion list <gpfsug-discuss at gpfsug.org>
> > Date: 08/13/2015 07:33 AM
> > Subject: Re: [gpfsug-discuss] Reexporting GPFS via NFS on VM host
> > Sent by: gpfsug-discuss-bounces at gpfsug.org
> >
> >
> >
> >
> >
> >
> >  >I've set up a couple of VM hosts to export some of its GPFS filesystem
> >  >via NFS to machines on that VM host[1,2].
> >
> > Provided all your sockets on the VM host are licensed.
> >
> >  >Is live migration of VMs likely to work?
> >  >
> >  >Live migration isn't a hard requirement, but if it will work, it could
> >  >make our life easier.
> >
> > Live migration using a GPFS file-system on the hypervisor node should work
> > (subject to the usual caveats of live migration).
> >
> > Whether live migration and your VM instances would still be able to NFS
> > mount (assuming loopback address?) if they moved to a different
> > hypervisor, pass, you might get weird NFS locks. And if they are still
> > mounting from the original VM host, then you are not doing what the FAQ
> > says you can do.
> >
> > Simon
> >
> > _______________________________________________
> > gpfsug-discuss mailing list
> > gpfsug-discuss at gpfsug.org
> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> >
> >
> >
> >
> >
> >
>
>

