[gpfsug-discuss] cross-cluster mounting different versions of gpfs

Damir Krstic damir.krstic at gmail.com
Wed Mar 16 19:06:02 GMT 2016


Jonathan,

Gradual upgrades are indeed a nice feature of GPFS, and we are planning to
upgrade our clients to 4.2 gradually. However, before all, or even most, of
the clients are upgraded, we have to be able to mount this new 4.2
filesystem on all of our compute nodes that are still running version 3.5.
Here is our environment today:

storage cluster - 14 NSD servers, GPFS 3.5
compute cluster - 500+ clients, GPFS 3.5  <-- this cluster mounts the
storage cluster's filesystems
ESS cluster (new to us) - GPFS 4.2

The ESS will become its own GPFS cluster, and we want to mount its
filesystems on our compute cluster. So far so good. We understand that we
will eventually want to upgrade all the nodes in our compute cluster to 4.2,
and we know the upgrade path (3.5 --> 4.1 --> 4.2).
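
For anyone following along, the cross-cluster mount setup we have in mind
looks roughly like this (cluster names, contact nodes, device, and mount
point below are just placeholders, not our real configuration):

    # on both clusters (once): generate keys and require authentication
    mmauth genkey new
    mmauth update . -l AUTHONLY

    # on the ESS (owning) cluster: register the compute cluster's public
    # key and grant it access to the filesystem
    mmauth add compute.example.edu -k compute.example.edu_id_rsa.pub
    mmauth grant compute.example.edu -f essfs -a rw

    # on the compute (accessing) cluster: define the remote cluster and
    # remote filesystem, then mount it on all nodes
    mmremotecluster add ess.example.edu -n essio1,essio2 -k ess.example.edu_id_rsa.pub
    mmremotefs add essfs -f essfs -C ess.example.edu -T /ess
    mmmount essfs -a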

The reason for this conversation is: with the ESS running GPFS 4.2, can we
remote mount its filesystem on our compute cluster? The answer we got is
yes, provided we build the new filesystem with the --version flag. Sven,
however, has just pointed out that this may not be a desirable option,
since some features are permanently lost when a filesystem is built with
--version.

In our case, however, even though we will upgrade our clients to 4.2 (some
gradually, as noted elsewhere in this conversation, and most in June), we
have to be able to mount the new ESS filesystem on our compute cluster
before the clients are upgraded.

So it seems that, even though Sven recommends against it, building the
filesystem with the --version flag is our only option. The alternative
would be to upgrade all of our clients first, but since we can't do that
until June, it really isn't an option at this time.

I hope this makes our constraints clear: without the ability to take
downtime on our compute cluster, we are forced to build the filesystem on
the ESS using the --version flag.
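
Concretely, the plan would look something like this (device name, stanza
file, and the exact --version string are placeholders; we still need IBM
to confirm the right version string for 3.5 compatibility):

    # on the ESS cluster: create the filesystem at the 3.5 format level so
    # that our 3.5 clients can remote mount it
    mmcrfs essfs -F essfs_nsd.stanza -T /ess --version 3.5.0.7

    # later, once every node that mounts it is running 4.2, raise the
    # format level (per Sven's caveat, some features still cannot be
    # enabled afterwards because they were excluded at creation time)
    mmchfs essfs -V full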

Thanks,
Damir


On Wed, Mar 16, 2016 at 1:47 PM Jonathan Buzzard <jonathan at buzzard.me.uk>
wrote:

> On 16/03/16 18:07, Damir Krstic wrote:
> > Sven,
> >
> > For us, at least, at this point in time, we have to create new
> > filesystem with version flag. The reason is we can't take downtime to
> > upgrade all of our 500+ compute nodes that will cross-cluster mount this
> > new storage. We can take downtime in June and get all of the nodes up to
> > 4.2 gpfs version but we have users today that need to start using the
> > filesystem.
> >
>
> You can upgrade a GPFS file system piece meal. That is there should be
> no reason to take the whole system off-line to perform the upgrade. So
> you can upgrade a compute nodes to GPFS 4.2 one by one and they will
> happily continue to talk to the NSD's running 3.5 while the other nodes
> continue to use the file system.
>
> In a properly designed GPFS cluster you should also be able to take
> individual NSD nodes out for the upgrade. Though I wouldn't recommend
> running mixed versions on a long-term basis, it is definitely fine for
> the purposes of upgrading.
>
> Then once all the nodes in the GPFS cluster are upgraded, you issue
> mmchfs -V full. How long this will take will depend on the maximum run
> time you allow for your jobs.
>
> You would need to check that you can make a clean jump from 3.5 to 4.2
> but IBM support should be able to confirm that for you.
>
> This is one of the nicer features of GPFS; it's what I refer to as
> "proper enterprise big iron computing". That is, if you have to take the
> service down at any time for any reason, you are doing it wrong.
>
> JAB.
>
> --
> Jonathan A. Buzzard                 Email: jonathan (at) buzzard.me.uk
> Fife, United Kingdom.
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>