[gpfsug-discuss] cross-cluster mounting different versions of gpfs

Steve Duersch duersch at us.ibm.com
Wed Mar 16 20:25:23 GMT 2016


Please see question 2.10 in our FAQ:
http://www.ibm.com/support/knowledgecenter/api/content/nl/en-us/STXKQY/gpfsclustersfaq.pdf

We only support clusters that are running release n together with release
n-1 or release n+1. So 4.1 is supported to work with 3.5 and 4.2. Release
4.2 is supported to work with 4.1, but not with GPFS 3.5. The 4.2/3.5
combination may indeed work, but it is not supported.
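
If you want to check where your clusters stand before attempting a
cross-cluster mount, one quick way is to compare the committed release
level of each cluster:

    # run on a node in each cluster; minReleaseLevel is the level the
    # cluster was committed to with 'mmchconfig release=LATEST'
    mmlsconfig minReleaseLevel

    # the GPFS build installed on the local node
    mmdiag --version

If one cluster reports 3.5.x and the other 4.2.x, you are outside the
n/n-1 statement above.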


Steve Duersch
Spectrum Scale (GPFS) FVTest
845-433-7902
IBM Poughkeepsie, New York


>>Message: 1
>>Date: Wed, 16 Mar 2016 18:07:59 +0000
>>From: Damir Krstic <damir.krstic at gmail.com>
>>To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>>Subject: Re: [gpfsug-discuss] cross-cluster mounting different versions of gpfs
>>Message-ID: <CAKV+WqezNdrNX1KsAqZt9A5o6gQcrh560HKN1=iNV4ZoTCLTRA at mail.gmail.com>
>>Content-Type: text/plain; charset="utf-8"
>>
>>Sven,
>>
>>For us, at least, at this point in time, we have to create the new
>>filesystem with the --version flag. The reason is that we can't take
>>downtime to upgrade all of our 500+ compute nodes that will cross-cluster
>>mount this new storage. We can take downtime in June and get all of the
>>nodes up to GPFS 4.2, but we have users today who need to start using the
>>filesystem.
>>
>>So at this point in time, we either have ESS built with 4.1 and
>>cross-mount its filesystem (also built with the --version flag, I assume)
>>to our 3.5 compute cluster, or we proceed with the 4.2 ESS and build
>>filesystems with the --version flag; then in June, when we get all of our
>>clients upgraded, we run mmchconfig release=LATEST and then mmchfs -V to
>>bring the filesystem back up to 4.2 features.
>>
>>It's unfortunate that we are in this bind with the downtime of the
>>compute cluster. If we were allowed to upgrade our compute nodes before
>>June, we could proceed with the 4.2 build without having to worry about
>>filesystem versions.
>>
>>Thanks for your reply.
>>
>>Damir
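
To make the sequence Damir describes concrete, it would look roughly like
this; the filesystem name and stanza file below are hypothetical, and the
exact --version string should be taken from a real 3.5 filesystem (see
Marc's tip further down):

    # on the 4.x ESS cluster: create the filesystem at 3.5 compatibility
    # so the 3.5 compute cluster can cross-mount it
    mmcrfs gpfs1 -F nsd.stanza --version 3.5.0.7

    # in June, once every client node is on 4.2:
    mmchconfig release=LATEST    # commit the cluster to the new release
    mmchfs gpfs1 -V full         # enable the 4.2 filesystem features

Keep in mind that mmchfs -V full is one-way: once the filesystem format is
upgraded, 3.5 nodes can no longer mount it.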





On Wed, Mar 16, 2016 at 12:18 PM Sven Oehme <oehmes at gmail.com> wrote:

> while this is all correct, people should think twice about doing this.
> if you create a filesystem with an older version, it might prevent you
> from using some features like data-in-inode, encryption, or adding 4k
> disks to an existing filesystem, even if you eventually upgrade to the
> latest code.
>
> for some customers it's a good point in time to also migrate to larger
> blocksizes compared to what they run right now and migrate the data. i
> have seen customer systems gain factors of performance improvement even
> on existing HW by creating new filesystems with a larger blocksize and
> the latest filesystem layout (which they couldn't do before due to small
> file waste, which is now partly solved by data-in-inode). while this is
> heavily dependent on workload and environment, it's at least worth
> thinking about.
>
> sven
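
As a rough illustration of Sven's point, a new filesystem on 4.x with a
larger blocksize and 4K inodes (device and stanza file hypothetical; the
values are examples, not recommendations) could be created like this:

    # 4 MiB data blocks; 4 KiB inodes allow small files to live in the inode
    mmcrfs gpfs2 -F nsd.stanza -B 4M -i 4096

The blocksize cannot be changed after creation, which is why getting there
requires a new filesystem and a data migration.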
> On Wed, Mar 16, 2016 at 4:20 PM, Marc A Kaplan <makaplan at us.ibm.com> wrote:
>
>> The key point is that you must create the file system so that it "looks"
>> like a 3.5 file system.  See mmcrfs ... --version.  Tip: create or find a
>> test filesystem back on the 3.5 cluster and look at the version string:
>> mmlsfs xxx -V.  Then go to the 4.x system and try to create a file
>> system with the same version string....
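
A sketch of that tip, with hypothetical filesystem names; the
13.23 (3.5.0.7) output is an example of what a 3.5-format filesystem
typically reports:

    # on the 3.5 cluster: read the format version of an existing filesystem
    mmlsfs testfs -V
    #   -V    13.23 (3.5.0.7)    File system version

    # on the 4.x cluster: create the new filesystem with that same version
    mmcrfs gpfs1 -F nsd.stanza --version 3.5.0.7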