[gpfsug-discuss] Importing a Spectrum Scale filesystem from a 4.2.3 cluster to a 5.0.4.3 cluster

Chris Scott chrisjscott at gmail.com
Mon Jun 1 14:14:02 BST 2020


Sounds like it would work fine.

I recently exported a filesystem at format version 3.5 from a GPFS 3.5
cluster to a 'Scale cluster running 5.0.2.3 software at 5.0.2.0 cluster
version. I concurrently mapped the NSDs to new NSD servers in the 'Scale
cluster, mmexported the filesystem, and changed the NSD server
configuration of the NSDs using the mmimportfs ChangeSpecFile (-S) option.
The original (creation) filesystem version of this filesystem is 3.2.1.5.
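
In case the rough shape of that helps anyone, a minimal sketch is below.
The device name (fs1), file paths and NSD/server names are placeholders
I've made up, and the exact stanza format for the mmimportfs -S
ChangeSpecFile should be checked against the man page for your level:

  # On the old cluster: unmount everywhere and export the filesystem
  mmumount fs1 -a
  mmexportfs fs1 -o /tmp/fs1.exp

  # Copy /tmp/fs1.exp to the new cluster and prepare a ChangeSpecFile
  # that re-points the NSDs at the new NSD servers, e.g.:
  #   %nsd: nsd=nsd001 servers=newnsd01,newnsd02
  #   %nsd: nsd=nsd002 servers=newnsd01,newnsd02

  # On the new cluster: import with the changed NSD server configuration
  mmimportfs fs1 -i /tmp/fs1.exp -S /tmp/fs1.changespec
  mmmount fs1 -a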

To my pleasant surprise the filesystem mounted and worked fine while still
at filesystem format version 3.5. Plan B would have been to "mmchfs
<filesystem> -V full" and then mmmount, but I was able to update the
filesystem to the 5.0.2.0 version while it was already mounted.
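
For reference, the version check and the Plan B step look roughly like
this (again with a placeholder device name; note that "mmchfs -V full" is
a one-way change, so older clusters can no longer mount the filesystem
afterwards):

  # Show the current filesystem format version
  mmlsfs fs1 -V

  # Plan B: bring the format up to the latest supported by the cluster
  mmchfs fs1 -V full
  mmmount fs1 -a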

This was further pleasantly successful, as the filesystem in question is
DMAPI-enabled, with the majority of the data migrated to tape by Spectrum
Protect for Space Management rather than resident/pre-migrated on disk.
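
For anyone checking the same thing on their own filesystem, the DMAPI flag
is visible in mmlsfs (device name is a placeholder):

  # Shows whether DMAPI is enabled ("yes" for an HSM-managed filesystem)
  mmlsfs fs1 -z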

The complexity is further compounded by this filesystem being associated
with a different Spectrum Protect server than an existing DMAPI-enabled
filesystem in the 'Scale cluster. Preparation of the configs and the
subsequent commands to enable and use Spectrum Protect for Space
Management in multiserver mode for migration and backup all worked
smoothly as per the docs.
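
As an illustration only, and not my actual configs: the server names,
addresses and paths below are made up, and the Space Management
multiserver specifics should be taken from the Spectrum Protect docs. The
client side boils down to one dsm.sys stanza per 'Protect server and then
associating each filesystem with the right one:

  * /opt/tivoli/tsm/client/ba/bin/dsm.sys -- one stanza per 'Protect server
  SErvername         protect_a
    COMMMethod       TCPip
    TCPServeraddress protect-a.example.com
    TCPPort          1500
    PASSWORDAccess   generate

  SErvername         protect_b
    COMMMethod       TCPip
    TCPServeraddress protect-b.example.com
    TCPPort          1500
    PASSWORDAccess   generate

  # Add the filesystem to Space Management against its server (see the
  # dsmmigfs documentation for the per-server association options)
  dsmmigfs add /fs1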

I was thus able to get rid of the GPFS 3.5 cluster, with its legacy
hardware, OS, GPFS and homebrew CTDB SMB and NFS, and retain the
filesystem with its majority of tape-stored data on current hardware, OS
and 'Scale/'Protect with CES SMB and NFS.

The future objective remains to move all the data from this historical
filesystem to a newer one to get the benefits of larger block and inode
sizes, etc. Since the data is mostly dormant and kept for
compliance/best-practice purposes, though, the main goal is to head off
the original 3.2-era filesystem format going end of support.

Cheers
Chris

On Thu, 28 May 2020 at 23:31, Prasad Surampudi <
prasad.surampudi at theatsgroup.com> wrote:

> We have two Scale clusters: Cluster-A running Spectrum Scale 4.2.3 on
> RHEL 6/7 and Cluster-B running Spectrum Scale 5.0.4 on RHEL 8.1. All the
> nodes in both Cluster-A and Cluster-B are direct-attached, with no NSD
> servers. We have our current filesystem gpfs_4 in Cluster-A and a new
> filesystem gpfs_5 in Cluster-B. We want to copy all our data from the
> gpfs_4 filesystem into gpfs_5, which has a variable block size. So, can
> we map the NSDs of gpfs_4 to the Cluster-B nodes and do an mmexportfs of
> gpfs_4 from Cluster-A and mmimportfs into Cluster-B, so that we have
> both filesystems available on the same node in Cluster-B for copying
> data across fiber channel? If mmexportfs/mmimportfs works, can we delete
> nodes from Cluster-A and add them to Cluster-B without upgrading RHEL or
> GPFS versions for now, and plan to upgrade them at a later time?

