[gpfsug-discuss] Importing a Spectrum Scale filesystem from a 4.2.3 cluster to a 5.0.4.3 cluster

Chris Scott chrisjscott at gmail.com
Tue Jun 2 14:31:05 BST 2020


Hi Fred

The imported filesystem has ~1.5M files migrated to Spectrum Protect. Spot
checking transparent and selective recalls of a handful of files has been
successful after associating them with their correct Spectrum Protect
server. They are all also backed up to primary and copy pools on the
Spectrum Protect server, so having to restore rather than recall if it
hadn't worked was an acceptable risk compared with keeping the GPFS 3.5
cluster going on dying hardware, an insecure OS, etc.
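
For reference, the spot check was along these lines (the file path is
invented for illustration; this assumes the 'Protect for Space Management
client is installed on the node and the filesystem is under HSM management):

  # Show the HSM state of the file (resident, premigrated or migrated)
  dsmls /gpfs/histfs01/project/archive/file001.dat

  # Selective recall of the file back to disk
  dsmrecall /gpfs/histfs01/project/archive/file001.dat

  # Transparent recall: any normal read triggers it, e.g.
  md5sum /gpfs/histfs01/project/archive/file001.dat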

Cheers
Chris

On Mon, 1 Jun 2020 at 17:53, Frederick Stock <stockf at us.ibm.com> wrote:

> Chris, it was not clear to me whether the file system you imported had
> files migrated to Spectrum Protect, that is, stub files in GPFS.  If the
> file system does contain files migrated to Spectrum Protect with just a
> stub file in the file system, have you tried to recall any of them to see
> if that still works?
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf at us.ibm.com
>
>
>
> ----- Original message -----
> From: Chris Scott <chrisjscott at gmail.com>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] Importing a Spectrum Scale
> filesystem from a 4.2.3 cluster to a 5.0.4.3 cluster
> Date: Mon, Jun 1, 2020 9:14 AM
>
> Sounds like it would work fine.
>
> I recently exported a filesystem at 3.5 filesystem version from a GPFS 3.5
> cluster to a 'Scale cluster running 5.0.2.3 software at cluster version
> 5.0.2.0. I concurrently mapped the NSDs to new NSD servers in the 'Scale
> cluster, mmexported the filesystem and changed the NSD server assignments
> of the NSDs using the mmimportfs ChangeSpecFile. The original (creation)
> filesystem version of this filesystem is 3.2.1.5.
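>
> In outline the sequence was along these lines (the device and path names
> are made up for illustration; check the mmexportfs/mmimportfs man pages at
> your level, in particular the option that takes the ChangeSpecFile):
>
>   # Old GPFS 3.5 cluster: unmount everywhere, then export the filesystem
>   # definition (this removes it from the old cluster's configuration)
>   mmumount histfs01 -a
>   mmexportfs histfs01 -o /tmp/histfs01.exp
>
>   # New 5.0.2.x cluster: import it, passing a change-spec file that
>   # lists each NSD with its new servers= assignment
>   mmimportfs histfs01 -i /tmp/histfs01.exp -S /tmp/histfs01.changespec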
>
> To my pleasant surprise the filesystem mounted and worked fine while still
> at 3.5 filesystem version. Plan B would have been to "mmchfs <filesystem>
> -V full" and then mmmount, but I was able to update the filesystem to
> 5.0.2.0 version while already mounted.
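>
> For the record, the format check and update amounted to something like
> this (same illustrative device name as above):
>
>   # Show the current and maximum supported filesystem format versions
>   mmlsfs histfs01 -V
>
>   # Plan B: raise the format to the level of the new cluster
>   # (one-way; older code can no longer mount the filesystem afterwards)
>   mmchfs histfs01 -V full
>   mmmount histfs01 -a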
>
> This was further pleasantly successful, as the filesystem in question is
> DMAPI-enabled, with the majority of the data migrated to tape by Spectrum
> Protect for Space Management rather than resident/pre-migrated on disk.
>
> The complexity is further compounded by this filesystem being associated
> with a different Spectrum Protect server than the existing DMAPI-enabled
> filesystem in the 'Scale cluster. Preparing the configs and running the
> subsequent commands to enable and use Spectrum Protect for Space Management
> multiserver for migration and backup all worked smoothly, as per the docs.
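>
> On the client side that is essentially one server stanza per Spectrum
> Protect server in dsm.sys; the stanza names and addresses below are
> invented, and the Space Management multiserver settings themselves are
> best taken from the 'Protect for Space Management docs for your client
> level:
>
>   * /opt/tivoli/tsm/client/ba/bin/dsm.sys (illustrative stanzas only)
>   * Server that owns the imported, historical filesystem
>   SErvername  sp_server_hist
>     COMMMethod        TCPip
>     TCPServeraddress  spserver-hist.example.com
>     TCPPort           1500
>
>   * Server already used by the existing DMAPI-enabled filesystem
>   SErvername  sp_server_main
>     COMMMethod        TCPip
>     TCPServeraddress  spserver-main.example.com
>     TCPPort           1500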
>
> I was thus able to get rid of the GPFS 3.5 cluster, with its legacy
> hardware, OS, GPFS and homebrew CTDB SMB and NFS, and retain the
> filesystem, with its majority of tape-stored data, on current hardware and
> OS with 'Scale/'Protect and CES SMB and NFS.
>
> The future objective remains to move all the data from this historical
> filesystem to a newer one to get the benefits of larger block and inode
> sizes, etc. Since the data is mostly dormant and kept for
> compliance/best-practice purposes, though, the main goal will be to get
> ahead of the original 3.2-era filesystem format going out of support.
>
> Cheers
> Chris
>
> On Thu, 28 May 2020 at 23:31, Prasad Surampudi <
> prasad.surampudi at theatsgroup.com> wrote:
>
> We have two Scale clusters: Cluster-A running Spectrum Scale 4.2.3 on
> RHEL 6/7, and Cluster-B running Spectrum Scale 5.0.4 and RHEL 8.1. All the
> nodes in both Cluster-A and Cluster-B are direct-attached, with no NSD
> servers. We have our current filesystem gpfs_4 in Cluster-A and a new
> filesystem gpfs_5 in Cluster-B. We want to copy all our data from the
> gpfs_4 filesystem into gpfs_5, which has variable block size. So, can we
> map the NSDs of gpfs_4 to the Cluster-B nodes, do an mmexportfs of gpfs_4
> from Cluster-A and an mmimportfs into Cluster-B, so that we have both
> filesystems available on the same node in Cluster-B for copying data
> across fiber channel? If mmexportfs/mmimportfs works, can we delete nodes
> from Cluster-A and add them to Cluster-B without upgrading RHEL or GPFS
> for now, and plan to upgrade them at a later time?
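>
> The node move we have in mind would be something like the below (node
> name is invented; we would check licensing and quorum/manager roles on
> both clusters first):
>
>   # Cluster-A: stop GPFS on the node and remove it from the cluster
>   mmshutdown -N nodeX
>   mmdelnode -N nodeX
>
>   # Cluster-B: add the node, designate its license and start GPFS
>   mmaddnode -N nodeX
>   mmchlicense server --accept -N nodeX
>   mmstartup -N nodeX
>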
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>