[gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

Jan-Frode Myklebust janfrode at tanso.net
Wed Dec 4 11:21:54 GMT 2019


Adding the GL2 into your existing cluster shouldn't be any problem. You
would first delete the existing cluster on the GL2, and then add its I/O
nodes from the EMS.
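
A minimal sketch of that first step, run on the GL2 (assuming it holds no
data you want to keep; this is illustrative, not the official ESS
decommission procedure):

   mmshutdown -a    # stop GPFS on all nodes of the old GL2 cluster
   mmdelnode -a     # remove all nodes, deleting the cluster definition

Then on the EMS run something like: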

   gssaddnode -N gssio1-hs --cluster-node netapp-node --nodetype gss --accept-license
   gssaddnode -N gssio2-hs --cluster-node netapp-node --nodetype gss --accept-license

and afterwards create the RGs:

   gssgenclusterrgs -G gss_ppc64 --suffix=-hs

Then create the vdisks/NSDs and add them to your existing filesystem, as
sketched below.
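
A hedged sketch of that step using the standard Scale RAID commands (vdisk
names, sizes, RG names and the pool are illustrative; gssgenvdisks on the
EMS can generate similar stanzas for you):

   # vdisk.stanza -- one data vdisk per recovery group, aimed at a new pool.
   # blocksize must match the data block size of the existing filesystem.
   %vdisk: vdiskName=gl2_rg1_data1 rg=rg_gssio1_hs da=DA1 blocksize=8m size=350t raidCode=8+2p diskUsage=dataOnly pool=gl2pool
   %vdisk: vdiskName=gl2_rg2_data1 rg=rg_gssio2_hs da=DA1 blocksize=8m size=350t raidCode=8+2p diskUsage=dataOnly pool=gl2pool

   mmcrvdisk -F vdisk.stanza           # create the vdisks in the RGs
   mmcrnsd -F vdisk.stanza             # turn the vdisks into NSDs
   mmadddisk <fsname> -F vdisk.stanza  # add them to the existing filesystem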

Beware that the last time I did this, gssgenclusterrgs triggered an
"mmshutdown -a" on the whole cluster, because it wanted to change some
config settings... That caught me a bit by surprise.
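
If you want to see what it changes, capturing the configuration before and
after is cheap (my suggestion here, not part of the official procedure):

   mmlsconfig > config.before    # rerun as config.after and diff the two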



  -jf


On Wed, 4 Dec 2019 at 10:44, Dorigo Alvise (PSI) <alvise.dorigo at psi.ch>
wrote:

> Thank you all for the answers. Let me recap my replies to your questions:
>
>
>
>    1. the purpose is not to merge clusters "per se"; it is to add the
>    GL2's 700TB of raw space to the current filesystem provided by the
>    GPFS/NetApp (which is running out of free space); of course I know well
>    the heterogeneity of this hypothetical system, so the GL2's NSDs would
>    go to a special pool (see the policy sketch after this list); but in
>    the end I need a unique namespace for files.
>    2. I do not want to do the opposite (merging the GPFS/NetApp into the
>    GL2 cluster) because the former is in production and I cannot schedule
>    long downtimes
>    3. All systems have proper licensing, of course; what does it mean
>    that I could lose IBM support? If the support case were a failing disk
>    drive, I do not think so; if it were some "strange" GPFS behaviour, I
>    can probably understand it
>    4. the NSDs (in the NetApp system) are in their roles: what do you
>    mean exactly? There are RAID sets attached to servers that act as NSD
>    servers, together with their attached LUNs
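>
> A minimal placement-policy sketch for that special pool (pool and rule
> names are illustrative):
>
>    /* send new data to the GL2 pool first, fall back to system */
>    RULE 'gl2first' SET POOL 'gl2pool' LIMIT(95)
>    RULE 'default' SET POOL 'system'
>
> installed with "mmchpolicy <fsname> policy.rules".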
>
>
>    Alvise
> ------------------------------
> From: gpfsug-discuss-bounces at spectrumscale.org
> <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne
> <lgayne at us.ibm.com>
> Sent: Tuesday, December 3, 2019 8:30:31 PM
> To: gpfsug-discuss at spectrumscale.org
> Cc: gpfsug-discuss at spectrumscale.org
> Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>
> For:
>
> - A NetApp system with hardware RAID
> - Spectrum Scale 4.2.3-13 running on top of the NetApp  <--- Are these
> NSD servers in their GPFS roles (where Scale "runs on top")?
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
>
> What I need to do is to merge the GL2 into the other GPFS cluster
> (running on the NetApp) without losing, of course, the RecoveryGroup
> configuration, etc.
>
> I'd like to ask the experts:
> 1.        whether it is feasible, considering the differences in GPFS
> versions and architectures (x86_64 vs. POWER)
> 2.        if yes, whether anyone has already done something like this and
> what the suggested strategy is
> 3.        finally: is there any documentation dedicated to that, or at
> least something from which the correct procedure can be inferred?
>
> ......
> Some observations:
>
>
> 1) Why do you want to MERGE the GL2 into a single cluster with the
> existing cluster, rather than simply allowing remote mount of the ESS file
> system by the other GPFS (NSD client) nodes? (A remote-mount sketch
> follows.)
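>
> A hedged sketch of that remote-mount alternative (cluster names, node
> names, filesystem name and paths are illustrative):
>
>    # on the ESS (owning) cluster
>    mmauth genkey new
>    mmauth update . -l AUTHONLY
>    mmauth add netapp.cluster -k netapp_id_rsa.pub
>    mmauth grant netapp.cluster -f gl2fs
>
>    # on the NetApp (accessing) cluster
>    mmremotecluster add ess.cluster -n gssio1-hs,gssio2-hs -k ess_id_rsa.pub
>    mmremotefs add gl2fs -f gl2fs -C ess.cluster -T /gpfs/gl2fs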
>
> 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our
> coexistence rules.
>
> 3) Mixing x86 and Power, especially as NSD servers, should pose no
> issues.  Having them serve separate file systems (NetApp vs. ESS) means no
> concerns regarding varying architectures within the same fs serving or
> failover scheme.  Mixing them as compute nodes would mean some performance
> differences across the different clients, but you haven't described your
> compute (NSD client) details.
>
> Lyle
>
> ----- Original message -----
> From: "Tomer Perry" <TOMP at il.ibm.com>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
> non-GNR cluster
> Date: Tue, Dec 3, 2019 10:03 AM
>
> Hi,
>
> Actually, I believe that GNR is not a limiting factor here.
> mmexportfs and mmimportfs (man mm??portfs) will export/import the GNR
> configuration as well:
> "If the specified file system device is a IBM Spectrum Scale RAID-based
> file system, then all affected IBM Spectrum Scale RAID objects will be
> exported as well. This includes recovery groups, declustered arrays,
> vdisks, and any other file systems that are based on these objects. For
> more information about IBM Spectrum Scale RAID, see *IBM Spectrum
> Scale RAID: Administration*. "
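>
> For reference, the export/import pair would look like this (filesystem
> name illustrative):
>
>    mmexportfs gl2fs -o gl2fs.export   # on the cluster giving up the fs
>    mmimportfs gl2fs -i gl2fs.export   # on the cluster taking it over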
>
> OTOH, I suspect that due to the version mismatch it wouldn't work, since
> I would assume that the cluster config version is too high for the
> NetApp-based cluster.
> I would also suspect that the filesystem version on the ESS will be
> different.
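>
> Both versions are easy to check up front (a quick sanity check, not part
> of any official procedure):
>
>    mmlsconfig minReleaseLevel   # cluster config level, on each cluster
>    mmlsfs gl2fs -V              # filesystem format version on the ESS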
>
>
> Regards,
>
> Tomer Perry
> Scalable I/O Development (Spectrum Scale)
> email: tomp at il.ibm.com
> 1 Azrieli Center, Tel Aviv 67021, Israel
> Global Tel:    +1 720 3422758
> Israel Tel:      +972 3 9188625
> Mobile:         +972 52 2554625
>
>
>
>
> From:        "Olaf Weiser" <olaf.weiser at de.ibm.com>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        03/12/2019 16:54
> Subject:        [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to
> a non-GNR cluster
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hello,
> "merging" 2 different GPFS clusters into one is not possible...
> for sure you can do "nested" mounts, but that's most likely not what you
> want to do.
>
> If you want to add a GL2 (or any other ESS) to an existing (other)
> cluster, you can't preserve the ESS's RG definitions...
> you need to create the RGs after adding the IO nodes to the existing
> cluster.
>
> So if you got a new ESS (no data on it): simply unconfigure the cluster,
> add the nodes to your existing cluster, and then start configuring the
> RGs.
>
>
>
>
>
> From:        "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
> To:        "gpfsug-discuss at spectrumscale.org" <
> gpfsug-discuss at spectrumscale.org>
> Date:        12/03/2019 09:35 AM
> Subject:        [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a
> non-GNR cluster
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hello everyone,
> I have:
> - A NetApp system with hardware RAID
> - Spectrum Scale 4.2.3-13 running on top of the NetApp
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
>
> What I need to do is to merge the GL2 into the other GPFS cluster
> (running on the NetApp) without losing, of course, the RecoveryGroup
> configuration, etc.
>
> I'd like to ask the experts:
> 1.        whether it is feasible, considering the differences in GPFS
> versions and architectures (x86_64 vs. POWER)
> 2.        if yes, whether anyone has already done something like this and
> what the suggested strategy is
> 3.        finally: is there any documentation dedicated to that, or at
> least something from which the correct procedure can be inferred?
>
> Thank you very much,
>
>   Alvise Dorigo
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss