[gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

Jan-Frode Myklebust janfrode at tanso.net
Thu Dec 5 15:59:07 GMT 2019


The ESS v5.2 release stream, which ships GPFS v4.2.3.x, is still being
maintained for customers that are stuck on v4. You should probably install
that on your ESS if you want to add it to your existing cluster.

BTW: I think Tomer misunderstood the task a bit. It sounded like you needed
to keep the existing recovery groups from the ESS in the merge. That would
probably be complicated. Adding an empty ESS to an existing cluster should
not be complicated; it's just not properly documented anywhere I'm aware
of.
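
As a rough sketch, "adding an empty ESS" boils down to something like the
following, assuming the I/O nodes have been reinstalled with the 4.2.3-based
ESS release (the node names essio1/essio2 are made up):

   mmaddnode -N essio1,essio2                    # essio1/essio2 = the ESS I/O nodes (example names)
   mmchlicense server --accept -N essio1,essio2  # they need server licenses
   # then create the recovery groups and vdisks from inside the merged cluster

The exact ESS deployment steps depend on the release, so treat this as an
outline rather than a procedure.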



 -jf

On Thu, 5 Dec 2019 at 15:50, Dorigo Alvise (PSI) <alvise.dorigo at psi.ch> wrote:

> This is quite critical storage for data taking. It is not easy to update it
> to GPFS 5 because that facility has very short shutdown periods. Thank you
> for pointing out that 4.2.3 stream. The entire storage will be replaced in
> the future; at the moment we just need to expand it to survive for a while.
>
>
> This merge seems quite tricky to implement, and I haven't seen consistent
> opinions among the people who kindly answered. According to Jan-Frode,
> Kaplan and T. Perry it should be possible, in principle, to do the merge.
> Other people suggest a remote mount, which is not a solution for my use
> case. Others suggest not doing it at all.
>
>
>    A
>
>
>
> ------------------------------
> *From:* gpfsug-discuss-bounces at spectrumscale.org <
> gpfsug-discuss-bounces at spectrumscale.org> on behalf of Daniel Kidger <
> daniel.kidger at uk.ibm.com>
> *Sent:* Thursday, December 5, 2019 11:24:08 AM
>
> *To:* gpfsug main discussion list
> *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>
> One additional question to ask is: what are your long-term plans for the
> 4.2.3 Spectrum Scale cluster?  Do you expect to upgrade it to version 5.x
> (hopefully before 4.2.3 goes out of support)?
>
> Also I assume your Netapp hardware is the standard Netapp block storage,
> perhaps based on their standard 4U60 shelves daisy-chained together?
>
> Daniel
>
> _________________________________________________________
> *Daniel Kidger*
> IBM Technical Sales Specialist
> Spectrum Scale, Spectrum Discover and IBM Cloud Object Store
>
> +44 (0)7818 522 266
> daniel.kidger at uk.ibm.com
>
>
>
>
> On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) <alvise.dorigo at psi.ch> wrote:
>
>
> Thanks, Anderson, for the material. In principle our idea was to scratch the
> filesystem on the GL2, put its NSDs in a dedicated pool and then merge them
> into the existing filesystem, which would remain on V4. I do not want to
> create a filesystem on the GL2, but rather use its space to expand the other
> cluster's filesystem.
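>
> Roughly what I have in mind, once the GL2 vdisk-backed NSDs exist (NSD, pool
> and filesystem names below are only placeholders):
>
>    # stanza describing one GL2 NSD, assigned to a dedicated pool
>    %nsd:
>      nsd=gl2_data_001
>      servers=essio1,essio2
>      usage=dataOnly
>      pool=gl2pool
>
>    mmadddisk gpfs0 -F gl2_nsd.stanza   # gpfs0 = the existing NetApp-based filesystem (example name)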
>
>
>    A
> ------------------------------
> *From:* gpfsug-discuss-bounces at spectrumscale.org <
> gpfsug-discuss-bounces at spectrumscale.org> on behalf of Anderson Ferreira
> Nobre <anobre at br.ibm.com>
> *Sent:* Wednesday, December 4, 2019 3:07:18 PM
> *To:* gpfsug-discuss at spectrumscale.org
> *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>
> Hi Dorigo,
>
> From a cluster-administration point of view, I don't think it's a good
> idea to have a heterogeneous cluster. There are too many differences between
> V4 and V5, and you most probably won't be able to take advantage of many of
> the V5 enhancements. One example is the new filesystem layout in V5; at the
> moment the only way to migrate to it is to create a new filesystem in V5
> with the new layout and migrate the data. That is inevitable. I have seen
> clients say that they don't need all those enhancements, but the truth is
> that when you face a performance issue that is only addressable with the new
> features, someone will ask why we didn't consider that at the
> beginning.
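>
> (For illustration only, since that migration is essentially a new filesystem
> plus a data copy; the device names, block size and copy tool below are just
> placeholders:)
>
>    mmcrfs newfs -F new_nsd.stanza -B 4M -T /gpfs/newfs   # new V5-format filesystem
>    rsync -aH /gpfs/oldfs/ /gpfs/newfs/                   # then move the data across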
>
> Use this time to review whether it would be better to change the block size
> of your filesystem. There's a script called filehist
> in /usr/lpp/mmfs/samples/debugtools that creates a histogram of the files in
> your current filesystem. Here's the link with additional information:
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata
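>
> (As a starting point you can check the current block size; the filesystem
> name here is just an example:)
>
>    mmlsfs gpfs0 -B                         # current filesystem block size
>    ls /usr/lpp/mmfs/samples/debugtools/    # filehist lives here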
>
> Different RAID configurations also bring unexpected performance
> behaviour, unless you are planning to create different pools and use ILM to
> manage the files across those pools.
>
> One last thing: it's a good idea to follow the recommended levels for
> Spectrum Scale:
>
> https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning
>
> Anyway, you are the system administrator; you know better than anyone how
> complex it is to manage this cluster.
>
> Abraços / Regards / Saludos,
>
>
> *Anderson Nobre*
> Power and Storage Consultant
> IBM Systems Hardware Client Technical Team – IBM Systems Lab Services
>
>
> ------------------------------
> Phone: 55-19-2132-4317
> E-mail: anobre at br.ibm.com
>
>
>
> ----- Original message -----
> From: "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
> non-GNR cluster
> Date: Wed, Dec 4, 2019 06:44
>
>
> Thank you all for the answers. Let me recap my replies to your questions:
>
>
>
>    1. The purpose is not to merge clusters "per se"; it is to add the
>    GL2's 700 TB of raw space to the current filesystem provided by the
>    GPFS/NetApp system (which is running out of free space). Of course I am
>    well aware of the heterogeneity of this hypothetical system, so the GL2's
>    NSDs would go to a dedicated pool; but in the end I need a single
>    namespace for the files (see the placement sketch after this list).
>    2. I do not want to do the opposite (merging the GPFS/NetApp into the
>    GL2 cluster) because the former is in production and I cannot schedule
>    long downtimes.
>    3. All systems have proper licensing, of course. What does it mean that
>    I could lose IBM support? If the support case is for a failing disk drive,
>    I do not think so; if it is for some "strange" behaviour of GPFS, I can
>    probably understand.
>    4. The NSDs (in the NetApp system) are in their roles: what do you mean
>    exactly? There are RAID sets attached to servers that actually act as NSD
>    servers, together with their attached LUNs.
>
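> A rough placement sketch for point 1 (the pool, policy-file and filesystem
> names are invented; a WHERE clause could restrict which files are affected):
>
>    RULE 'toGL2' SET POOL 'gl2pool'     # route newly created files to the GL2 pool
>
>    mmchpolicy gpfs0 placement.pol      # install the placement policy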
>
>    Alvise
> ------------------------------
> *From:* gpfsug-discuss-bounces at spectrumscale.org <
> gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne <
> lgayne at us.ibm.com>
> *Sent:* Tuesday, December 3, 2019 8:30:31 PM
> *To:* gpfsug-discuss at spectrumscale.org
> *Cc:* gpfsug-discuss at spectrumscale.org
> *Subject:* Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster
>
> For:
>
> - A NetApp system with hardware RAID
> - SpectrumScale 4.2.3-13 running on top of the NetApp *<--- Are these
> NSD servers in their GPFS roles (where Scale "runs on top")?*
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
>
> What I need to do is to merge the GL2 into the other GPFS cluster (running
> on the NetApp) without losing, of course, the RecoveryGroup configuration,
> etc.
>
> I'd like to ask the experts
> 1.        whether it is feasible, considering the difference in GPFS
> versions and the architecture differences (x86_64 vs. POWER)
> 2.        if yes, whether anyone has already done something like this, and
> what the suggested best strategy is
> 3.        finally: is there any documentation dedicated to that, or at
> least something that points towards the correct procedure?
>
> ......
> Some observations:
>
>
> 1) Why do you want to MERGE the GL2 into a single cluster with the existing
> one, rather than simply allowing remote mount of the ESS filesystem by the
> other GPFS (NSD client) nodes?
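>
> (For reference, the remote-mount route is roughly the following; the cluster,
> filesystem and node names are examples:)
>
>    # on the ESS (owning) cluster
>    mmauth genkey new
>    mmauth update . -l AUTHONLY
>    mmauth add netapp.cluster -k netapp_id_rsa.pub
>    mmauth grant netapp.cluster -f gl2fs
>
>    # on the NetApp (accessing) cluster
>    mmremotecluster add ess.cluster -n essio1,essio2 -k ess_id_rsa.pub
>    mmremotefs add gl2fs -f gl2fs -C ess.cluster -T /gpfs/gl2fs
>    mmmount gl2fs -a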
>
> 2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our
> coexistence rules.
>
> 3) Mixing x86 and POWER, especially as NSD servers, should pose no
> issues.  Keeping them as separate file systems (NetApp vs. ESS) means no
> concerns regarding varying architectures within the same filesystem's
> serving or failover scheme.  Mixing them as compute nodes would mean some
> performance differences across the different clients, but you haven't
> described your compute (NSD client) details.
>
> Lyle
>
> ----- Original message -----
> From: "Tomer Perry" <TOMP at il.ibm.com>
> Sent by: gpfsug-discuss-bounces at spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
> non-GNR cluster
> Date: Tue, Dec 3, 2019 10:03 AM
>
> Hi,
>
> Actually, I believe that GNR is not a limiting factor here.
> mmexportfs and mmimportfs (man mm??portfs) will export/import the GNR
> configuration as well:
> "If the specified file system device is a IBM Spectrum Scale RAID-based
> file system, then all affected IBM Spectrum Scale RAID objects will be
> exported as well. This includes recovery groups, declustered arrays,
> vdisks, and any other file systems that are based on these objects. For
> more information about IBM Spectrum Scale RAID, see *IBM Spectrum
> Scale RAID: Administration*."
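>
> (In principle, something like the following; the filesystem name is just a
> placeholder:)
>
>    mmexportfs gl2fs -o gl2fs.exp    # on the ESS cluster
>    # hand the disks/servers over to the target cluster, then:
>    mmimportfs gl2fs -i gl2fs.exp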
>
> OTOH, I suspect that due to the version mismatch it wouldn't work, since
> I would assume that the cluster config version is too high for the
> NetApp-based cluster.
> I would also suspect that the filesystem version on the ESS will be
> different.
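>
> (Both are easy to check on each cluster, for example:)
>
>    mmlsconfig minReleaseLevel   # cluster configuration / release level
>    mmlsfs gl2fs -V              # filesystem format version (device name is an example)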
>
>
> Regards,
>
> Tomer Perry
> Scalable I/O Development (Spectrum Scale)
> email: tomp at il.ibm.com
> 1 Azrieli Center, Tel Aviv 67021, Israel
> Global Tel:    +1 720 3422758
> Israel Tel:      +972 3 9188625
> Mobile:         +972 52 2554625
>
>
>
>
> From:        "Olaf Weiser" <olaf.weiser at de.ibm.com>
> To:        gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> Date:        03/12/2019 16:54
> Subject:        [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to
> a non-GNR cluster
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hello,
> "merging" two different GPFS clusters into one is not possible.
> For sure you can do "nested" mounts, but that's most likely not what
> you want to do.
>
> If you want to add a GL2 (or any other ESS) to an existing (other)
> cluster, you can't preserve the ESS's RG definitions;
> you need to create the RGs after adding the I/O nodes to the existing
> cluster.
>
> So if you have a new ESS (with no data on it), simply unconfigure its
> cluster, add the nodes to your existing cluster, and then start configuring
> the RGs.
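>
> (Very roughly, once the I/O nodes have joined the cluster; the RG, stanza and
> filesystem names here are invented, and the stanza contents come from the ESS
> topology:)
>
>    mmcrrecoverygroup rgL -F rgL.stanza --servers essio1,essio2
>    mmcrrecoverygroup rgR -F rgR.stanza --servers essio2,essio1
>    mmcrvdisk -F vdisk.stanza        # log and data vdisks
>    mmcrnsd -F vdisk.stanza          # turn the vdisks into NSDs
>    mmadddisk gpfs0 -F vdisk.stanza  # add them to the existing filesystem (example name)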
>
>
>
>
>
> From:        "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
> To:        "gpfsug-discuss at spectrumscale.org" <
> gpfsug-discuss at spectrumscale.org>
> Date:        12/03/2019 09:35 AM
> Subject:        [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to a
> non-GNR cluster
> Sent by:        gpfsug-discuss-bounces at spectrumscale.org
> ------------------------------
>
>
>
> Hello everyone,
> I have:
> - A NetApp system with hardware RAID
> - SpectrumScale 4.2.3-13 running on top of the NetApp
> - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)
>
> What I need to do is to merge the GL2 into the other GPFS cluster (running
> on the NetApp) without losing, of course, the RecoveryGroup configuration,
> etc.
>
> I'd like to ask the experts
> 1.        whether it is feasible, considering the difference in GPFS
> versions and the architecture differences (x86_64 vs. POWER)
> 2.        if yes, whether anyone has already done something like this, and
> what the suggested best strategy is
> 3.        finally: is there any documentation dedicated to that, or at
> least something that points towards the correct procedure?
>
> Thank you very much,
>
>   Alvise Dorigo
>
>
>
>
>
>
>
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>


More information about the gpfsug-discuss mailing list