[gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

Lyle Gayne lgayne at us.ibm.com
Thu Dec 5 15:58:39 GMT 2019


One tricky bit in this case is that ESS is always recommended to run as
its own standalone cluster, so MERGING it as a storage pool or pools
into a cluster already containing NetApp storage wouldn't generally be
recommended.

Yet you cannot achieve the stated goal of a single fs image/mount point
containing both types of storage that way.

Perhaps our ESS folk should weigh in regarding possible routes?

Lyle



From:	Christopher Black <cblack at nygenome.org>
To:	gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:	12/05/2019 10:53 AM
Subject:	[EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
            non-GNR cluster
Sent by:	gpfsug-discuss-bounces at spectrumscale.org



If you have two clusters that are hard to merge, but you are facing the
need to provide capacity for more writes, another option to consider
would be to set up a filesystem on the GL2 with an AFM relationship to
the filesystem on the NetApp GPFS cluster for accessing older data, and
point clients to the new GL2 filesystem.
Some downsides to that approach include introducing a dependency on AFM
(and a potential performance reduction) to get to older data. There may
also be complications depending on how your filesets are laid out.
At some point, when you have more capacity in the 5.x cluster and/or
are ready to move off the NetApp, you could use AFM to prefetch all
data into the new filesystem. In theory, you could then (re)build the
NSD servers connected to the NetApp on 5.x, add them to the new cluster
and use them for a separate pool, or keep them as a separate 5.x
cluster.
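
A minimal sketch of that AFM setup, assuming the old filesystem is
reachable from the GL2 cluster (e.g. via remote mount); the commands
are standard GPFS AFM commands, but all filesystem, fileset and path
names here are invented, and the exact AFM attributes should be checked
against the documentation for your level:

    # On the GL2 cluster: an AFM fileset caching the old filesystem.
    # (For a final migration, local-update mode may fit better than ro.)
    mmcrfileset gl2fs olddata --inode-space new \
        -p afmMode=ro -p afmTarget=gpfs:///gpfs/netappfs
    mmlinkfileset gl2fs olddata -J /gpfs/gl2fs/olddata

    # Later, to move off the NetApp, populate the cache in bulk
    # (some levels require a --list-file or --directory argument).
    mmafmctl gl2fs prefetch -j olddata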

Best,
Chris

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of "Dorigo
Alvise (PSI)" <alvise.dorigo at psi.ch>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Thursday, December 5, 2019 at 9:50 AM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster



This is quite critical storage for data taking. It is not easy to
update to GPFS 5 because that facility has very short shutdown periods.
Thank you for pointing that out about 4.2.3. The entire storage will be
replaced in the future; at the moment we just need to expand it to
survive for a while.

This merge seems quite tricky to implement, and I haven't seen
consistent opinions among the people who kindly answered. According to
Jan Frode, Kaplan and T. Perry it should be possible, in principle, to
do the merge... Other people suggest a remote mount, which is not a
solution for my use case. Others suggest not to do it at all...

   A

From: gpfsug-discuss-bounces at spectrumscale.org
<gpfsug-discuss-bounces at spectrumscale.org> on behalf of Daniel Kidger
<daniel.kidger at uk.ibm.com>
Sent: Thursday, December 5, 2019 11:24:08 AM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR cluster

One additional question to ask is: what are your long-term plans for
the 4.2.3 Spectrum Scale cluster? Do you expect to upgrade it to
version 5.x (hopefully before 4.2.3 goes out of support)?

Also, I assume your NetApp hardware is the standard NetApp block
storage, perhaps based on their standard 4U60 shelves daisy-chained
together?
Daniel

_________________________________________________________
Daniel Kidger
IBM Technical Sales Specialist
Spectrum Scale, Spectrum Discover and IBM Cloud Object Store

+44-(0)7818 522 266
daniel.kidger at uk.ibm.com

      On 5 Dec 2019, at 09:29, Dorigo Alvise (PSI) <alvise.dorigo at psi.ch>
      wrote:
      Thanks Anderson for the material. In principle our idea was to
      scratch the filesystem on the GL2, put its NSDs in a dedicated
      pool, and then merge them into the filesystem, which would remain
      on V4. I do not want to create a FS on the GL2 but rather use its
      space to expand the other cluster's filesystem.

         A
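
      A rough sketch of that expansion path, assuming the GL2 vdisk
      NSDs already exist and are visible to the V4 cluster; every
      device, server and pool name below is invented:

        # /tmp/gl2disks.stanza -- assign the GL2 NSDs to a dedicated
        # data pool of the existing filesystem:
        #   %nsd: nsd=gl2_nsd01 usage=dataOnly failureGroup=20 pool=gl2pool
        #   %nsd: nsd=gl2_nsd02 usage=dataOnly failureGroup=20 pool=gl2pool
        mmadddisk netappfs -F /tmp/gl2disks.stanza

        # Without a placement rule, new files keep landing in the
        # system pool; a one-line policy sends them to the new pool.
        #   /tmp/placement.rules:  RULE 'toGL2' SET POOL 'gl2pool'
        mmchpolicy netappfs /tmp/placement.rules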

      From: gpfsug-discuss-bounces at spectrumscale.org
      <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Anderson
      Ferreira Nobre <anobre at br.ibm.com>
      Sent: Wednesday, December 4, 2019 3:07:18 PM
      To: gpfsug-discuss at spectrumscale.org
      Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR
      cluster

      Hi Dorigo,

      From the point of view of cluster administration, I don't think
      it's a good idea to have a heterogeneous cluster. There are too
      many differences between V4 and V5, and most probably you won't
      take advantage of many of the V5 enhancements. One example is the
      new filesystem layout in V5: at this moment the way to migrate a
      filesystem is to create a new filesystem in V5 with the new
      layout and migrate the data. That is inevitable. I have seen
      clients say they don't need all those enhancements, but the truth
      is that when you face a performance issue that is only
      addressable with the new features, someone will ask why we didn't
      consider that at the beginning.

      Use this time to review whether it would be better to change the
      block size of your filesystem. There's a script called filehist
      in /usr/lpp/mmfs/samples/debugtools that creates a histogram of
      the files in your current filesystem. Here's the link with
      additional information:
      https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Data%20and%20Metadata
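
      If filehist's exact invocation is unclear on your level, a size
      histogram can also be approximated with the policy engine. This
      is only a sketch: the path is invented, and it assumes the SHOW()
      value appears as the fourth field of the generated list records:

        # /tmp/hist.rules:
        #   RULE EXTERNAL LIST 'sizes' EXEC ''
        #   RULE 'allfiles' LIST 'sizes' SHOW(VARCHAR(FILE_SIZE))
        mmapplypolicy /gpfs/netappfs -P /tmp/hist.rules -I defer -f /tmp/hist

        # Bucket the sizes into powers of two.
        awk '{ s=$4; b=1; while (b<s) b*=2; c[b]++ }
             END { for (b in c) print b, c[b] }' \
            /tmp/hist.list.sizes | sort -n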

      Different RAID configurations also bring unexpected performance
      behaviors, unless you are planning to create different pools and
      use ILM to manage the files across them.
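
      For example, a threshold-driven migration between pools uses
      standard policy syntax like the following; pool names and
      thresholds are only illustrative:

        # /tmp/migrate.rules -- move data down when the system pool
        # passes 90% full, until occupancy drops to 70%:
        #   RULE 'spill' MIGRATE FROM POOL 'system'
        #        THRESHOLD(90,70) WEIGHT(FILE_SIZE)
        #        TO POOL 'gl2pool'
        mmchpolicy netappfs /tmp/migrate.rules
        mmapplypolicy netappfs   # or driven by a lowDiskSpace callback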

      One last thing: it's a good idea to follow the recommended levels
      for Spectrum Scale:
      https://www.ibm.com/support/pages/ibm-spectrum-scale-software-version-recommendation-preventive-service-planning

      Anyway, you are the system administrator; you know better than
      anyone how complex it is to manage this cluster.



                                                                                     
 Abraços / Regards / Saludos,

 Anderson Nobre
 Power and Storage Consultant
 IBM Systems Hardware Client Technical Team – IBM Systems Lab Services

 Phone: 55-19-2132-4317
 E-mail: anobre at br.ibm.com

       ----- Original message -----
       From: "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
       Sent by: gpfsug-discuss-bounces at spectrumscale.org
       To: "gpfsug-discuss at spectrumscale.org"
       <gpfsug-discuss at spectrumscale.org>
       Cc:
       Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
       non-GNR cluster
        Date: Wed, Dec 4, 2019 06:44

        Thank you all for the answers. Let me recap my replies to your
        questions:

           1.  The purpose is not to merge clusters per se; it is to add
               the GL2's 700TB of raw space to the current filesystem
               provided by the GPFS/NetApp (which is running out of free
               space). Of course I know well the heterogeneity of this
               hypothetical system, so the GL2's NSDs would go into a
               special pool; but in the end I need a unique namespace
               for files.
           2.  I do not want to do the opposite (merging the GPFS/NetApp
               into the GL2 cluster) because the former is in production
               and I cannot schedule long downtimes.
           3.  All systems have proper licensing, of course. What does
               it mean that I could lose IBM support? If the support is
               for a failing disk drive, I do not think so; if it is for
               "strange" GPFS behaviour, I can probably understand.
           4.  "NSDs (in the NetApp system) are in their roles": what do
               you mean exactly? There are RAID sets attached to
               servers, which together with their attached LUNs act as
               NSDs.

          Alvise

       From: gpfsug-discuss-bounces at spectrumscale.org
       <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Lyle Gayne
       <lgayne at us.ibm.com>
       Sent: Tuesday, December 3, 2019 8:30:31 PM
       To: gpfsug-discuss at spectrumscale.org
       Cc: gpfsug-discuss at spectrumscale.org
       Subject: Re: [gpfsug-discuss] How to join GNR nodes to a non-GNR
       cluster

       For:

       - A NetApp system with hardware RAID
        - SpectrumScale 4.2.3-13 running on top of the NetApp  <--- Are
        these NSD servers in their GPFS roles (where Scale "runs on
        top")?
       - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

        What I need to do is to merge the GL2 into the other GPFS
        cluster (running on the NetApp) without losing, of course, the
        RecoveryGroup configuration, etc.

        I'd like to ask the experts:
        1.        whether it is feasible, considering the differences in
        GPFS versions and architectures (x86_64 vs. POWER);
        2.        if yes, whether anyone has already done something like
        this, and what the suggested best strategy is;
        3.        finally, whether there is any documentation dedicated
        to this, or at least something pointing to the correct
        procedure?

       ......
       Some observations:


        1) Why do you want to MERGE the GL2 into a single cluster with
        the rest, rather than simply allowing remote mount of the ESS
        filesystem by the other GPFS (NSD client) nodes? (A remote-mount
        sketch follows these observations.)

       2) Interop of the mix of 4.2 and 5.0 levels should be allowed by our
       coexistence rules.

        3) Mixing x86 and Power, especially as NSD servers, should pose
        no issues.  Having them serve separate file systems (NetApp vs.
        ESS) means no concerns regarding varying architectures within
        the same fs serving or failover scheme.  Mixing them as compute
        nodes would mean some performance differences across the
        different clients, but you haven't described your compute (NSD
        client) details.
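
        For reference, the remote-mount alternative in observation 1)
        might be sketched as follows. The flow (mmauth,
        mmremotecluster, mmremotefs) is standard, but all cluster, node
        and filesystem names are placeholders, and the exchange of key
        files between the two administrators is not shown:

          # On the ESS (owning) cluster: enable auth, grant access.
          mmauth genkey new
          mmauth update . -l AUTHONLY
          mmauth add netapp.cluster -k /tmp/netapp_id_rsa.pub
          mmauth grant netapp.cluster -f gl2fs

          # On the NetApp (accessing) cluster: define and mount it.
          mmauth genkey new
          mmauth update . -l AUTHONLY
          mmremotecluster add ess.cluster -n essio1,essio2 \
              -k /tmp/ess_id_rsa.pub
          mmremotefs add rgl2fs -f gl2fs -C ess.cluster -T /gpfs/rgl2fs
          mmmount rgl2fs -a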

       Lyle
       ----- Original message -----
       From: "Tomer Perry" <TOMP at il.ibm.com>
       Sent by: gpfsug-discuss-bounces at spectrumscale.org
       To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
       Cc:
       Subject: [EXTERNAL] Re: [gpfsug-discuss] How to join GNR nodes to a
       non-GNR cluster
       Date: Tue, Dec 3, 2019 10:03 AM

       Hi,

        Actually, I believe that GNR is not a limiting factor here.
        mmexportfs and mmimportfs (man mm??portfs) will export/import
        the GNR configuration as well:
       "If the specified file system device is a IBM Spectrum Scale
       RAID-based file system, then all affected IBM Spectrum Scale RAID
       objects will be exported as well. This includes recovery groups,
       declustered arrays, vdisks, and any other file systems that are
       based on these objects. For more information about IBM Spectrum
       Scale RAID, see IBM Spectrum Scale RAID: Administration."

        OTOH, I suspect that due to the version mismatch it wouldn't
        work, since I would assume that the cluster config version is
        too high for the NetApp-based cluster.
        I would also suspect that the filesystem version on the ESS
        will be different.
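
        In outline, and with placeholder filesystem and file names,
        the export/import path plus the version checks that would
        decide whether it can work at all:

          # Compare levels first; importing into a cluster at a lower
          # config/filesystem version is the likely blocker here.
          mmlsconfig minReleaseLevel     # run on both clusters
          mmlsfs gl2fs -V                # filesystem format version

          # On the ESS cluster: export the filesystem together with
          # its GNR objects (recovery groups, declustered arrays,
          # vdisks).
          mmumount gl2fs -a
          mmexportfs gl2fs -o /tmp/gl2fs.export

          # On the NetApp cluster, after the IO nodes have been moved:
          mmimportfs gl2fs -i /tmp/gl2fs.export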


       Regards,

       Tomer Perry
       Scalable I/O Development (Spectrum Scale)
       email: tomp at il.ibm.com
       1 Azrieli Center, Tel Aviv 67021, Israel
       Global Tel:    +1 720 3422758
       Israel Tel:      +972 3 9188625
       Mobile:         +972 52 2554625




       From:        "Olaf Weiser" <olaf.weiser at de.ibm.com>
       To:        gpfsug main discussion list
       <gpfsug-discuss at spectrumscale.org>
       Date:        03/12/2019 16:54
       Subject:        [EXTERNAL] Re: [gpfsug-discuss] How to join GNR
       nodes to a non-GNR cluster
       Sent by:        gpfsug-discuss-bounces at spectrumscale.org




        Hello,
        "merging" 2 different GPFS clusters into one is not possible.
        For sure you can do "nested" mounts, but that's most likely not
        what you want to do.

        If you want to add a GL2 (or any other ESS) to an existing
        (other) cluster, you can't preserve the ESS's RG definitions;
        you need to create the RGs after adding the IO nodes to the
        existing cluster.

        So if you got a new ESS (no data on it): simply unconfigure its
        cluster, add the nodes to your existing cluster, and then start
        configuring the RGs.
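
        A condensed sketch of that sequence. On real ESS hardware the
        deployment tooling normally drives these steps, and all node,
        recovery group and stanza file names below are invented:

          # On the (empty) ESS: tear down its own cluster.
          mmshutdown -a
          mmdelnode -a

          # From the existing cluster: add the ESS IO nodes.
          mmaddnode -N essio1,essio2
          mmchlicense server --accept -N essio1,essio2

          # Recreate the recovery groups inside the existing cluster,
          # then build vdisks and NSDs on top of them.
          mmcrrecoverygroup rgL -F rgL.stanza --servers essio1,essio2
          mmcrrecoverygroup rgR -F rgR.stanza --servers essio2,essio1
          mmcrvdisk -F vdisks.stanza
          mmcrnsd -F vdisks.stanza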





       From:        "Dorigo Alvise (PSI)" <alvise.dorigo at psi.ch>
       To:        "gpfsug-discuss at spectrumscale.org"
       <gpfsug-discuss at spectrumscale.org>
       Date:        12/03/2019 09:35 AM
       Subject:        [EXTERNAL] [gpfsug-discuss] How to join GNR nodes to
       a non-GNR cluster
       Sent by:        gpfsug-discuss-bounces at spectrumscale.org




       Hello everyone,
       I have:
       - A NetApp system with hardware RAID
       - SpectrumScale 4.2.3-13 running on top of the NetApp
       - A GL2 system with ESS 5.3.2.1 (Spectrum Scale 5.0.2-1)

        What I need to do is to merge the GL2 into the other GPFS
        cluster (running on the NetApp) without losing, of course, the
        RecoveryGroup configuration, etc.

        I'd like to ask the experts:
        1.        whether it is feasible, considering the differences in
        GPFS versions and architectures (x86_64 vs. POWER);
        2.        if yes, whether anyone has already done something like
        this, and what the suggested best strategy is;
        3.        finally, whether there is any documentation dedicated
        to this, or at least something pointing to the correct
        procedure?

       Thank you very much,

          Alvise Dorigo




_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




