[gpfsug-discuss] Hardware refresh -

Marc A Kaplan makaplan at us.ibm.com
Thu Oct 13 16:27:14 BST 2016


IMO, it is simplest to have both the old and new file systems mounted 
on the same node(s).  Then you can use AFM and/or any other utilities to 
migrate/copy your files from the old file system to the new.
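
For example, a minimal sketch of that approach, assuming the old file 
system is xx and the new one is yy (the names used in the example further 
down); rsync is just one possible copy tool:

mmmount xx                # mount the old file system on this node
mmmount yy                # mount the new file system on this node
rsync -aHAX /xx/ /yy/     # one of many possible ways to copy the data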




From:   "Shankar Balasubramanian" <shankbal at in.ibm.com>
To:     gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date:   10/13/2016 06:46 AM
Subject:        Re: [gpfsug-discuss] Hardware refresh -
Sent by:        gpfsug-discuss-bounces at spectrumscale.org



Please note: although such migration scenarios are one of the supported 
use cases of AFM (in fact, AFM does particularly well at faithfully 
migrating data when both the source and destination clusters are GPFS), 
the scalability of this solution for multi-million-file, multi-terabyte 
file systems has its own set of challenges. These have to be carefully 
understood and evaluated to determine whether AFM will fit the bill.


Best Regards,
Shankar Balasubramanian
STSM, AFM & Async DR Development
IBM Systems
Bangalore - Embassy Golf Links 
India



"Marc A Kaplan" ---10/12/2016 11:25:20 PM---Yes, you can AFM within a 
single cluster, in fact with just a single node. I just set this up on m

From: "Marc A Kaplan" <makaplan at us.ibm.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 10/12/2016 11:25 PM
Subject: Re: [gpfsug-discuss] Hardware refresh -
Sent by: gpfsug-discuss-bounces at spectrumscale.org



Yes, you can AFM within a single cluster, in fact with just a single node. 
 I just set this up on my toy system:

[root at bog-wifi cmvc]# mmlsfileset yy afmlu --afm
Filesets in file system 'yy':
Name                     Status    Path    afmTarget
afmlu                    Linked    /yy/afmlu    gpfs:///xx

[root at bog-wifi cmvc]# mount
 ...
yy on /yy type gpfs (rw,relatime,seclabel)
xx on /xx type gpfs (rw,relatime,seclabel)

[root at bog-wifi cmvc]# mmafmctl yy getstate
Fileset Name    Fileset Target    Cache State    Gateway Node    Queue Length    Queue numExec
------------    --------------    -----------    ------------    ------------    -------------
afmlu           gpfs:///xx        Active         bog-wifi        0               7
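
For reference, a fileset like the one above could be created roughly as 
follows. This is only a sketch inferred from the mmlsfileset output; the 
afmMode shown (local-updates) is a guess based on the fileset name, and 
the prefetch list file is a placeholder:

mmcrfileset yy afmlu --inode-space new -p afmTarget=gpfs:///xx -p afmMode=local-updates
mmlinkfileset yy afmlu -J /yy/afmlu
# optionally pre-populate the cache instead of fetching files on demand
mmafmctl yy prefetch -j afmlu --list-file /tmp/filelist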

So you may add nodes and disks to an existing cluster, upgrade your 
software, define a new FS, migrate data from the old FS to the new FS, 
and then delete the nodes and disks that are no longer needed...
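
In command terms, that sequence might look roughly like the following; 
the node names, NSD names, file system names and stanza file are 
placeholders, and the exact options should be checked against your 
release:

mmaddnode -N newnode1,newnode2              # add the new servers
mmcrnsd -F newdisks.stanza                  # create NSDs on the new storage
mmcrfs gpfs2 -F newdisks.stanza -T /gpfs2   # define the new file system
# ... migrate data from the old file system to /gpfs2 (AFM, rsync, ...) ...
mmdelfs gpfs0                               # remove the old file system
mmdelnsd "oldnsd1;oldnsd2"                  # free the old NSDs
mmdelnode -N oldnode1,oldnode2              # retire the old servers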



From: Stephen Ulmer <ulmer at ulmer.org>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 10/11/2016 09:30 PM
Subject: Re: [gpfsug-discuss] Hardware refresh
Sent by: gpfsug-discuss-bounces at spectrumscale.org



I think that the OP was asking why not expand the existing cluster with 
the new hardware, and just make a new FS?

I’ve not tried to make a cluster talk AFM to itself yet. If that’s 
impossible, then there’s one good reason to make a new cluster (to use AFM 
for migration).

Liberty,

-- 
Stephen



On Oct 11, 2016, at 8:40 PM, Mark.Bush at siriuscom.com wrote:

The only compelling reason for a new cluster would be that the old 
hardware is EOL or you no longer want to pay maintenance on it.

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Marc A 
Kaplan <makaplan at us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Tuesday, October 11, 2016 at 2:58 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Hardware refresh

New FS? Yes, there are some good reasons. 
New cluster? I did not see a compelling argument either way.



From: "Mark.Bush at siriuscom.com" <Mark.Bush at siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 10/11/2016 03:34 PM
Subject: Re: [gpfsug-discuss] Hardware refresh
Sent by: gpfsug-discuss-bounces at spectrumscale.org





Ok. I think I am hearing that a new cluster with a new FS, and copying the 
data from the old cluster to the new one, is the best way forward. Thanks 
everyone for your input. 

From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of Yuri L 
Volobuev <volobuev at us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Tuesday, October 11, 2016 at 12:22 PM
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Hardware refresh
This depends on the committed cluster version level (minReleaseLevel) and 
the file system format. Since NSDv2 is an on-disk format change, older code 
wouldn't be able to understand what it is, and thus if there's a 
possibility of a down-level node looking at the NSD, the NSDv1 format is 
going to be used. The code does NSDv1<->NSDv2 conversions under the covers 
as needed when adding an empty NSD to a file system.
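
A quick way to check where a given cluster and file system stand (a 
sketch; gpfs0 is a placeholder device name):

mmlsconfig minReleaseLevel   # committed cluster version level
mmlsfs gpfs0 -V              # on-disk format version of the file system
mmchconfig release=LATEST    # commit the new level once all nodes are upgraded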

I'd strongly recommend getting a fresh start by formatting a new file 
system. Many things have changed over the course of the last few years. In 
particular, having a 4K-aligned file system can be a pretty big deal, 
depending on what hardware one is going to deploy in the future, and this 
is something that can't be bolted onto an existing file system. Having 4K 
inodes is very handy for many reasons. New directory format and NSD format 
changes are attractive, too. And disks generally tend to get larger with 
time, and at some point you may want to add a disk to an existing storage 
pool that's larger than the existing allocation map format allows. 
Obviously, it's more hassle to migrate data to a new file system, as 
opposed to extending an existing one. In a perfect world, GPFS would offer 
a conversion tool that seamlessly and robustly converts old file systems, 
making them as good as new, but in the real world such a tool doesn't 
exist. Getting a clean slate by formatting a new file system every few 
years is a good long-term investment of time, although it comes 
front-loaded with extra work.
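
As an illustration, a fresh file system with 4K inodes might be created 
along these lines; the device name, stanza file and block size are 
placeholders, so check the defaults for your release:

mmcrfs gpfs2 -F newdisks.stanza -i 4096 -B 1M -T /gpfs2   # -i 4096 gives 4 KiB inodes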

yuri


From: Aaron Knister <aaron.s.knister at nasa.gov>
To: <gpfsug-discuss at spectrumscale.org>, 
Date: 10/10/2016 04:45 PM
Subject: Re: [gpfsug-discuss] Hardware refresh
Sent by: gpfsug-discuss-bounces at spectrumscale.org







Can one format NSDv2 NSDs and put them in a filesystem with NSDv1 NSDs?

-Aaron

On 10/10/16 7:40 PM, Luis Bolinches wrote:
> Hi
>
> Creating a new FS sounds like the best way to go, NSDv2 being a very good
> reason to do so.
>
> AFM for migrations is quite good; the latest versions allow using the NSD
> protocol for mounts as well. Olaf did a great job explaining this
> scenario in chapter 6 of the redbook:
>
> http://www.redbooks.ibm.com/abstracts/sg248254.html?Open
>
> --
> Cheers
>
> On 10 Oct 2016, at 23.05, Buterbaugh, Kevin L
> <Kevin.Buterbaugh at Vanderbilt.Edu
> <mailto:Kevin.Buterbaugh at Vanderbilt.Edu>> wrote:
>
>> Hi Mark,
>>
>> The last time we did something like this was 2010 (we’re doing rolling
>> refreshes now), so there are probably lots of better ways to do this
>> than what we did, but we:
>>
>> 1) set up the new hardware
>> 2) created new filesystems (so that we could make adjustments we
>> wanted to make that can only be made at FS creation time)
>> 3) used rsync to make a 1st pass copy of everything
>> 4) coordinated a time with users / groups to do a 2nd rsync when they
>> weren’t active
>> 5) used symbolic links during the transition (i.e. rm -rvf
>> /gpfs0/home/joeuser; ln -s /gpfs2/home/joeuser /gpfs0/home/joeuser)
>> 6) once everybody was migrated, updated the symlinks (i.e. /home
>> became a symlink to /gpfs2/home)
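
A rough sketch of steps 3 through 6 above, using the paths from the 
list; the rsync options are one reasonable choice, not necessarily what 
was actually used:

rsync -aHAX /gpfs0/home/ /gpfs2/home/             # 1st pass while users are active
rsync -aHAX --delete /gpfs0/home/ /gpfs2/home/    # 2nd pass during the agreed window
rm -rvf /gpfs0/home/joeuser                       # per-user cutover...
ln -s /gpfs2/home/joeuser /gpfs0/home/joeuser     # ...via symbolic link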
>>
>> HTHAL…
>>
>> Kevin
>>
>>> On Oct 10, 2016, at 2:56 PM, Mark.Bush at siriuscom.com
>>> <mailto:Mark.Bush at siriuscom.com> wrote:
>>>
>>> Have a very old cluster built on IBM X3650’s and DS3500. Need to
>>> refresh hardware. Any lessons learned in this process? Is it
>>> easiest to just build new cluster and then use AFM? Add to existing
>>> cluster then decommission nodes? What is the recommended process for
>>> this?
>>>
>>>
>>> Mark
>>>
>>
>> Kevin Buterbaugh - Senior System Administrator
>> Vanderbilt University - Advanced Computing Center for Research and
>> Education
>> Kevin.Buterbaugh at vanderbilt.edu
>> <mailto:Kevin.Buterbaugh at vanderbilt.edu> - (615)875-9633
>>
>>
>>
>
> Ellei edellä ole toisin mainittu: / Unless stated otherwise above:
> Oy IBM Finland Ab
> PL 265, 00101 Helsinki, Finland
> Business ID, Y-tunnus: 0195876-3
> Registered in Finland
>
>
>

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://gpfsug.org/pipermail/gpfsug-discuss_gpfsug.org/attachments/20161013/2f113d1b/attachment-0002.htm>


More information about the gpfsug-discuss mailing list