[gpfsug-discuss] migrating data from GPFS3.5 to ESS appliance(GPFS4.1)

Damir Krstic damir.krstic at gmail.com
Mon Feb 22 16:11:31 GMT 2016


Thanks for the reply - but that explanation doesn't really mean "no downtime"
without elaborating on what "cut over" involves. I can do the sync via rsync or
tar today, but eventually I will have to cut over to the new system.

Is this the case with AFM as well - once everything is synced over, does
cutting over mean users will have to switch by:

1. either mounting the new AFM-synced filesystem on all compute nodes at the
same mount point as the old system (which means downtime to unmount the
existing filesystem and mount the new one),

or

2. end-user training, i.e. telling users to start using the new filesystem and
to move the files they need, because eventually we will shut down the old
filesystem.

If it's true, then, that AFM requires some sort of cut over (either by
disconnecting the old system and mounting the new system at the old mount
point, or by instructing users to start using the new filesystem at once), I
am not sure that AFM gets me anything more than rsync or tar when it comes
to taking downtime (cutting over) for the end user.
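
For option 1, I assume the actual cut over on the GPFS side would boil down to
something like the following (device names and mount point here are just
placeholders):

$ mmumount oldfs -a              # unmount the old filesystem on every node
$ mmchfs newfs -T /projects      # give the new filesystem the old mount point
$ mmmount newfs -a               # mount it cluster-wide again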

Thanks,
Damir




On Mon, Feb 22, 2016 at 7:39 AM Yaron Daniel <YARD at il.ibm.com> wrote:

> Hi
>
> Active File Management (AFM) is an asynchronous cross-cluster utility.
>
> It means you create a new GPFS cluster, migrate the data without downtime,
> and when you are ready you do a last sync and cut over.
>
> Hope this helps.
>
>
>
> Regards
>
>
>
> ------------------------------
>
>
>
> *Yaron Daniel*
> *Server, Storage and Data Services - Team Leader*
> *Global Technology Services*
> 94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
> Phone: +972-3-916-5672
> Fax: +972-3-916-5672
> Mobile: +972-52-8395593
> e-mail: yard at il.ibm.com
> *IBM Israel* <http://www.ibm.com/il/he/>
>
>
>
>
>
> gpfsug-discuss-bounces at spectrumscale.org wrote on 02/22/2016 03:12:14 PM:
>
> > From: Damir Krstic <damir.krstic at gmail.com>
> > To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
> > Date: 02/22/2016 03:12 PM
>
>
> > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS
> > appliance (GPFS4.1)
>
> > Sent by: gpfsug-discuss-bounces at spectrumscale.org
>
>
> >
> > Sorry to revisit this question - AFM seems to be the best way to do
> > this. I was wondering if anyone has done an AFM migration. I am looking
> > at this wiki page for instructions:
> > https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/Migrating%20Data%20Using%20AFM
> > and I am a little confused by step 3, "cut over users" <-- does this
> > mean unmount the existing filesystem and point users to the new filesystem?
> >
> > The reason we were looking at AFM is to avoid downtime - to make the
> > transition as seamless as possible for the end user. I'm not sure what,
> > then, AFM buys us if we still have to take "downtime" in order to
> > cut users over to the new system.
> >
> > Thanks,
> > Damir
> >
> > On Thu, Feb 4, 2016 at 3:15 PM Damir Krstic <damir.krstic at gmail.com>
> wrote:
> > Thanks all for the great suggestions. We will most likely end up using
> > either AFM or some file-copy mechanism (tar/rsync, etc.).
> >
> > On Mon, Feb 1, 2016 at 12:39 PM Wahl, Edward <ewahl at osc.edu> wrote:
> > In the same vein, I've patched rsync to maintain source atimes in
> > Linux for large transitions such as this. Along with the standard
> > "patches" mod for destination atimes it is quite useful. Works with
> > 3.0.8 and 3.0.9; I've not yet ported it to 3.1.x.
> > https://www.osc.edu/sites/osc.edu/files/staff_files/ewahl/onoatime.diff
> >
> > Ed Wahl
> > OSC
> >
> > ________________________________________
> > From: gpfsug-discuss-bounces at spectrumscale.org [gpfsug-discuss-
> > bounces at spectrumscale.org] on behalf of Orlando Richards [
> > orlando.richards at ed.ac.uk]
> > Sent: Monday, February 01, 2016 4:25 AM
> > To: gpfsug-discuss at spectrumscale.org
> > Subject: Re: [gpfsug-discuss] migrating data from GPFS3.5 to ESS
> > appliance (GPFS4.1)
> >
> > For what it's worth - there's a patch for rsync which IBM provided a
> > while back that will copy NFSv4 ACLs (maybe other stuff?). I put it up
> > on the gpfsug github here:
> >
> >    https://github.com/gpfsug/gpfsug-tools/tree/master/bin/rsync
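> >
> > As a rough starting point, an invocation would look something like the
> > following (standard rsync switches; whatever extra option the patched build
> > uses for the GPFS NFSv4 ACLs depends on the patch itself):
> >
> >    $ rsync -aHAX --numeric-ids /oldfs/projects/ /newfs/projects/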
> >
> >
> >
> > On 29/01/16 22:36, Sven Oehme wrote:
> > > Doug,
> > >
> > > This won't really work if you make use of ACLs, special GPFS extended
> > > attributes, quotas, filesets, etc., so unfortunately the answer is that
> > > you need to use a combination of things. There is work going on to make
> > > some of this simpler (e.g. for ACLs), but it's a longer road to get
> > > there. So until then you need to think about multiple aspects.
> > >
> > > 1. You need to get the data across, and there are various ways to do this.
> > >
> > > a) AFM is the simplest of all. Because it understands the GPFS internals
> > > it not only takes care of ACLs, extended attributes and the like, it also
> > > operates in parallel, can prefetch data, etc., so it's an efficient way to
> > > do this - but as already pointed out it doesn't transfer quota or fileset
> > > information.
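> > >
> > > A rough sketch of the setup on the new cluster looks like this (fileset
> > > name, paths and the afmTarget URI are just placeholders, and the right
> > > afmMode depends on how you want to run the migration):
> > >
> > >     # create an AFM fileset whose target is the old filesystem
> > >     $ mmcrfileset newfs projects -p afmTarget=gpfs:///oldfs/projects \
> > >         -p afmMode=lu --inode-space new
> > >     $ mmlinkfileset newfs projects -J /newfs/projects
> > >     # optionally pre-populate the cache instead of waiting for on-demand reads
> > >     $ mmafmctl newfs prefetch -j projects --list-file /tmp/filelist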
> > >
> > > b) You can use rsync or any other pipe-based copy program. The downside
> > > is that they are typically single threaded and take a file-by-file
> > > approach, which means they are very metadata intensive on both the source
> > > and the target side and cause a lot of I/O on both sides.
> > >
> > > c) You can use the policy engine to create a list of files to transfer,
> > > which at least addresses the single-threaded scan part, then partition the
> > > list and run multiple instances of cp or rsync in parallel. This still
> > > doesn't fix the ACL/EA issues, but the data gets there faster.
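> > >
> > > For example (paths, rule names and the output prefix are placeholders) -
> > > put two rules in a policy file, say rules.txt:
> > >
> > >     RULE EXTERNAL LIST 'migrate' EXEC ''
> > >     RULE 'all' LIST 'migrate'
> > >
> > > then run the scan and fan the result out:
> > >
> > >     $ mmapplypolicy /oldfs -P rules.txt -f /tmp/flist -I defer
> > >     # /tmp/flist.list.migrate now holds one record per file; split it into
> > >     # N chunks, strip each record down to its path, and run one rsync or
> > >     # cp instance per chunk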
> > >
> > > 2. You need to get the ACL/EA information over too. There are several
> > > command line tools to dump that data and restore it; they suffer much the
> > > same problems as the data transfers, which is why AFM is the best way of
> > > doing this if you rely on ACL/EA information.
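> > >
> > > For a single file the tools look roughly like this (file paths are
> > > placeholders; doing it for millions of files is where it gets painful):
> > >
> > >     $ mmgetacl -o /tmp/file.acl /oldfs/projects/data.bin
> > >     $ mmputacl -i /tmp/file.acl /newfs/projects/data.bin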
> > >
> > > 3. Transfer the quota/fileset information. There are several ways to do
> > > this, but all require some level of scripting.
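> > >
> > > e.g. something along these lines (fileset names and limits are
> > > placeholders):
> > >
> > >     $ mmlsfileset oldfs -L                    # dump the old fileset layout
> > >     $ mmcrfileset newfs projects_fs
> > >     $ mmlinkfileset newfs projects_fs -J /newfs/projects_fs
> > >     $ mmrepquota -j oldfs                     # read the old fileset quotas
> > >     $ mmsetquota newfs:projects_fs --block 10T:12T --files 1000000:1200000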
> > >
> > > If you have TSM/HSM you could also transfer the data using SOBAR; it's
> > > described in the Advanced Administration guide.
> > >
> > > sven
> > >
> > >
> > > On Fri, Jan 29, 2016 at 11:35 AM, Hughes, Doug
> > > <Douglas.Hughes at deshawresearch.com> wrote:
> > >
> > >     I have found that a tar pipe is much faster than rsync for this sort
> > >     of thing. The fastest of these is ‘star’ (schily tar). On average it
> > >     is about 2x-5x faster than rsync for doing this. After one pass with
> > >     this, you can use rsync for a subsequent or last pass sync.
> > >
> > >     e.g.
> > >
> > >     $ cd /export/gpfs1/foo
> > >
> > >     $ star -c H=xtar | (cd /export/gpfs2/foo; star -xp)
> > >
> > >     This also will not preserve filesets and quotas, though. You should
> > >     be able to automate that with a little bit of awk, perl, or whatnot.
> > >
> > >     *From:* gpfsug-discuss-bounces at spectrumscale.org
> > >     [mailto:gpfsug-discuss-bounces at spectrumscale.org] *On Behalf Of*
> > >     *Damir Krstic*
> > >     *Sent:* Friday, January 29, 2016 2:32 PM
> > >     *To:* gpfsug main discussion list
> > >     *Subject:* [gpfsug-discuss] migrating data from GPFS3.5 to ESS
> > >     appliance (GPFS4.1)
> > >
> > >     We have recently purchased an ESS appliance from IBM (GL6) with 1.5PB
> > >     of storage. We are in the planning stages of implementation. We would
> > >     like to migrate data from our existing GPFS installation (around
> > >     300TB) to the new solution.
> > >
> > >
> > >     We were planning on adding the ESS to our existing GPFS cluster,
> > >     adding its disks, and then deleting our old disks and having the data
> > >     migrated that way. However, the existing block size on our projects
> > >     filesystem is 1M, and in order to extract as much performance as
> > >     possible out of the ESS we would like its filesystem created with a
> > >     larger block size. Besides rsync, do you have any suggestions for how
> > >     to do this without downtime and in the fastest way possible?
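> > >
> > >     (Roughly, that approach would have been something like the following,
> > >     with placeholder NSD names - the catch being that the filesystem keeps
> > >     its original 1M block size:)
> > >
> > >     $ mmadddisk projects -F ess_nsd_stanzas.txt   # add the ESS NSDs to the existing filesystem
> > >     $ mmdeldisk projects "old_nsd1;old_nsd2"      # data is migrated off the old NSDs as they are removed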
> > >
> > >
> > >     I have looked at AFM but it does not seem to migrate quotas and
> > >     filesets, so that may not be an optimal solution.
> > >
> > >
> > >     _______________________________________________
> > >     gpfsug-discuss mailing list
> > >     gpfsug-discuss at spectrumscale.org <http://spectrumscale.org>
> > >     http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> > >
> > >
> > >
> > >
> > > _______________________________________________
> > > gpfsug-discuss mailing list
> > > gpfsug-discuss at spectrumscale.org
> > > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> > >
> >
> > --
> >              --
> >         Dr Orlando Richards
> >      Research Services Manager
> >         Information Services
> >     IT Infrastructure Division
> >         Tel: 0131 650 4994
> >       skype: orlando.richards
> >
> > The University of Edinburgh is a charitable body, registered in
> > Scotland, with registration number SC005336.
> > _______________________________________________
> > gpfsug-discuss mailing list
> > gpfsug-discuss at spectrumscale.org
> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>