[gpfsug-discuss] Best way to migrate data

Ryan Novosielski novosirj at rutgers.edu
Mon Oct 22 16:21:06 BST 2018


It seems like the primary way this helps us is that we transfer user home directories, many of which contain VERY large numbers of small files (in the millions), so running multiple simultaneous rsyncs allows the transfer to continue past any one slow area. I guess it balances the bandwidth constraint against the I/O cost of generating a file list. There are unfortunately one or two known bugs that slow it down: it keeps track of its rsync PIDs, but sometimes a former rsync PID is reused by the system and still gets counted against the number of running rsyncs. It can also think an rsync is still running at the end when the PID actually belongs to some other process by then. I know the author is looking at that. For shorter transfers, you likely won’t run into this.
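
For what it’s worth, the usual guard against that PID-reuse problem is to check not just that the PID still exists but that it still belongs to an rsync. A minimal Python sketch of the idea (not parsyncfp’s actual code; it assumes Linux /proc, and tracked_pids is just a placeholder for whatever list the monitor keeps):

    def is_live_rsync(pid):
        """True only if `pid` exists AND is still running rsync.

        Testing for the PID's existence alone is not enough: once the
        original rsync exits, the kernel may hand the same PID to an
        unrelated process, which would then be miscounted as a
        still-running transfer.
        """
        try:
            with open(f"/proc/{pid}/comm") as f:
                return f.read().strip() == "rsync"
        except OSError:
            return False  # /proc entry gone: that rsync has finished

    # count only the workers that are really still rsync
    running = [pid for pid in tracked_pids if is_live_rsync(pid)]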

I’m not sure I have the time or the programming ability to make this happen, but it seems to me that one could make some major gains by replacing fpart with mmfind in a GPFS environment. Generating the file lists takes a significant amount of time, and mmfind can probably do it faster than anything else that does not have direct access to the GPFS metadata.
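
To make that concrete, here is a rough Python sketch of the pattern I have in mind: let a fast list generator feed a pool of rsync --files-from workers. The paths, host name, and chunk count are made up, and the mmfind invocation is an assumption on my part (it is the find-like sample tool shipped with Scale, so it should accept find-style predicates); plain find or an mmapplypolicy LIST rule would slot in the same way:

    import subprocess

    SRC = "/gpfs/home/someuser/"            # trailing slash matters to rsync
    DST = "destserver:/gpfs/research/home/someuser/"
    NCHUNKS = 8                             # simultaneous rsyncs

    # 1. Build the file list; any fast generator works here.
    out = subprocess.run(["mmfind", SRC.rstrip("/"), "-type", "f"],
                         capture_output=True, text=True, check=True).stdout
    # --files-from wants paths relative to the source directory
    files = [p[len(SRC):] for p in out.splitlines() if p.startswith(SRC)]

    # 2. Deal the list round-robin into NCHUNKS slices and start one
    #    rsync per slice (--files-from implies --relative, so the
    #    directory structure is preserved on the destination).
    procs = []
    for i in range(NCHUNKS):
        chunk = files[i::NCHUNKS]
        if not chunk:
            continue
        listfile = f"/tmp/migrate-chunk.{i}"
        with open(listfile, "w") as f:
            f.write("\n".join(chunk) + "\n")
        procs.append(subprocess.Popen(
            ["rsync", "-a", f"--files-from={listfile}", SRC, DST]))

    for p in procs:
        p.wait()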

> On Oct 19, 2018, at 6:37 AM, Dwayne.Hart at med.mun.ca wrote:
> 
> Thank you Ryan. I’ll have a more in-depth look at this application later today and see how it deals with some of the large genetic files generated by the sequencer when copying them from one GPFS file system to another.
> 
> Best,
> Dwayne
> Dwayne Hart | Systems Administrator IV
> 
> CHIA, Faculty of Medicine 
> Memorial University of Newfoundland 
> 300 Prince Philip Drive
> St. John’s, Newfoundland | A1B 3V6
> Craig L Dobbin Building | 4M409
> T 709 864 6631
> 
>> On Oct 19, 2018, at 7:04 AM, Ryan Novosielski <novosirj at rutgers.edu> wrote:
>> 
>> We use parsyncfp. Our target is not GPFS, though. I was really hoping
>> to hear about something snazzier for GPFS-to-GPFS. Lenovo would
>> probably tell you that HSM is the way to go (we asked something
>> similar when looking for a replacement for our current setup or for
>> distributed storage).
>> 
>>> On 10/18/2018 01:19 PM, Dwayne.Hart at med.mun.ca wrote:
>>> Hi,
>>> 
>>> Just wondering what the best recipe is for migrating a user’s home
>>> directory content from one GPFS file system to another, which hosts
>>> a larger research GPFS file system? I’m currently using rsync and
>>> it has maxed out the client system’s IB interface.
>>> 
>>> Best,
>>> Dwayne
>>> —
>>> Dwayne Hart | Systems Administrator IV
>>>
>>> CHIA, Faculty of Medicine
>>> Memorial University of Newfoundland
>>> 300 Prince Philip Drive
>>> St. John’s, Newfoundland | A1B 3V6
>>> Craig L Dobbin Building | 4M409
>>> T 709 864 6631
>>> 
>> 
>> -- 
>> ____
>> || \\UTGERS,     |----------------------*O*------------------------
>> ||_// the State  |    Ryan Novosielski - novosirj at rutgers.edu
>> || \\ University | Sr. Technologist - 973/972.0922 ~*~ RBHS Campus
>> ||  \\    of NJ  | Office of Advanced Res. Comp. - MSB C630, Newark
>>     `'
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss


