[gpfsug-discuss] gpfs filesets question

Stephan Graf st.graf at fz-juelich.de
Mon Apr 20 09:29:17 BST 2020


Hi,

we noticed this behavior when we tried to move HSM-migrated files
between filesets. The move causes a recall, which is very annoying when
the data end up in the same pools afterwards and have to be migrated
back to tape. A way to spot affected files in advance is sketched below.
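
A minimal sketch, assuming IBM Spectrum Protect for Space Management
(TSM HSM) as the space manager; /gpfs/home/hess/project is a
hypothetical source directory:

  # List the HSM state of every file in the (hypothetical) source tree.
  # The dsmls file-state column shows r (resident), p (premigrated)
  # or m (migrated); the 'm' files are the ones a cross-fileset move
  # would recall.
  find /gpfs/home/hess/project -type f -print0 | xargs -0 dsmls

  # Optionally recall them in bulk beforehand, so the move does not
  # stall on many small synchronous recalls.
  find /gpfs/home/hess/project -type f -print0 | xargs -0 dsmrecall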
@IBM: should we open an RFE to address this?
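
For what it is worth, the copy-and-delete behavior Bob describes below
is easy to verify, because mv cannot rename() across a fileset boundary
and falls back to copy + unlink, giving the file a new inode. A small
sketch, using two hypothetical fileset junctions /gpfs/home/arc and
/gpfs/home/arcadm:

  # Create a test file in the source fileset and note its inode number.
  echo test > /gpfs/home/arc/testfile
  stat -c '%i' /gpfs/home/arc/testfile

  # "Move" it across the fileset boundary.
  mv /gpfs/home/arc/testfile /gpfs/home/arcadm/testfile

  # A different inode number here means the move was really a full
  # copy of the data followed by a delete of the original.
  stat -c '%i' /gpfs/home/arcadm/testfile

(Incidentally, the InodeSpace column is 0 for every fileset in the
mmlsfileset listing below, so they all share the root inode space and
are indeed dependent filesets; per Bob, the copy happens for dependent
and independent filesets alike.)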

Stephan

On 18.04.2020 at 17:04, Stephen Ulmer wrote:
> Is this still true if the source and target filesets are both in the same 
> storage pool? It seems like they could just move the metadata… 
> Especially in the case of dependent filesets where the metadata is 
> actually in the same allocation area for both the source and target.
> 
> Maybe this just doesn’t happen often enough to optimize?
> 
> -- 
> Stephen
> 
> 
> 
>> On Apr 16, 2020, at 12:50 PM, Oesterlin, Robert
>> <Robert.Oesterlin at nuance.com> wrote:
>>
>> Moving data between filesets is like moving files between file
>> systems. Normally, moving files between directories is a simple
>> metadata operation, but moving between filesets (dependent or
>> independent) is a full copy and a delete of the old data.
>> Bob Oesterlin
>> Sr Principal Storage Engineer, Nuance
>> From: <gpfsug-discuss-bounces at spectrumscale.org> on behalf of
>> "J. Eric Wonderley" <eric.wonderley at vt.edu>
>> Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Date: Thursday, April 16, 2020 at 11:32 AM
>> To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
>> Subject: [EXTERNAL] [gpfsug-discuss] gpfs filesets question
>> I have filesets set up in a file system; it looks like this:
>> [root at cl005 ~]# mmlsfileset home -L
>> Filesets in file system 'home':
>> Name         Id  RootInode  ParentId  Created                   InodeSpace  MaxInodes  AllocInodes  Comment
>> root          0          3        --  Tue Jun 30 07:54:09 2015           0  402653184    320946176  root fileset
>> hess          1  543733376         0  Tue Jun 13 14:56:13 2017           0          0            0
>> predictHPC    2    1171116         0  Thu Jan  5 15:16:56 2017           0          0            0
>> HYCCSIM       3  544258049         0  Wed Jun 14 10:00:41 2017           0          0            0
>> socialdet     4  544258050         0  Wed Jun 14 10:01:02 2017           0          0            0
>> arc           5    1171073         0  Thu Jan  5 15:07:09 2017           0          0            0
>> arcadm        6    1171074         0  Thu Jan  5 15:07:10 2017           0          0            0
>> I believe these are dependent filesets, dependent on the root
>> fileset. Anyhow, a user wants to move a large amount of data from one
>> fileset to another. Would this be a metadata-only operation? He has
>> attempted to move a small amount of data and has noticed some thrashing.
